[
{
"msg_contents": "I decided to spend an afternoon seeing exactly how much work would be\nneeded to support parameterized TID scans, ie nestloop-with-inner-TID-\nscan joins, as has been speculated about before, most recently here:\n\nhttps://www.postgresql.org/message-id/flat/CAMqTPq%3DhNg0GYFU0X%2BxmuKy8R2ARk1%2BA_uQpS%2BMnf71MYpBKzg%40mail.gmail.com\n\nIt turns out it's not that bad, less than 200 net new lines of code\n(all of it in the planner; the executor seems to require no work).\n\nMuch of the code churn is because tidpath.c is so ancient and crufty.\nIt was mostly ignoring the RestrictInfo infrastructure, in particular\nemitting the list of tidquals as just bare clauses not RestrictInfos.\nI had to change that in order to avoid inefficiencies in some places.\n\nI haven't really looked at how much of a merge problem there'll be\nwith Edmund Horner's work for TID range scans. My feeling about it\nis that we might be best off treating that as a totally separate\ncode path, because the requirements are significantly different (for\ninstance, a range scan needs AND semantics not OR semantics for the\nlist of quals to apply).\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 Dec 2018 18:34:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Joins on TID"
},
{
"msg_contents": "BTW, if we're to start taking joins on TID seriously, we should also\nadd the missing hash opclass for TID, so that you can do hash joins\nwhen dealing with a lot of rows.\n\n(In principle this also enables things like hash aggregation, though\nI'm not very clear on a use-case for grouping by TID.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 Dec 2018 23:31:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "On Sat, 22 Dec 2018 at 04:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> BTW, if we're to start taking joins on TID seriously, we should also\n> add the missing hash opclass for TID, so that you can do hash joins\n> when dealing with a lot of rows.\n>\n> (In principle this also enables things like hash aggregation, though\n> I'm not very clear on a use-case for grouping by TID.)\n>\n\nI don't think we are trying to do TID joins more seriously, just fix a\nspecial case.\n\nThe case cited requires the batches of work to be small, so nested loops\nworks fine.\n\nLooks to me that Edmund is trying to solve the same problem. If so, this is\nthe best solution.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Sat, 22 Dec 2018 at 04:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:BTW, if we're to start taking joins on TID seriously, we should also\nadd the missing hash opclass for TID, so that you can do hash joins\nwhen dealing with a lot of rows.\n\n(In principle this also enables things like hash aggregation, though\nI'm not very clear on a use-case for grouping by TID.)I don't think we are trying to do TID joins more seriously, just fix a special case.The case cited requires the batches of work to be small, so nested loops works fine.Looks to me that Edmund is trying to solve the same problem. If so, this is the best solution.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 22 Dec 2018 08:18:35 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> On Sat, 22 Dec 2018 at 04:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, if we're to start taking joins on TID seriously, we should also\n>> add the missing hash opclass for TID, so that you can do hash joins\n>> when dealing with a lot of rows.\n\n> I don't think we are trying to do TID joins more seriously, just fix a\n> special case.\n> The case cited requires the batches of work to be small, so nested loops\n> works fine.\n> Looks to me that Edmund is trying to solve the same problem. If so, this is\n> the best solution.\n\nNo, I think what Edmund is on about is unrelated, except that it touches\nsome of the same code. He's interested in problems like \"find the last\nfew tuples in this table\". You can solve that today, with e.g.\n\"SELECT ... WHERE ctid >= '(n,1)'\", but you get a stupidly inefficient\nplan. If we think that's a use-case worth supporting then it'd be\nreasonable to provide less inefficient implementation(s).\n\nWhat I'm thinking about in this thread is joins on TID, which we have only\nvery weak support for today --- you'll basically always wind up with a\nmergejoin, which requires full-table scan and sort of its inputs. Still,\nthat's better than a naive nestloop, and for years we've been figuring\nthat that was good enough. Several people in the other thread that\nI cited felt that that isn't good enough. But if we think it's worth\ntaking seriously, then IMO we need to add both parameterized scans (for\nnestloop-with-inner-fetch-by-tid) and hash join, because each of those\ncan dominate depending on how many tuples you're joining.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 22 Dec 2018 11:31:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "On Sat, 22 Dec 2018 at 16:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> What I'm thinking about in this thread is joins on TID, which we have only\n> very weak support for today --- you'll basically always wind up with a\n> mergejoin, which requires full-table scan and sort of its inputs. Still,\n> that's better than a naive nestloop, and for years we've been figuring\n> that that was good enough. Several people in the other thread that\n> I cited felt that that isn't good enough. But if we think it's worth\n> taking seriously, then IMO we need to add both parameterized scans (for\n> nestloop-with-inner-fetch-by-tid) and hash join, because each of those\n> can dominate depending on how many tuples you're joining.\n>\n\nThat would certainly help if you are building a column store, or other new\nindex types.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Sat, 22 Dec 2018 at 16:31, Tom Lane <tgl@sss.pgh.pa.us> wrote: What I'm thinking about in this thread is joins on TID, which we have only\nvery weak support for today --- you'll basically always wind up with a\nmergejoin, which requires full-table scan and sort of its inputs. Still,\nthat's better than a naive nestloop, and for years we've been figuring\nthat that was good enough. Several people in the other thread that\nI cited felt that that isn't good enough. But if we think it's worth\ntaking seriously, then IMO we need to add both parameterized scans (for\nnestloop-with-inner-fetch-by-tid) and hash join, because each of those\ncan dominate depending on how many tuples you're joining.That would certainly help if you are building a column store, or other new index types. -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 22 Dec 2018 19:15:02 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "Hi,\n\nWriting as someone who used TID joins and group by's in the past.\n\nOne use case is having a chance to peek into what will DELETE do.\nA lot of GIS tables don't have any notion of ID, and dirty datasets tend to\nhave many duplicates you need to cross-reference with something else. So,\nyou write your query in form of\n\nCREATE TABLE ttt as (SELECT distinct on (ctid) ctid as ct, field1, field2,\nb.field3, ... from table b join othertable b on ST_Whatever(a.geom,\nb.geom));\n\n<connect to table with QGIS, poke around, maybe delete some rows you doubt\nyou want to remove>\n\nDELETE FROM table a USING ttt b where a.ctid = b.ct;\nDROP TABLE ttt;\n\nHere:\n - distinct on ctid is used (hash?)\n - a.ctid = b.ct (hash join candidate?)\n\nI know it's all better with proper IDs, but sometimes it works like that,\nusually just once per dataset.\n\n\nсб, 22 дек. 2018 г. в 19:31, Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > On Sat, 22 Dec 2018 at 04:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> BTW, if we're to start taking joins on TID seriously, we should also\n> >> add the missing hash opclass for TID, so that you can do hash joins\n> >> when dealing with a lot of rows.\n>\n> > I don't think we are trying to do TID joins more seriously, just fix a\n> > special case.\n> > The case cited requires the batches of work to be small, so nested loops\n> > works fine.\n> > Looks to me that Edmund is trying to solve the same problem. If so, this\n> is\n> > the best solution.\n>\n> No, I think what Edmund is on about is unrelated, except that it touches\n> some of the same code. He's interested in problems like \"find the last\n> few tuples in this table\". You can solve that today, with e.g.\n> \"SELECT ... WHERE ctid >= '(n,1)'\", but you get a stupidly inefficient\n> plan. 
If we think that's a use-case worth supporting then it'd be\n> reasonable to provide less inefficient implementation(s).\n>\n> What I'm thinking about in this thread is joins on TID, which we have only\n> very weak support for today --- you'll basically always wind up with a\n> mergejoin, which requires full-table scan and sort of its inputs. Still,\n> that's better than a naive nestloop, and for years we've been figuring\n> that that was good enough. Several people in the other thread that\n> I cited felt that that isn't good enough. But if we think it's worth\n> taking seriously, then IMO we need to add both parameterized scans (for\n> nestloop-with-inner-fetch-by-tid) and hash join, because each of those\n> can dominate depending on how many tuples you're joining.\n>\n> regards, tom lane\n>\n> --\nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi,Writing as someone who used TID joins and group by's in the past.One use case is having a chance to peek into what will DELETE do.A lot of GIS tables don't have any notion of ID, and dirty datasets tend to have many duplicates you need to cross-reference with something else. So, you write your query in form of CREATE TABLE ttt as (SELECT distinct on (ctid) ctid as ct, field1, field2, b.field3, ... from table b join othertable b on ST_Whatever(a.geom, b.geom));<connect to table with QGIS, poke around, maybe delete some rows you doubt you want to remove>DELETE FROM table a USING ttt b where a.ctid = b.ct;DROP TABLE ttt;Here: - distinct on ctid is used (hash?) - a.ctid = b.ct (hash join candidate?)I know it's all better with proper IDs, but sometimes it works like that, usually just once per dataset.сб, 22 дек. 2018 г. 
в 19:31, Tom Lane <tgl@sss.pgh.pa.us>:Simon Riggs <simon@2ndquadrant.com> writes:\n> On Sat, 22 Dec 2018 at 04:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, if we're to start taking joins on TID seriously, we should also\n>> add the missing hash opclass for TID, so that you can do hash joins\n>> when dealing with a lot of rows.\n\n> I don't think we are trying to do TID joins more seriously, just fix a\n> special case.\n> The case cited requires the batches of work to be small, so nested loops\n> works fine.\n> Looks to me that Edmund is trying to solve the same problem. If so, this is\n> the best solution.\n\nNo, I think what Edmund is on about is unrelated, except that it touches\nsome of the same code. He's interested in problems like \"find the last\nfew tuples in this table\". You can solve that today, with e.g.\n\"SELECT ... WHERE ctid >= '(n,1)'\", but you get a stupidly inefficient\nplan. If we think that's a use-case worth supporting then it'd be\nreasonable to provide less inefficient implementation(s).\n\nWhat I'm thinking about in this thread is joins on TID, which we have only\nvery weak support for today --- you'll basically always wind up with a\nmergejoin, which requires full-table scan and sort of its inputs. Still,\nthat's better than a naive nestloop, and for years we've been figuring\nthat that was good enough. Several people in the other thread that\nI cited felt that that isn't good enough. But if we think it's worth\ntaking seriously, then IMO we need to add both parameterized scans (for\nnestloop-with-inner-fetch-by-tid) and hash join, because each of those\ncan dominate depending on how many tuples you're joining.\n\n regards, tom lane\n\n-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 23 Dec 2018 19:23:33 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "On Sat, 22 Dec 2018 at 12:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I decided to spend an afternoon seeing exactly how much work would be\n> needed to support parameterized TID scans, ie nestloop-with-inner-TID-\n> scan joins, as has been speculated about before, most recently here:\n>\n> https://www.postgresql.org/message-id/flat/CAMqTPq%3DhNg0GYFU0X%2BxmuKy8R2ARk1%2BA_uQpS%2BMnf71MYpBKzg%40mail.gmail.com\n>\n> It turns out it's not that bad, less than 200 net new lines of code\n> (all of it in the planner; the executor seems to require no work).\n>\n> Much of the code churn is because tidpath.c is so ancient and crufty.\n> It was mostly ignoring the RestrictInfo infrastructure, in particular\n> emitting the list of tidquals as just bare clauses not RestrictInfos.\n> I had to change that in order to avoid inefficiencies in some places.\n\nIt seems good, and I can see you've committed it now. (I should have\ncommented sooner, but it's the big summer holiday period here, which\nmeans I have plenty of time to work on PostgreSQL, but none of my\nusual resources. In any case, I was going to say \"this looks useful\nand not too complicated, please go ahead\".)\n\nI did notice that multiple tidquals are no longer removed from scan_clauses:\n\nEXPLAIN SELECT * FROM pg_class WHERE ctid = '(1,1)' OR ctid = '(2,2)';\n\n Tid Scan on pg_class (cost=0.01..8.03 rows=2 width=265)\n TID Cond: ((ctid = '(1,1)'::tid) OR (ctid = '(2,2)'::tid))\n Filter: ((ctid = '(1,1)'::tid) OR (ctid = '(2,2)'::tid))\n\nI guess if we thought it was a big deal we could attempt to recreate\nthe old logic with RestrictInfos.\n\n> I haven't really looked at how much of a merge problem there'll be\n> with Edmund Horner's work for TID range scans. 
My feeling about it\n> is that we might be best off treating that as a totally separate\n> code path, because the requirements are significantly different (for\n> instance, a range scan needs AND semantics not OR semantics for the\n> list of quals to apply).\n\nWell, I guess it's up to me to merge it. I can't quite see which\nparts we'd use a separate code path for. Can you elaborate?\n\nEdmund\n\n",
"msg_date": "Tue, 1 Jan 2019 13:53:06 +1300",
"msg_from": "Edmund Horner <ejrh00@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Joins on TID"
},
{
"msg_contents": "Edmund Horner <ejrh00@gmail.com> writes:\n> On Sat, 22 Dec 2018 at 12:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I decided to spend an afternoon seeing exactly how much work would be\n>> needed to support parameterized TID scans, ie nestloop-with-inner-TID-\n>> scan joins, as has been speculated about before, most recently here:\n>> ...\n\n> It seems good, and I can see you've committed it now. (I should have\n> commented sooner, but it's the big summer holiday period here, which\n> means I have plenty of time to work on PostgreSQL, but none of my\n> usual resources. In any case, I was going to say \"this looks useful\n> and not too complicated, please go ahead\".)\n\nOK.\n\n> I did notice that multiple tidquals are no longer removed from scan_clauses:\n> EXPLAIN SELECT * FROM pg_class WHERE ctid = '(1,1)' OR ctid = '(2,2)';\n> Tid Scan on pg_class (cost=0.01..8.03 rows=2 width=265)\n> TID Cond: ((ctid = '(1,1)'::tid) OR (ctid = '(2,2)'::tid))\n> Filter: ((ctid = '(1,1)'::tid) OR (ctid = '(2,2)'::tid))\n\nI fixed that in the committed version, I believe. (I'd been\noveroptimistic about whether logic could be removed from\ncreate_tidscan_plan.)\n\n>> I haven't really looked at how much of a merge problem there'll be\n>> with Edmund Horner's work for TID range scans. My feeling about it\n>> is that we might be best off treating that as a totally separate\n>> code path, because the requirements are significantly different (for\n>> instance, a range scan needs AND semantics not OR semantics for the\n>> list of quals to apply).\n\n> Well, I guess it's up to me to merge it. I can't quite see which\n> parts we'd use a separate code path for. 
Can you elaborate?\n\nThe thing that's bothering me is something I hadn't really focused on\nbefore, but which looms large now that I've thought about it: the\nTID-quals list of a TidPath or TidScan has OR semantics, viz it can\ndirectly represent\n\n\tctid = this OR ctid = that OR ctid = the_other\n\nas a list of tideq OpExprs. But what you want for a range scan on\nTID is implicit-AND, because you might have either a one-sided\ncondition, say\n\n\tctid >= this\n\nor a range condition\n\n\tctid >= this AND ctid <= that\n\nI see that what you've done to make this sort-of work in the existing\npatch is to insist that a range scan have just one member at the OR-d list\nlevel and that has to be an AND'ed sublist, but TBH I think that's a mess;\nfor instance I wonder whether the code works correctly if faced with cases\nlike\n\n\tctid >= this OR ctid <= that\n\nI don't think it's at all practical to have tidpath.c dealing with both\ncases in one scan of the quals --- even if you can make it work, it'll be\nunreasonably complicated and hard to understand. I'd be inclined to just\nhave it thumb through the restrictinfo or joininfo list a second time,\nlooking for inequalities, and build a path for that case separately.\n\nI suspect that on the whole, you'd be better off treating the range-scan\ncase as completely separate, with a different Path type and different\nPlan type too (ie, separate executor support). Yes, this would involve\nsome duplication of support code, but I think the end result would be\na lot cleaner and easier to understand.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 01 Jan 2019 16:00:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Joins on TID"
}
]
[
{
"msg_contents": "After running a testing server out of storage, I tried to track down why it\nwas so hard to get it back up again. (Rather than what I usually do which\nis just throwing it away and making the test be smaller).\n\nI couldn't start a backend because it couldn't write the relcache init file.\n\nI found this comment, but it did not carry its sentiment to completion:\n\n /*\n * We used to consider this a fatal error, but we might as well\n * continue with backend startup ...\n */\n\nWith the attached patch applied, I could at least get a backend going so I\ncould drop some tables/indexes and free up space.\n\nI'm not enamoured with the implementation of passing a flag down\nto write_item, but it seemed better than making write_item return an error\ncode and then checking the return status in a dozen places. Maybe we could\nturn write_item into a macro, so the macro can implement the \"return\" from\nthe outer function directly?\n\nCheers,\n\nJeff",
"msg_date": "Sat, 22 Dec 2018 20:49:58 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make relcache init write errors not be fatal"
},
{
"msg_contents": "Hi,\n\nOn 2018-12-22 20:49:58 -0500, Jeff Janes wrote:\n> After running a testing server out of storage, I tried to track down why it\n> was so hard to get it back up again. (Rather than what I usually do which\n> is just throwing it away and making the test be smaller).\n> \n> I couldn't start a backend because it couldn't write the relcache init file.\n> \n> I found this comment, but it did not carry its sentiment to completion:\n> \n> /*\n> * We used to consider this a fatal error, but we might as well\n> * continue with backend startup ...\n> */\n> \n> With the attached patch applied, I could at least get a backend going so I\n> could drop some tables/indexes and free up space.\n\nWhy is this a good idea? It'll just cause hard to debug performance\nissues imo.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sat, 22 Dec 2018 17:54:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Make relcache init write errors not be fatal"
},
{
"msg_contents": "On Sat, Dec 22, 2018 at 8:54 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2018-12-22 20:49:58 -0500, Jeff Janes wrote:\n> > After running a testing server out of storage, I tried to track down why\n> it\n> > was so hard to get it back up again. (Rather than what I usually do\n> which\n> > is just throwing it away and making the test be smaller).\n> >\n> > I couldn't start a backend because it couldn't write the relcache init\n> file.\n> >\n> > I found this comment, but it did not carry its sentiment to completion:\n> >\n> > /*\n> > * We used to consider this a fatal error, but we might as well\n> > * continue with backend startup ...\n> > */\n> >\n> > With the attached patch applied, I could at least get a backend going so\n> I\n> > could drop some tables/indexes and free up space.\n>\n> Why is this a good idea? It'll just cause hard to debug performance\n> issues imo.\n>\n>\nYou get lots of WARNINGs, so it shouldn't be too hard to debug. And once\nyou drop a table or an index, the init will succeed and you wouldn't have\nthe performance issues at all anymore.\n\nThe alternative, barring finding extraneous data on the same partition that\ncan be removed, seems to be having indefinite downtime until you can locate\na larger hard drive and move everything to it, or using dangerous hacks.\n\nCheers,\n\nJeff\n\nOn Sat, Dec 22, 2018 at 8:54 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2018-12-22 20:49:58 -0500, Jeff Janes wrote:\n> After running a testing server out of storage, I tried to track down why it\n> was so hard to get it back up again. 
(Rather than what I usually do which\n> is just throwing it away and making the test be smaller).\n> \n> I couldn't start a backend because it couldn't write the relcache init file.\n> \n> I found this comment, but it did not carry its sentiment to completion:\n> \n> /*\n> * We used to consider this a fatal error, but we might as well\n> * continue with backend startup ...\n> */\n> \n> With the attached patch applied, I could at least get a backend going so I\n> could drop some tables/indexes and free up space.\n\nWhy is this a good idea? It'll just cause hard to debug performance\nissues imo.\nYou get lots of WARNINGs, so it shouldn't be too hard to debug. And once you drop a table or an index, the init will succeed and you wouldn't have the performance issues at all anymore.The alternative, barring finding extraneous data on the same partition that can be removed, seems to be having indefinite downtime until you can locate a larger hard drive and move everything to it, or using dangerous hacks.Cheers,Jeff",
"msg_date": "Sat, 22 Dec 2018 21:26:00 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relcache init write errors not be fatal"
}
]
[
{
"msg_contents": "src/bin/pg_upgrade/test.sh runs installcheck, which writes to files in\nsrc/test/regress. This has at least two disadvantages when check-world runs\nboth this test suite and the \"make check\" suite:\n\n1. The suite finishing second will overwrite the other's regression.{out,diffs}.\n2. If these suites run a given test file in parallel (possible with \"make -j\n check-world\"), they simultaneously edit a file in src/test/regress/results.\n This can cause reporting of spurious failures. On my system, the symptom\n is a regression.diffs indicating that the .out file contained ranges of NUL\n bytes (holes) and/or lacked expected lines.\n\nA disadvantage of any change here is that it degrades buildfarm reports, which\nrecover slowly as owners upgrade to a fixed buildfarm release. This will be\nsimilar to the introduction of --outputdir=output_iso. On non-upgraded\nanimals, pg_upgradeCheck failures will omit regression.diffs.\n\nI think the right fix, attached, is to use \"pg_regress --outputdir\" to\nredirect these files to src/bin/pg_upgrade/tmp_check/regress. I chose that\nparticular path because it will still fit naturally if we ever rewrite test.sh\nusing src/test/perl. I'm recommending that the buildfarm capture[1] files\nmatching src/bin/pg_upgrade/tmp_check/*/*.diffs, which will work even if we\nmake this test suite run installcheck more than once. This revealed a few\nplaces where tests assume @abs_builddir@ is getcwd(), which I fixed.\n\nThanks,\nnm\n\n[1] https://github.com/PGBuildFarm/client-code/blob/REL_9/PGBuild/Modules/TestUpgrade.pm#L126",
"msg_date": "Sun, 23 Dec 2018 19:44:11 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "\nOn 12/23/18 10:44 PM, Noah Misch wrote:\n> src/bin/pg_upgrade/test.sh runs installcheck, which writes to files in\n> src/test/regress. This has at least two disadvantages when check-world runs\n> both this test suite and the \"make check\" suite:\n>\n> 1. The suite finishing second will overwrite the other's regression.{out,diffs}.\n> 2. If these suites run a given test file in parallel (possible with \"make -j\n> check-world\"), they simultaneously edit a file in src/test/regress/results.\n> This can cause reporting of spurious failures. On my system, the symptom\n> is a regression.diffs indicating that the .out file contained ranges of NUL\n> bytes (holes) and/or lacked expected lines.\n>\n> A disadvantage of any change here is that it degrades buildfarm reports, which\n> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n> similar to the introduction of --outputdir=output_iso. On non-upgraded\n> animals, pg_upgradeCheck failures will omit regression.diffs.\n>\n> I think the right fix, attached, is to use \"pg_regress --outputdir\" to\n> redirect these files to src/bin/pg_upgrade/tmp_check/regress. I chose that\n> particular path because it will still fit naturally if we ever rewrite test.sh\n> using src/test/perl. I'm recommending that the buildfarm capture[1] files\n> matching src/bin/pg_upgrade/tmp_check/*/*.diffs, which will work even if we\n> make this test suite run installcheck more than once. This revealed a few\n> places where tests assume @abs_builddir@ is getcwd(), which I fixed.\n>\n> Thanks,\n> nm\n>\n> [1] https://github.com/PGBuildFarm/client-code/blob/REL_9/PGBuild/Modules/TestUpgrade.pm#L126\n\nSeems reasonable.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 24 Dec 2018 15:11:33 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 12/23/18 10:44 PM, Noah Misch wrote:\n>> A disadvantage of any change here is that it degrades buildfarm reports, which\n>> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n>> similar to the introduction of --outputdir=output_iso. On non-upgraded\n>> animals, pg_upgradeCheck failures will omit regression.diffs.\n>> \n>> I think the right fix, attached, is to use \"pg_regress --outputdir\" to\n>> redirect these files to src/bin/pg_upgrade/tmp_check/regress.\n\n> Seems reasonable.\n\nDo we need to change anything in the buildfarm client to improve its\nresponse to this? If so, seems like it might be advisable to make a\nbuildfarm release with the upgrade before committing the change.\nSure, not all owners will update right away, but if they don't even\nhave the option then we're not in a good place.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Dec 2018 17:02:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 05:02:37PM -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 12/23/18 10:44 PM, Noah Misch wrote:\n> >> A disadvantage of any change here is that it degrades buildfarm reports, which\n> >> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n> >> similar to the introduction of --outputdir=output_iso. On non-upgraded\n> >> animals, pg_upgradeCheck failures will omit regression.diffs.\n> >> \n> >> I think the right fix, attached, is to use \"pg_regress --outputdir\" to\n> >> redirect these files to src/bin/pg_upgrade/tmp_check/regress.\n> \n> > Seems reasonable.\n> \n> Do we need to change anything in the buildfarm client to improve its\n> response to this? If so, seems like it might be advisable to make a\n> buildfarm release with the upgrade before committing the change.\n> Sure, not all owners will update right away, but if they don't even\n> have the option then we're not in a good place.\n\nIt would have been convenient if, for each test target, PostgreSQL code\ndecides the list of interesting log files and presents that list for the\nbuildfarm client to consume. It's probably overkill to redesign that now,\nthough. I also don't think it's of top importance to have unbroken access to\nthis regression.diffs, because defects that cause this run to fail will\neventually upset \"install-check-C\" and/or \"check\". Even so, it's fine to\npatch the buildfarm client in advance of the postgresql.git change:\n\ndiff --git a/PGBuild/Modules/TestUpgrade.pm b/PGBuild/Modules/TestUpgrade.pm\nindex 19b48b3..dfff17f 100644\n--- a/PGBuild/Modules/TestUpgrade.pm\n+++ b/PGBuild/Modules/TestUpgrade.pm\n@@ -117,11 +117,16 @@ sub check\n @checklog = run_log($cmd);\n }\n \n+ # Pre-2019 runs could create src/test/regress/regression.diffs. Its\n+ # inclusion is a harmless no-op for later runs; if another stage\n+ # (e.g. 
make_check()) failed and created that file, the run ends before\n+ # reaching this stage.\n my @logfiles = glob(\n \"$self->{pgsql}/contrib/pg_upgrade/*.log\n $self->{pgsql}/contrib/pg_upgrade/log/*\n $self->{pgsql}/src/bin/pg_upgrade/*.log\n $self->{pgsql}/src/bin/pg_upgrade/log/*\n+ $self->{pgsql}/src/bin/pg_upgrade/tmp_check/*/*.diffs\n $self->{pgsql}/src/test/regress/*.diffs\"\n );\n foreach my $log (@logfiles)\n\n",
"msg_date": "Wed, 26 Dec 2018 14:44:28 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "\nOn 12/26/18 5:44 PM, Noah Misch wrote:\n> On Wed, Dec 26, 2018 at 05:02:37PM -0500, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> On 12/23/18 10:44 PM, Noah Misch wrote:\n>>>> A disadvantage of any change here is that it degrades buildfarm reports, which\n>>>> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n>>>> similar to the introduction of --outputdir=output_iso. On non-upgraded\n>>>> animals, pg_upgradeCheck failures will omit regression.diffs.\n>>>>\n>>>> I think the right fix, attached, is to use \"pg_regress --outputdir\" to\n>>>> redirect these files to src/bin/pg_upgrade/tmp_check/regress.\n>>> Seems reasonable.\n>> Do we need to change anything in the buildfarm client to improve its\n>> response to this? If so, seems like it might be advisable to make a\n>> buildfarm release with the upgrade before committing the change.\n>> Sure, not all owners will update right away, but if they don't even\n>> have the option then we're not in a good place.\n> It would have been convenient if, for each test target, PostgreSQL code\n> decides the list of interesting log files and presents that list for the\n> buildfarm client to consume. It's probably overkill to redesign that now,\n> though. I also don't think it's of top importance to have unbroken access to\n> this regression.diffs, because defects that cause this run to fail will\n> eventually upset \"install-check-C\" and/or \"check\". Even so, it's fine to\n> patch the buildfarm client in advance of the postgresql.git change:\n>\n> diff --git a/PGBuild/Modules/TestUpgrade.pm b/PGBuild/Modules/TestUpgrade.pm\n> index 19b48b3..dfff17f 100644\n> --- a/PGBuild/Modules/TestUpgrade.pm\n> +++ b/PGBuild/Modules/TestUpgrade.pm\n> @@ -117,11 +117,16 @@ sub check\n> @checklog = run_log($cmd);\n> }\n> \n> + # Pre-2019 runs could create src/test/regress/regression.diffs. 
Its\n> + # inclusion is a harmless no-op for later runs; if another stage\n> + # (e.g. make_check()) failed and created that file, the run ends before\n> + # reaching this stage.\n> my @logfiles = glob(\n> \"$self->{pgsql}/contrib/pg_upgrade/*.log\n> $self->{pgsql}/contrib/pg_upgrade/log/*\n> $self->{pgsql}/src/bin/pg_upgrade/*.log\n> $self->{pgsql}/src/bin/pg_upgrade/log/*\n> + $self->{pgsql}/src/bin/pg_upgrade/tmp_check/*/*.diffs\n> $self->{pgsql}/src/test/regress/*.diffs\"\n> );\n> foreach my $log (@logfiles)\n\n\nI'll commit this or something similar, but I generally try not to make\nnew releases more frequently than once every 3 months, and it's only six\nweeks since the last release. So unless there's a very good reason I am\nnot planning on a release before February.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Dec 2018 10:41:46 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 10:41:46AM -0500, Andrew Dunstan wrote:\n> On 12/26/18 5:44 PM, Noah Misch wrote:\n> > On Wed, Dec 26, 2018 at 05:02:37PM -0500, Tom Lane wrote:\n> >> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> >>> On 12/23/18 10:44 PM, Noah Misch wrote:\n> >>>> A disadvantage of any change here is that it degrades buildfarm reports, which\n> >>>> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n> >>>> similar to the introduction of --outputdir=output_iso. On non-upgraded\n> >>>> animals, pg_upgradeCheck failures will omit regression.diffs.\n\n> >> Do we need to change anything in the buildfarm client to improve its\n> >> response to this? If so, seems like it might be advisable to make a\n> >> buildfarm release with the upgrade before committing the change.\n> >> Sure, not all owners will update right away, but if they don't even\n> >> have the option then we're not in a good place.\n> >\n> > It would have been convenient if, for each test target, PostgreSQL code\n> > decides the list of interesting log files and presents that list for the\n> > buildfarm client to consume. It's probably overkill to redesign that now,\n> > though. I also don't think it's of top importance to have unbroken access to\n> > this regression.diffs, because defects that cause this run to fail will\n> > eventually upset \"install-check-C\" and/or \"check\". Even so, it's fine to\n> > patch the buildfarm client in advance of the postgresql.git change:\n> >\n> > diff --git a/PGBuild/Modules/TestUpgrade.pm b/PGBuild/Modules/TestUpgrade.pm\n\n> I'll commit this or something similar, but I generally try not to make\n> new releases more frequently than once every 3 months, and it's only six\n> weeks since the last release. So unless there's a very good reason I am\n> not planning on a release before February.\n\nThere's no rush; I don't recall other reports of the spurious failure\ndescribed in the original post. 
I'll plan to push the postgresql.git change\naround 2019-03-31, so animals updating within a month of release will have no\ndegraded pg_upgradeCheck failure reports.\n\n",
"msg_date": "Sun, 30 Dec 2018 11:28:56 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 11:28:56AM -0500, Noah Misch wrote:\n> On Sun, Dec 30, 2018 at 10:41:46AM -0500, Andrew Dunstan wrote:\n> > On 12/26/18 5:44 PM, Noah Misch wrote:\n> > > On Wed, Dec 26, 2018 at 05:02:37PM -0500, Tom Lane wrote:\n> > >> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > >>> On 12/23/18 10:44 PM, Noah Misch wrote:\n> > >>>> A disadvantage of any change here is that it degrades buildfarm reports, which\n> > >>>> recover slowly as owners upgrade to a fixed buildfarm release. This will be\n> > >>>> similar to the introduction of --outputdir=output_iso. On non-upgraded\n> > >>>> animals, pg_upgradeCheck failures will omit regression.diffs.\n> \n> > >> Do we need to change anything in the buildfarm client to improve its\n> > >> response to this? If so, seems like it might be advisable to make a\n> > >> buildfarm release with the upgrade before committing the change.\n> > >> Sure, not all owners will update right away, but if they don't even\n> > >> have the option then we're not in a good place.\n> > >\n> > > It would have been convenient if, for each test target, PostgreSQL code\n> > > decides the list of interesting log files and presents that list for the\n> > > buildfarm client to consume. It's probably overkill to redesign that now,\n> > > though. I also don't think it's of top importance to have unbroken access to\n> > > this regression.diffs, because defects that cause this run to fail will\n> > > eventually upset \"install-check-C\" and/or \"check\". Even so, it's fine to\n> > > patch the buildfarm client in advance of the postgresql.git change:\n> > >\n> > > diff --git a/PGBuild/Modules/TestUpgrade.pm b/PGBuild/Modules/TestUpgrade.pm\n> \n> > I'll commit this or something similar, but I generally try not to make\n> > new releases more frequently than once every 3 months, and it's only six\n> > weeks since the last release. 
So unless there's a very good reason I am\n> > not planning on a release before February.\n> \n> There's no rush; I don't recall other reports of the spurious failure\n> described in the original post. I'll plan to push the postgresql.git change\n> around 2019-03-31, so animals updating within a month of release will have no\n> degraded pg_upgradeCheck failure reports.\n\nThe buildfarm release landed 2019-04-04, so I pushed $SUBJECT today, in commit\nbd1592e. The buildfarm was unanimous against it, for two reasons. First, the\npatch was incompatible with NO_TEMP_INSTALL=1, which the buildfarm uses. In a\nnormal \"make -C src/bin/pg_upgrade check\", the act of creating the temporary\ninstallation also creates \"tmp_check\". With NO_TEMP_INSTALL=1, it's instead\nthe initdb that creates \"tmp_check\". I plan to fix that by removing and\ncreating \"tmp_check\" early. This fixes another longstanding bug; a rerun of\n\"vcregress upgradecheck\" would fail with 'directory \"[...]/tmp_check/data\"\nexists but is not empty'. It's also more consistent with $(prove_check),\neliminates the possibility that a file in \"tmp_check\" survives from an earlier\nrun, and ends NO_TEMP_INSTALL=1 changing the \"tmp_check\" creation umask.\n\nSecond, I broke \"vcregress installcheck\" by writing \"funcname $arg\" where\nfuncname was declared later in the file. Neither the function invocation\nstyle nor the function declaration order were in line with that file's style,\nso I'm changing both.",
"msg_date": "Sun, 19 May 2019 18:24:36 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Move regression.diffs of pg_upgrade test suite"
}
]
[
{
"msg_contents": "Hi\n\nAttached is mainly to fix a comment in $subject which has a typo in the referenced initdb\noption (\"--walsegsize\", should be \"--wal-segsize\"), and while I'm there also adds a\ncouple of \"the\" for readability.\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 24 Dec 2018 13:05:25 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On Mon, Dec 24, 2018 at 01:05:25PM +0900, Ian Barwick wrote:\n> Attached is mainly to fix a comment in $subject which has a typo in\n> the referenced initdb option (\"--walsegsize\", should be\n> \"--wal-segsize\"), and while I'm there also adds a couple of \"the\"\n> for readability.\n\nAll that (the error as well as the extra \"the\" for clarity in this\nsentence) seems right to me. Any opinions from others?\n--\nMichael",
"msg_date": "Mon, 24 Dec 2018 17:57:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Dec 24, 2018 at 01:05:25PM +0900, Ian Barwick wrote:\n>> Attached is mainly to fix a comment in $subject which has a typo in\n>> the referenced initdb option (\"--walsegsize\", should be\n>> \"--wal-segsize\"), and while I'm there also adds a couple of \"the\"\n>> for readability.\n\n> All that (the error as well as the extra \"the\" for clarity in this\n> sentence) seems right to me. Any opinions from others?\n\nThe text still seems a bit awkward. Maybe \"... to be used when initdb\nis run without the ...\"\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 25 Dec 2018 10:22:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On Tue, Dec 25, 2018 at 10:22:30AM -0500, Tom Lane wrote:\n> The text still seems a bit awkward. Maybe \"... to be used when initdb\n> is run without the ...\"\n\nlike the attached perhaps? At the same time I am thinking about\nreformulating the second sentence as well..\n--\nMichael",
"msg_date": "Wed, 26 Dec 2018 09:36:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 09:36:57AM +0900, Michael Paquier wrote:\n> like the attached perhaps? At the same time I am thinking about\n> reformulating the second sentence as well..\n>\n> /*\n> - * This is default value for wal_segment_size to be used at initdb when run\n> - * without --walsegsize option. Must be a valid segment size.\n> + * This is the default value for wal_segment_size to be used when initdb is run\n> + * without the --wal-segsize option. It must be a valid segment size.\n> */\n> #define DEFAULT_XLOG_SEG_SIZE\t(16*1024*1024)\n\nSo, any objections with this change? If somebody has a better\nwording, please feel free to chime in.\n--\nMichael",
"msg_date": "Fri, 28 Dec 2018 10:41:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On 2018-Dec-28, Michael Paquier wrote:\n\n> On Wed, Dec 26, 2018 at 09:36:57AM +0900, Michael Paquier wrote:\n> > like the attached perhaps? At the same time I am thinking about\n> > reformulating the second sentence as well..\n> >\n> > /*\n> > - * This is default value for wal_segment_size to be used at initdb when run\n> > - * without --walsegsize option. Must be a valid segment size.\n> > + * This is the default value for wal_segment_size to be used when initdb is run\n> > + * without the --wal-segsize option. It must be a valid segment size.\n> > */\n> > #define DEFAULT_XLOG_SEG_SIZE\t(16*1024*1024)\n> \n> So, any objections with this change? If somebody has a better\n> wording, please feel free to chime in.\n\nLooks good to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 28 Dec 2018 00:37:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On Fri, Dec 28, 2018 at 12:37:41AM -0300, Alvaro Herrera wrote:\n> Looks good to me.\n\nThanks for the lookup. I have committed and back-patched to v11 for\nconsistency.\n--\nMichael",
"msg_date": "Sat, 29 Dec 2018 08:27:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
},
{
"msg_contents": "On 12/29/18 8:27 AM, Michael Paquier wrote:\n> On Fri, Dec 28, 2018 at 12:37:41AM -0300, Alvaro Herrera wrote:\n>> Looks good to me.\n> \n> Thanks for the lookup. I have committed and back-patched to v11 for\n> consistency.\n\nThanks!\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 31 Dec 2018 09:50:29 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor comment fix for pg_config_manual.h"
}
]
[
{
"msg_contents": "Hi\n\nOn these pages:\n\n - https://www.postgresql.org/docs/current/functions-array.html\n - https://www.postgresql.org/docs/current/functions-string.html\n\nwe point out via \"See also\" the existence of aggregate array and string\nfunctions, but I think it would be useful to also mention the existence\nof string-related array functions and array-related string (regexp) functions\nrespectively.\n\n(Background: due to brain fade I was looking on the array functions page\nfor the array-related function whose name was escaping me which does something\nwith regexes to make an array, and was puzzled to find no reference on that page).\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 24 Dec 2018 15:28:17 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Small doc tweak for array/string functions"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> On these pages:\n> - https://www.postgresql.org/docs/current/functions-array.html\n> - https://www.postgresql.org/docs/current/functions-string.html\n> we point out via \"See also\" the existence of aggregate array and string\n> functions, but I think it would be useful to also mention the existence\n> of string-related array functions and array-related string (regexp) functions\n> respectively.\n\nHmm. The existing cross-references there feel a bit ad-hoc to me already,\nand the proposed additions even more so. Surely we don't want to conclude\nthat every function that takes or returns an array needs to be cited on\nthe functions-array page; and that idea would be even sillier if applied\nto strings. How can we define a less spur-of-the-moment approach to\ndeciding what to list?\n\nThe patch as shown might be just fine, but I'd like to have some rationale\nfor which things we're listing or not listing.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Dec 2018 16:55:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small doc tweak for array/string functions"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nI would like to propose a change, which allow CLUSTER, VACUUM FULL and \nREINDEX to modify relation tablespace on the fly. Actually, all these \ncommands rebuild relation filenodes from the scratch, thus it seems \nnatural to allow specifying them a new location. It may be helpful, when \na server went out of disk, so you can attach new partition and perform \ne.g. VACUUM FULL, which will free some space and move data to a new \nlocation at the same time. Otherwise, you cannot complete VACUUM FULL \nuntil you have up to x2 relation disk space on a single partition.\n\nPlease, find attached a patch, which extend CLUSTER, VACUUM FULL and \nREINDEX with additional options:\n\nREINDEX [ ( VERBOSE ) ] { INDEX | TABLE } name [ SET TABLESPACE \nnew_tablespace ]\n\nCLUSTER [VERBOSE] table_name [ USING index_name ] [ SET TABLESPACE \nnew_tablespace ]\nCLUSTER [VERBOSE] [ SET TABLESPACE new_tablespace ]\n\nVACUUM ( FULL [, ...] ) [ SET TABLESPACE new_tablespace ] [ \ntable_and_columns [, ...] ]\nVACUUM FULL [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ SET TABLESPACE \nnew_tablespace ] [ table_and_columns [, ...] ]\n\nThereby I have a few questions:\n\n1) What do you think about this concept in general?\n\n2) Is SET TABLESPACE an appropriate syntax for this functionality? I \nthought also about a plain TABLESPACE keyword, but it seems to be \nmisleading, and WITH (options) clause like in CREATE SUBSCRIPTION ... \nWITH (options). So I preferred SET TABLESPACE, since the same syntax is \nused currently in ALTER to change tablespace, but maybe someone will \nhave a better idea.\n\n3) I was not able to update the lexer for VACUUM FULL to use SET \nTABLESPACE after table_and_columns and completely get rid of \nshift/reduce conflicts. I guess it happens, since table_and_columns is \noptional and may be of variable length, but have no idea how to deal \nwith it. 
Any thoughts?\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professionalhttps://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 24 Dec 2018 14:08:43 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the\n fly"
},
{
"msg_contents": "On Mon, Dec 24, 2018 at 6:08 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> I would like to propose a change, which allow CLUSTER, VACUUM FULL and\n> REINDEX to modify relation tablespace on the fly.\n\nALTER TABLE already has a lot of logic that is oriented towards being\nable to do multiple things at the same time. If we added CLUSTER,\nVACUUM FULL, and REINDEX to that set, then you could, say, change a\ndata type, cluster, and change tablespaces all in a single SQL\ncommand.\n\nThat would be cool, but probably a lot of work. :-(\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 26 Dec 2018 13:09:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2018-Dec-26, Robert Haas wrote:\n\n> On Mon, Dec 24, 2018 at 6:08 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n> > I would like to propose a change, which allow CLUSTER, VACUUM FULL and\n> > REINDEX to modify relation tablespace on the fly.\n> \n> ALTER TABLE already has a lot of logic that is oriented towards being\n> able to do multiple things at the same time. If we added CLUSTER,\n> VACUUM FULL, and REINDEX to that set, then you could, say, change a\n> data type, cluster, and change tablespaces all in a single SQL\n> command.\n\nThat's a great observation.\n\n> That would be cool, but probably a lot of work. :-(\n\nBut is it? ALTER TABLE is already doing one kind of table rewrite\nduring phase 3, and CLUSTER is just a different kind of table rewrite\n(which happens to REINDEX), and VACUUM FULL is just a special case of\nCLUSTER. Maybe what we need is an ALTER TABLE variant that executes\nCLUSTER's table rewrite during phase 3 instead of its ad-hoc table\nrewrite.\n\nAs for REINDEX, I think it's valuable to move tablespace together with\nthe reindexing. You can already do it with the CREATE INDEX\nCONCURRENTLY recipe we recommend, of course; but REINDEX CONCURRENTLY is\nnot going to provide that, and it seems worth doing.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 26 Dec 2018 15:19:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 03:19:06PM -0300, Alvaro Herrera wrote:\n> As for REINDEX, I think it's valuable to move tablespace together with\n> the reindexing. You can already do it with the CREATE INDEX\n> CONCURRENTLY recipe we recommend, of course; but REINDEX CONCURRENTLY is\n> not going to provide that, and it seems worth doing.\n\nEven for plain REINDEX that seems useful.\n--\nMichael",
"msg_date": "Thu, 27 Dec 2018 10:57:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi,\n\nThank you all for replies.\n\n>> ALTER TABLE already has a lot of logic that is oriented towards being\n>> able to do multiple things at the same time. If we added CLUSTER,\n>> VACUUM FULL, and REINDEX to that set, then you could, say, change a\n>> data type, cluster, and change tablespaces all in a single SQL\n>> command.\n> That's a great observation.\n\nIndeed, I thought that ALTER TABLE executes all actions sequentially one \nby one, e.g. in the case of\n\nALTER TABLE test_int CLUSTER ON test_int_idx, SET TABLESPACE test_tblspc;\n\nit executes CLUSTER and THEN executes SET TABLESPACE. However, if I get \nit right, ALTER TABLE is rather smart, so in such a case it follows the \nsteps:\n\n1) Only saves new tablespace Oid during prepare phase 1 without actual work;\n\n2) Only executes mark_index_clustered during phase 2, again without \nactual work done;\n\n3) And finally rewrites relation during phase 3, where CLUSTER and SET \nTABLESPACE are effectively performed.\n\n>> That would be cool, but probably a lot of work. :-(\n> But is it? ALTER TABLE is already doing one kind of table rewrite\n> during phase 3, and CLUSTER is just a different kind of table rewrite\n> (which happens to REINDEX), and VACUUM FULL is just a special case of\n> CLUSTER. Maybe what we need is an ALTER TABLE variant that executes\n> CLUSTER's table rewrite during phase 3 instead of its ad-hoc table\n> rewrite.\n\nAccording to the ALTER TABLE example above, it is already exist for CLUSTER.\n\n> As for REINDEX, I think it's valuable to move tablespace together with\n> the reindexing. You can already do it with the CREATE INDEX\n> CONCURRENTLY recipe we recommend, of course; but REINDEX CONCURRENTLY is\n> not going to provide that, and it seems worth doing.\n\nMaybe I am missing something, but according to the docs REINDEX \nCONCURRENTLY does not exist yet, DROP then CREATE CONCURRENTLY is \nsuggested instead. 
Thus, we have to add REINDEX CONCURRENTLY first, but \nit is a matter of different patch, I guess.\n\n>> Even for plain REINDEX that seems useful.\n>> --\n>> Michael\n\nTo summarize:\n\n1) Alvaro and Michael agreed, that REINDEX with tablespace move may be \nuseful. This is done in the patch attached to my initial email. Adding \nREINDEX to ALTER TABLE as new action seems quite questionable for me and \nnot completely semantically correct. ALTER already looks bulky.\n\n2) If I am correct, 'ALTER TABLE ... CLUSTER ON ..., SET TABLESPACE ...' \ndoes exactly what I wanted to add to CLUSTER in my patch. So probably no \nwork is necessary here.\n\n3) VACUUM FULL. It seems, that we can add special case 'ALTER TABLE ... \nVACUUM FULL, SET TABLESPACE ...', which will follow relatively the same \npath as with CLUSTER ON, but without any specific index. Relation should \nbe rewritten in the new tablespace during phase 3.\n\nWhat do you think?\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 27 Dec 2018 15:06:54 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2018-Dec-27, Alexey Kondratov wrote:\n\n> To summarize:\n> \n> 1) Alvaro and Michael agreed, that REINDEX with tablespace move may be\n> useful. This is done in the patch attached to my initial email. Adding\n> REINDEX to ALTER TABLE as new action seems quite questionable for me and not\n> completely semantically correct. ALTER already looks bulky.\n\nAgreed on these points.\n\n> 2) If I am correct, 'ALTER TABLE ... CLUSTER ON ..., SET TABLESPACE ...'\n> does exactly what I wanted to add to CLUSTER in my patch. So probably no\n> work is necessary here.\n\nWell, ALTER TABLE CLUSTER ON does not really cluster the table; it only\nindicates which index to cluster on, for the next time you run\nstandalone CLUSTER. I think it would be valuable to have those ALTER\nTABLE variants that rewrite the table do so using the cluster order, if\nthere is one, instead of the heap order, which is what it does today.\n\n> 3) VACUUM FULL. It seems, that we can add special case 'ALTER TABLE ...\n> VACUUM FULL, SET TABLESPACE ...', which will follow relatively the same path\n> as with CLUSTER ON, but without any specific index. Relation should be\n> rewritten in the new tablespace during phase 3.\n\nWell, VACUUM FULL is just a table rewrite using the CLUSTER code that\ndoesn't cluster on any index: it just uses the heap order. So in\nessence it's the same as a table-rewriting ALTER TABLE. In other words,\nif you get the index-ordered table rewriting in ALTER TABLE, I don't\nthink this part adds anything useful; and it seems very confusing.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 27 Dec 2018 10:24:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Dec 27, 2018 at 10:24 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2018-Dec-27, Alexey Kondratov wrote:\n>\n> > To summarize:\n> >\n> > 1) Alvaro and Michael agreed, that REINDEX with tablespace move may be\n> > useful. This is done in the patch attached to my initial email. Adding\n> > REINDEX to ALTER TABLE as new action seems quite questionable for me and not\n> > completely semantically correct. ALTER already looks bulky.\n>\n> Agreed on these points.\n\nAs an alternative idea, I think we can have a new ALTER INDEX variants\nthat rebuilds the index while moving tablespace, something like ALTER\nINDEX ... REBUILD SET TABLESPACE ....\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Fri, 28 Dec 2018 17:31:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Fri, Dec 28, 2018 at 11:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Dec 27, 2018 at 10:24 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2018-Dec-27, Alexey Kondratov wrote:\n> >\n> > > To summarize:\n> > >\n> > > 1) Alvaro and Michael agreed, that REINDEX with tablespace move may be\n> > > useful. This is done in the patch attached to my initial email. Adding\n> > > REINDEX to ALTER TABLE as new action seems quite questionable for me and not\n> > > completely semantically correct. ALTER already looks bulky.\n> >\n> > Agreed on these points.\n>\n> As an alternative idea, I think we can have a new ALTER INDEX variants\n> that rebuilds the index while moving tablespace, something like ALTER\n> INDEX ... REBUILD SET TABLESPACE ....\n\n+1\n\nIt seems the easiest way to have feature-full commands. If we put\nfunctionality of CLUSTER and VACUUM FULL to ALTER TABLE, and put\nfunctionality of REINDEX to ALTER INDEX, then CLUSTER, VACUUM FULL and\nREINDEX would be just syntax sugar.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 21:27:58 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi hackers,\n\nOn 2018-12-27 04:57, Michael Paquier wrote:\n> On Wed, Dec 26, 2018 at 03:19:06PM -0300, Alvaro Herrera wrote:\n>> As for REINDEX, I think it's valuable to move tablespace together with\n>> the reindexing. You can already do it with the CREATE INDEX\n>> CONCURRENTLY recipe we recommend, of course; but REINDEX CONCURRENTLY \n>> is\n>> not going to provide that, and it seems worth doing.\n> \n> Even for plain REINDEX that seems useful.\n> \n\nI've rebased the patch and put it on the closest commitfest. It is \nupdated to allow user to do REINDEX CONCURRENTLY + SET TABLESPACE \naltogether, since plain REINDEX CONCURRENTLY became available this year.\n\nOn 2019-06-07 21:27, Alexander Korotkov wrote:\n> On Fri, Dec 28, 2018 at 11:32 AM Masahiko Sawada \n> <sawada.mshk@gmail.com> wrote:\n>> On Thu, Dec 27, 2018 at 10:24 PM Alvaro Herrera\n>> <alvherre@2ndquadrant.com> wrote:\n>> >\n>> > On 2018-Dec-27, Alexey Kondratov wrote:\n>> >\n>> > > To summarize:\n>> > >\n>> > > 1) Alvaro and Michael agreed, that REINDEX with tablespace move may be\n>> > > useful. This is done in the patch attached to my initial email. Adding\n>> > > REINDEX to ALTER TABLE as new action seems quite questionable for me and not\n>> > > completely semantically correct. ALTER already looks bulky.\n>> >\n>> > Agreed on these points.\n>> \n>> As an alternative idea, I think we can have a new ALTER INDEX variants\n>> that rebuilds the index while moving tablespace, something like ALTER\n>> INDEX ... REBUILD SET TABLESPACE ....\n> \n> +1\n> \n> It seems the easiest way to have feature-full commands. 
If we put\n> functionality of CLUSTER and VACUUM FULL to ALTER TABLE, and put\n> functionality of REINDEX to ALTER INDEX, then CLUSTER, VACUUM FULL and\n> REINDEX would be just syntax sugar.\n> \n\nI definitely bought into the idea of 'change a data type, cluster, and \nchange tablespace all in a single SQL command', but stuck with some \narchitectural questions, when it got to the code.\n\nCurrently, the only one kind of table rewrite is done by ALTER TABLE. It \nis preformed by simply reading tuples one by one via \ntable_scan_getnextslot and inserting into the new table via tuple_insert \ntable access method (AM). In the same time, CLUSTER table rewrite is \nimplemented as a separated table AM relation_copy_for_cluster, which is \nactually a direct link to the heap AM heapam_relation_copy_for_cluster. \nBasically speaking, CLUSTER table rewrite happens 2 abstraction layers \nlower than ALTER TABLE one. Furthermore, CLUSTER seems to be a \nheap-specific AM and may be meaningless for some other storages, which \nis even more important because of coming pluggable storages, isn't it?\n\nMaybe I overly complicate the problem, but to perform a data type change \n(or any other ALTER TABLE modification), cluster, and change tablespace \nin a row we have to bring all this high-level stuff done by ALTER TABLE \nto heapam_relation_copy_for_cluster. But is it even possible without \nleaking abstractions?\n\nI'm working toward adding REINDEX to ALTER INDEX, so it was possible to \ndo 'ALTER INDEX ... REINDEX CONCURRENTLY SET TABLESPACE ...', but ALTER \nTABLE + CLUSTER/VACUUM FULL is quite questionable for me now.\n\nAnyway, new patch, which adds SET TABLESPACE to REINDEX is attached and \nthis functionality seems really useful, so I will be very appreciate if \nsomeone will take a look on it.\n\n\nRegards\n--\nAlexey Kondratov\nPostgres Professional https://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 31 Aug 2019 23:54:18 +0300",
"msg_from": "a.kondratov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi Alexey\nHere are a few comment\nOn Sat, Aug 31, 2019 at 11:54 PM <a.kondratov@postgrespro.ru> wrote:\n\n> Hi hackers,\n>\n>\n> Anyway, new patch, which adds SET TABLESPACE to REINDEX is attached and\n> this functionality seems really useful, so I will be very appreciate if\n> someone will take a look on it.\n>\n\n* There are NOWAIT option in alter index, is there a reason not to have\nsimilar option here?\n* SET TABLESPACE command is not documented\n* There are multiple checking for whether the relation is temporary tables\nof other sessions, one in check_relation_is_movable and other independently\n\n*+ char *tablespacename;\n\ncalling it new_tablespacename will make it consistent with other places\n\n*The patch did't applied cleanly http://cfbot.cputube.org/patch_24_2269.log\n\nregards\n\nSurafel\n\nHi AlexeyHere are a few commentOn Sat, Aug 31, 2019 at 11:54 PM <a.kondratov@postgrespro.ru> wrote:Hi hackers,\n\nAnyway, new patch, which adds SET TABLESPACE to REINDEX is attached and \nthis functionality seems really useful, so I will be very appreciate if \nsomeone will take a look on it.*\n\n\n\t\n\tThere are NOWAIT option in\nalter index, is there a reason not to have similar option here?*\n\n\n\t\n\tSET TABLESPACE command is not\ndocumented\n* There are multiple checking\nfor whether the relation is temporary tables of other sessions, one\nin check_relation_is_movable and other independently\n\n*+\tchar\t *tablespacename;\ncalling it\nnew_tablespacename will make it consistent with other places*The patch did't applied cleanly http://cfbot.cputube.org/patch_24_2269.logregards Surafel",
"msg_date": "Tue, 17 Sep 2019 14:04:37 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
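The NOWAIT precedent mentioned in the first review point is the bulk ALTER INDEX form; a sketch of how the two commands would line up (the REINDEX variant is the syntax under review in this thread, not a committed feature; object and tablespace names are illustrative):

```sql
-- Existing: the bulk form of ALTER INDEX already accepts NOWAIT and
-- errors out immediately if a lock cannot be acquired.
ALTER INDEX ALL IN TABLESPACE old_tblspc SET TABLESPACE new_tblspc NOWAIT;

-- Proposed here: the same modifier on the tablespace-changing REINDEX.
REINDEX INDEX foo_bar_idx SET TABLESPACE new_tblspc NOWAIT;
```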
{
"msg_contents": "Hi Surafel,\n\nThank you for looking at the patch!\n\nOn 17.09.2019 14:04, Surafel Temesgen wrote:\n> * There are NOWAIT option in alter index, is there a reason not to \n> have similar option here?\n\nCurrently in Postgres SET TABLESPACE always comes with [ NOWAIT ] \noption, so I hope it worth adding this option here for convenience. \nAdded in the new version.\n\n> * SET TABLESPACE command is not documented\n\nActually, new_tablespace parameter was documented, but I've added a more \ndetailed section for SET TABLESPACE too.\n\n> * There are multiple checking for whether the relation is temporary \n> tables of other sessions, one in check_relation_is_movable and other \n> independently\n\nYes, and there is a comment section in the code describing why. There is \na repeatable bunch of checks for verification whether relation movable \nor not, so I put it into a separated function -- \ncheck_relation_is_movable. However, if we want to do only REINDEX, then \nsome of them are excess, so the only one RELATION_IS_OTHER_TEMP is used. \nThus, RELATION_IS_OTHER_TEMP is never executed twice, just different \ncode paths.\n\n> *+ char *tablespacename;\n>\n> calling it new_tablespacename will make it consistent with other places\n>\n\nOK, changed, although I don't think it is important, since this is the \nonly one tablespace variable there.\n\n> *The patch did't applied cleanly \n> http://cfbot.cputube.org/patch_24_2269.log\n>\n\nPatch is rebased and attached with all the fixes described above.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 18 Sep 2019 15:46:20 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 03:46:20PM +0300, Alexey Kondratov wrote:\n> Currently in Postgres SET TABLESPACE always comes with [ NOWAIT ] option, so\n> I hope it worth adding this option here for convenience. Added in the new\n> version.\n\nIt seems to me that it would be good to keep the patch as simple as\npossible for its first version, and split it into two if you would\nlike to add this new option instead of bundling both together. This\nmakes the review of one and the other more simple. Anyway, regarding\nthe grammar, is SET TABLESPACE really our best choice here? What\nabout:\n- TABLESPACE = foo, in parenthesis only?\n- Only using TABLESPACE, without SET at the end of the query?\n\nSET is used in ALTER TABLE per the set of subqueries available there,\nbut that's not the case of REINDEX.\n\n+-- check that all relations moved to new tablespace\n+SELECT relname FROM pg_class\n+WHERE reltablespace=(SELECT oid FROM pg_tablespace WHERE\nspcname='regress_tblspace')\n+AND relname IN ('regress_tblspace_test_tbl_idx');\n+ relname\n+-------------------------------\n+ regress_tblspace_test_tbl_idx\n+(1 row)\nJust to check one relation you could use \\d with the relation (index\nor table) name.\n\n- if (RELATION_IS_OTHER_TEMP(iRel))\n- ereport(ERROR,\n- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n- errmsg(\"cannot reindex temporary tables of other\n- sessions\")))\nI would keep the order of this operation in order with\nCheckTableNotInUse().\n--\nMichael",
"msg_date": "Thu, 19 Sep 2019 13:43:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
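The grammar candidates weighed above would look roughly as follows (illustrative only; no form had been committed at this point in the thread):

```sql
-- 1. SET TABLESPACE at the end, mirroring ALTER TABLE / ALTER INDEX:
REINDEX INDEX foo_bar_idx SET TABLESPACE new_tblspc;

-- 2. Bare TABLESPACE, mirroring CREATE INDEX ... TABLESPACE:
REINDEX INDEX foo_bar_idx TABLESPACE new_tblspc;

-- 3. Parenthesized option, mirroring REINDEX (VERBOSE):
REINDEX (TABLESPACE = new_tblspc) INDEX foo_bar_idx;
```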
{
"msg_contents": "Hi Michael,\n\nThank you for your comments.\n\nOn 19.09.2019 7:43, Michael Paquier wrote:\n> On Wed, Sep 18, 2019 at 03:46:20PM +0300, Alexey Kondratov wrote:\n>> Currently in Postgres SET TABLESPACE always comes with [ NOWAIT ] option, so\n>> I hope it worth adding this option here for convenience. Added in the new\n>> version.\n> It seems to me that it would be good to keep the patch as simple as\n> possible for its first version, and split it into two if you would\n> like to add this new option instead of bundling both together. This\n> makes the review of one and the other more simple.\n\nOK, it makes sense. I would also prefer first patch as simple as \npossible, but adding this NOWAIT option required only a few dozens of \nlines, so I just bundled everything together. Anyway, I will split \npatches if we decide to keep [ SET TABLESPACE ... [NOWAIT] ] grammar.\n\n> Anyway, regarding\n> the grammar, is SET TABLESPACE really our best choice here? What\n> about:\n> - TABLESPACE = foo, in parenthesis only?\n> - Only using TABLESPACE, without SET at the end of the query?\n>\n> SET is used in ALTER TABLE per the set of subqueries available there,\n> but that's not the case of REINDEX.\n\nI like SET TABLESPACE grammar, because it already exists and used both \nin ALTER TABLE and ALTER INDEX. Thus, if we once add 'ALTER INDEX \nindex_name REINDEX SET TABLESPACE' (as was proposed earlier in the \nthread), then it will be consistent with 'REINDEX index_name SET \nTABLESPACE'. 
If we use just plain TABLESPACE, then it may be misleading \nin the following cases:\n\n- REINDEX TABLE table_name TABLESPACE tablespace_name\n- REINDEX (TABLESPACE = tablespace_name) TABLE table_name\n\nsince it may mean 'reindex all indexes of table_name that are stored in \ntablespace_name', doesn't it?\n\nHowever, I have rather limited experience with Postgres, so I don't \ninsist.\n\n> +-- check that all relations moved to new tablespace\n> +SELECT relname FROM pg_class\n> +WHERE reltablespace=(SELECT oid FROM pg_tablespace WHERE\n> spcname='regress_tblspace')\n> +AND relname IN ('regress_tblspace_test_tbl_idx');\n> + relname\n> +-------------------------------\n> + regress_tblspace_test_tbl_idx\n> +(1 row)\n> Just to check one relation you could use \\d with the relation (index\n> or table) name.\n\nYes, \\d outputs the tablespace name if it differs from pg_default, but it \nshows other information in addition, which is not necessary here. Also \nits output is more likely to be changed later, which may lead to \nfailing tests. This query output is more or less stable, and new relations \ncan easily be added to the tests once we add tablespace change to \nCLUSTER/VACUUM FULL. I can change the test to use \\d, but I am not sure that it \nwould reduce the test output length or be helpful for future tests.\n\n> - if (RELATION_IS_OTHER_TEMP(iRel))\n> - ereport(ERROR,\n> - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> - errmsg(\"cannot reindex temporary tables of other\n> - sessions\")))\n> I would keep the order of this operation in order with\n> CheckTableNotInUse().\n\nSure, I hadn't noticed that I reordered these operations, thanks.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 14:44:34 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 12:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n> It seems to me that it would be good to keep the patch as simple as\n> possible for its first version, and split it into two if you would\n> like to add this new option instead of bundling both together. This\n> makes the review of one and the other more simple. Anyway, regarding\n> the grammar, is SET TABLESPACE really our best choice here? What\n> about:\n> - TABLESPACE = foo, in parenthesis only?\n> - Only using TABLESPACE, without SET at the end of the query?\n>\n> SET is used in ALTER TABLE per the set of subqueries available there,\n> but that's not the case of REINDEX.\n\nSo, earlier in this thread, I suggested making this part of ALTER\nTABLE, and several people seemed to like that idea. Did we have a\nreason for dropping that approach?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Sep 2019 09:21:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 19.09.2019 16:21, Robert Haas wrote:\n> On Thu, Sep 19, 2019 at 12:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> It seems to me that it would be good to keep the patch as simple as\n>> possible for its first version, and split it into two if you would\n>> like to add this new option instead of bundling both together. This\n>> makes the review of one and the other more simple. Anyway, regarding\n>> the grammar, is SET TABLESPACE really our best choice here? What\n>> about:\n>> - TABLESPACE = foo, in parenthesis only?\n>> - Only using TABLESPACE, without SET at the end of the query?\n>>\n>> SET is used in ALTER TABLE per the set of subqueries available there,\n>> but that's not the case of REINDEX.\n> So, earlier in this thread, I suggested making this part of ALTER\n> TABLE, and several people seemed to like that idea. Did we have a\n> reason for dropping that approach?\n\nIf we add this option to REINDEX, then for 'ALTER TABLE tb_name action1, \nREINDEX SET TABLESPACE tbsp_name, action3' action2 will be just a direct \nalias to 'REINDEX TABLE tb_name SET TABLESPACE tbsp_name'. So it seems \npractical to do this for REINDEX first.\n\nThe only one concern I have against adding REINDEX to ALTER TABLE in \nthis context is that it will allow user to write such a chimera:\n\nALTER TABLE tb_name REINDEX SET TABLESPACE tbsp_name, SET TABLESPACE \ntbsp_name;\n\nwhen they want to move both table and all the indexes. Because simple\n\nALTER TABLE tb_name REINDEX, SET TABLESPACE tbsp_name;\n\nlooks ambiguous. Should it change tablespace of table, indexes or both?\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:40:41 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 05:40:41PM +0300, Alexey Kondratov wrote:\n> On 19.09.2019 16:21, Robert Haas wrote:\n>> So, earlier in this thread, I suggested making this part of ALTER\n>> TABLE, and several people seemed to like that idea. Did we have a\n>> reason for dropping that approach?\n\nPersonally, I don't find this idea very attractive as ALTER TABLE is\nalready complicated enough with all the subqueries we already support\nin the command, all the logic we need to maintain to make combinations\nof those subqueries in a minimum number of steps, and also the number\nof bugs we have seen because of the amount of complication present.\n\n> If we add this option to REINDEX, then for 'ALTER TABLE tb_name action1,\n> REINDEX SET TABLESPACE tbsp_name, action3' action2 will be just a direct\n> alias to 'REINDEX TABLE tb_name SET TABLESPACE tbsp_name'. So it seems\n> practical to do this for REINDEX first.\n> \n> The only one concern I have against adding REINDEX to ALTER TABLE in this\n> context is that it will allow user to write such a chimera:\n>\n> ALTER TABLE tb_name REINDEX SET TABLESPACE tbsp_name, SET TABLESPACE\n> tbsp_name;\n>\n> when they want to move both table and all the indexes. Because simple\n> ALTER TABLE tb_name REINDEX, SET TABLESPACE tbsp_name;\n> looks ambiguous. Should it change tablespace of table, indexes or both?\n\nTricky question, but we don't change the tablespace of indexes when\nusing an ALTER TABLE, so I would say no on compatibility grounds.\nALTER TABLE has never touched the tablespace of indexes, and I don't\nthink that we should begin to do so.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 11:06:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
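The compatibility argument can be checked against the behavior that already ships: plain ALTER TABLE ... SET TABLESPACE moves only the heap, and indexes have to be moved separately (a sketch; tablespace names are illustrative):

```sql
CREATE TABLE foo (id integer PRIMARY KEY, bar text);

-- Moves the table's heap to the new tablespace...
ALTER TABLE foo SET TABLESPACE new_tblspc;

-- ...but foo_pkey stays behind and must be moved explicitly:
ALTER INDEX foo_pkey SET TABLESPACE new_tblspc;
```

Even the bulk forms keep the object classes separate: ALTER TABLE ALL IN TABLESPACE moves tables, ALTER INDEX ALL IN TABLESPACE moves indexes.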
{
"msg_contents": "On 20/9/19 4:06, Michael Paquier wrote:\n> On Thu, Sep 19, 2019 at 05:40:41PM +0300, Alexey Kondratov wrote:\n>> On 19.09.2019 16:21, Robert Haas wrote:\n>>> So, earlier in this thread, I suggested making this part of ALTER\n>>> TABLE, and several people seemed to like that idea. Did we have a\n>>> reason for dropping that approach?\n> Personally, I don't find this idea very attractive as ALTER TABLE is\n> already complicated enough with all the subqueries we already support\n> in the command, all the logic we need to maintain to make combinations\n> of those subqueries in a minimum number of steps, and also the number\n> of bugs we have seen because of the amount of complication present.\n\nYes, but please keep the other options: At it is, cluster, vacuum full \nand reindex already rewrite the table in full; Being able to write the \nresult to a different tablespace than the original object was stored in \nenables a whole world of very interesting possibilities.... including a \nquick way out of a \"so little disk space available that vacuum won't \nwork properly\" situation --- which I'm sure MANY users will appreciate, \nincluding me\n\n> If we add this option to REINDEX, then for 'ALTER TABLE tb_name action1,\n> REINDEX SET TABLESPACE tbsp_name, action3' action2 will be just a direct\n> alias to 'REINDEX TABLE tb_name SET TABLESPACE tbsp_name'. So it seems\n> practical to do this for REINDEX first.\n>\n> The only one concern I have against adding REINDEX to ALTER TABLE in this\n> context is that it will allow user to write such a chimera:\n>\n> ALTER TABLE tb_name REINDEX SET TABLESPACE tbsp_name, SET TABLESPACE\n> tbsp_name;\n>\n> when they want to move both table and all the indexes. Because simple\n> ALTER TABLE tb_name REINDEX, SET TABLESPACE tbsp_name;\n> looks ambiguous. Should it change tablespace of table, indexes or both?\n\nIndeed.\n\nIMHO, that form of the command should not allow that much flexibility... 
\neven on the \"principle of least surprise\" grounds :S\n\nThat is, I'd restrict the ability to change (output) tablespace to the \n\"direct\" form --- REINDEX name, VACUUM (FULL) name, CLUSTER name --- \nwhereas the ALTER table|index SET TABLESPACE would continue to work.\n\nNow that I come to think of it, maybe saying \"output\" or \"move to\" \nrather than \"set tablespace\" would make more sense for this variation of \nthe commands? (clearer, less prone to confusion)?\n\n> Tricky question, but we don't change the tablespace of indexes when\n> using an ALTER TABLE, so I would say no on compatibility grounds.\n> ALTER TABLE has never touched the tablespace of indexes, and I don't\n> think that we should begin to do so.\n\nIndeed.\n\n\nI might be missing something, but is there any reason to not *require* a \nexplicit transaction for the above multi-action commands? I mean, have \nit be:\n\nBEGIN;\n\nALTER TABLE tb_name SET TABLESPACE tbsp_name;��� -- moves the table .... \nbut possibly NOT the indexes?\n\nALTER TABLE tb_name REINDEX [OUTPUT TABLESPACE tbsp_name];��� -- \nREINDEX, placing the resulting index on tbsp_name instead of the \noriginal one\n\nCOMMIT;\n\n... and have the parser/planner combine the steps if it'd make sense (it \nprobably wouldn't in this example)?\n\n\nJust my .02�\n\n\nThanks,\n\n ��� / J.L.\n\n\n\n\n",
"msg_date": "Fri, 20 Sep 2019 10:26:21 +0200",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2019-Sep-19, Robert Haas wrote:\n\n> So, earlier in this thread, I suggested making this part of ALTER\n> TABLE, and several people seemed to like that idea. Did we have a\n> reason for dropping that approach?\n\nHmm, my own reading of that was to add tablespace changing abilities to\nALTER TABLE *in addition* to this patch, not instead of it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Sep 2019 13:38:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 20.09.2019 19:38, Alvaro Herrera wrote:\n> On 2019-Sep-19, Robert Haas wrote:\n>\n>> So, earlier in this thread, I suggested making this part of ALTER\n>> TABLE, and several people seemed to like that idea. Did we have a\n>> reason for dropping that approach?\n> Hmm, my own reading of that was to add tablespace changing abilities to\n> ALTER TABLE *in addition* to this patch, not instead of it.\n\nThat was my understanding too.\n\nOn 20.09.2019 11:26, Jose Luis Tallon wrote:\n> On 20/9/19 4:06, Michael Paquier wrote:\n>> Personally, I don't find this idea very attractive as ALTER TABLE is\n>> already complicated enough with all the subqueries we already support\n>> in the command, all the logic we need to maintain to make combinations\n>> of those subqueries in a minimum number of steps, and also the number\n>> of bugs we have seen because of the amount of complication present.\n>\n> Yes, but please keep the other options: At it is, cluster, vacuum full \n> and reindex already rewrite the table in full; Being able to write the \n> result to a different tablespace than the original object was stored \n> in enables a whole world of very interesting possibilities.... \n> including a quick way out of a \"so little disk space available that \n> vacuum won't work properly\" situation --- which I'm sure MANY users \n> will appreciate, including me \n\nYes, sure, that was my main motivation. The first message in the thread \ncontains a patch, which adds SET TABLESPACE support to all of CLUSTER, \nVACUUM FULL and REINDEX. However, there came up an idea to integrate \nCLUSTER/VACUUM FULL with ALTER TABLE and do their work + all the ALTER \nTABLE stuff in a single table rewrite. I've dig a little bit into this \nand ended up with some architectural questions and concerns [1]. 
So I \ndecided to start with a simple REINDEX patch.\n\nAnyway, I've followed Michael's advice and split the last patch into two:\n\n1) Adds all the main functionality, but with simplified 'REINDEX INDEX [ \nCONCURRENTLY ] ... [ TABLESPACE ... ]' grammar;\n\n2) Adds a more sophisticated syntax with '[ SET TABLESPACE ... [ NOWAIT \n] ]'.\n\nPatch 1 contains all the docs and tests and may be applied/committed \nseparately or together with 2, which is fully optional.\n\nRecent merge conflicts and reindex_index validations order are also \nfixed in the attached version.\n\n[1] \nhttps://www.postgresql.org/message-id/6b2a5c4de19f111ef24b63428033bb67%40postgrespro.ru\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 24 Sep 2019 16:02:39 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
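As described, the split leaves the grammar layered like this (proposal-stage syntax from the attached patches, not committed):

```sql
-- Patch 1: core functionality, simple grammar
REINDEX INDEX foo_bar_idx TABLESPACE new_tblspc;
REINDEX TABLE CONCURRENTLY foo TABLESPACE new_tblspc;

-- Patch 2 (optional, on top of patch 1): ALTER-style grammar with NOWAIT
REINDEX INDEX foo_bar_idx SET TABLESPACE new_tblspc NOWAIT;
```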
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: tested, failed\n\n* I had to replace heap_open/close with table_open/close to get the\r\npatch to compile against master\r\n\r\nIn the documentation \r\n\r\n+ <para>\r\n+ This specifies a tablespace, where all rebuilt indexes will be created.\r\n+ Can be used only with <literal>REINDEX INDEX</literal> and\r\n+ <literal>REINDEX TABLE</literal>, since the system indexes are not\r\n+ movable, but <literal>SCHEMA</literal>, <literal>DATABASE</literal> or\r\n+ <literal>SYSTEM</literal> very likely will has one.\r\n+ </para>\r\n\r\nI found the \"SCHEMA,DATABASE or SYSTEM very likely will has one.\" portion confusing and would be inclined to remove it or somehow reword it.\r\n\r\nConsider the following\r\n\r\n-------------\r\n create index foo_bar_idx on foo(bar) tablespace pg_default;\r\nCREATE INDEX\r\nreindex=# \\d foo\r\n Table \"public.foo\"\r\n Column | Type | Collation | Nullable | Default \r\n--------+---------+-----------+----------+---------\r\n id | integer | | not null | \r\n bar | text | | | \r\nIndexes:\r\n \"foo_pkey\" PRIMARY KEY, btree (id)\r\n \"foo_bar_idx\" btree (bar)\r\n\r\nreindex=# reindex index foo_bar_idx tablespace tst1;\r\nREINDEX\r\nreindex=# reindex index foo_bar_idx tablespace pg_default;\r\nREINDEX\r\nreindex=# \\d foo\r\n Table \"public.foo\"\r\n Column | Type | Collation | Nullable | Default \r\n--------+---------+-----------+----------+---------\r\n id | integer | | not null | \r\n bar | text | | | \r\nIndexes:\r\n \"foo_pkey\" PRIMARY KEY, btree (id)\r\n \"foo_bar_idx\" btree (bar), tablespace \"pg_default\"\r\n--------\r\n\r\nIt is a bit strange that it says \"pg_default\" as the tablespace. 
If I do\r\nthis with a alter table to the table, moving the table back to pg_default\r\nmakes it look as it did before.\r\n\r\nOtherwise the first patch seems fine.\r\n\r\n\r\nWith the second patch(for NOWAIT) I did the following\r\n\r\nT1: begin;\r\nT1: insert into foo select generate_series(1,1000);\r\nT2: reindex index foo_bar_idx set tablespace tst1 nowait;\r\n\r\nT2 is waiting for a lock. This isn't what I would expect.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Sun, 17 Nov 2019 00:53:27 +0000",
"msg_from": "Steve Singer <steve@ssinger.info>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER,\n VACUUM FULL and REINDEX to change tablespace on the\n fly"
},
{
"msg_contents": "Hi Steve,\n\nThank you for review.\n\nOn 17.11.2019 3:53, Steve Singer wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, failed\n> Spec compliant: not tested\n> Documentation: tested, failed\n>\n> * I had to replace heap_open/close with table_open/close to get the\n> patch to compile against master\n>\n> In the documentation\n>\n> + <para>\n> + This specifies a tablespace, where all rebuilt indexes will be created.\n> + Can be used only with <literal>REINDEX INDEX</literal> and\n> + <literal>REINDEX TABLE</literal>, since the system indexes are not\n> + movable, but <literal>SCHEMA</literal>, <literal>DATABASE</literal> or\n> + <literal>SYSTEM</literal> very likely will has one.\n> + </para>\n>\n> I found the \"SCHEMA,DATABASE or SYSTEM very likely will has one.\" portion confusing and would be inclined to remove it or somehow reword it.\n\nIn the attached new version REINDEX with TABLESPACE and {SCHEMA, \nDATABASE, SYSTEM} now behaves more like with CONCURRENTLY, i.e. it skips \nunsuitable relations and shows warning. So this section in docs has been \nupdated as well.\n\nAlso the whole patch has been reworked. I noticed that my code in \nreindex_index was doing pretty much the same as inside \nRelationSetNewRelfilenode. So I just added a possibility to specify new \ntablespace for RelationSetNewRelfilenode instead. 
Thus, even with \naddition of new tests the patch becomes less complex.\n\n> Consider the following\n>\n> -------------\n> create index foo_bar_idx on foo(bar) tablespace pg_default;\n> CREATE INDEX\n> reindex=# \\d foo\n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> id | integer | | not null |\n> bar | text | | |\n> Indexes:\n> \"foo_pkey\" PRIMARY KEY, btree (id)\n> \"foo_bar_idx\" btree (bar)\n>\n> reindex=# reindex index foo_bar_idx tablespace tst1;\n> REINDEX\n> reindex=# reindex index foo_bar_idx tablespace pg_default;\n> REINDEX\n> reindex=# \\d foo\n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> id | integer | | not null |\n> bar | text | | |\n> Indexes:\n> \"foo_pkey\" PRIMARY KEY, btree (id)\n> \"foo_bar_idx\" btree (bar), tablespace \"pg_default\"\n> --------\n>\n> It is a bit strange that it says \"pg_default\" as the tablespace. If I do\n> this with a alter table to the table, moving the table back to pg_default\n> makes it look as it did before.\n>\n> Otherwise the first patch seems fine.\n\nYes, I missed the fact that default tablespace of database is stored \nimplicitly as InvalidOid, but I was setting it explicitly as specified. \nI have changed this behavior to stay consistent with ALTER TABLE.\n\n> With the second patch(for NOWAIT) I did the following\n>\n> T1: begin;\n> T1: insert into foo select generate_series(1,1000);\n> T2: reindex index foo_bar_idx set tablespace tst1 nowait;\n>\n> T2 is waiting for a lock. This isn't what I would expect.\n\nIndeed, I have added nowait option for RangeVarGetRelidExtended, so it \nshould not wait if index is locked. 
However, for reindex we also have to \nput share lock on the parent table relation, which is done by opening it \nvia table_open(heapId, ShareLock).\n\nThe only one solution I can figure out right now is to wrap all such \nopens with ConditionalLockRelationOid(relId, ShareLock) and then do \nactual open with NoLock. This is how something similar is implemented in \nVACUUM if VACOPT_SKIP_LOCKED is specified. However, there are multiple \ncode paths with table_open, so it becomes a bit ugly.\n\nI will leave the second patch aside for now and experiment with it. \nActually, its main idea was to mimic ALTER INDEX ... SET TABLESPACE \n[NOWAIT] syntax, but probably it is better to stick with more brief \nplain TABLESPACE like in CREATE INDEX.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\nP.S. I have also added all previous thread participants to CC in order to do not split the thread. Sorry if it was a bad idea.",
"msg_date": "Wed, 20 Nov 2019 21:16:48 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
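The VACUUM precedent mentioned for the locking problem is its SKIP_LOCKED option (available since PostgreSQL 12), which takes locks conditionally instead of blocking:

```sql
-- Existing behavior the NOWAIT fix would mimic: skip relations whose
-- locks cannot be acquired immediately instead of waiting.
VACUUM (SKIP_LOCKED) foo;
```

The catch described above is that REINDEX also table_open()s the index's parent table with ShareLock internally, so every such open would need the ConditionalLockRelationOid-then-NoLock treatment for NOWAIT to be honest.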
{
"msg_contents": "On Wed, 20 Nov 2019, Alexey Kondratov wrote:\n\n> Hi Steve,\n>\n> Thank you for review.\n\n\nI've looked through the patch and tested it.\nI don't see any issues with this version. I think it is ready for a \ncommitter.\n\n\n>\n> Regards\n>\n> -- \n> Alexey Kondratov\n>\n> Postgres Professional https://www.postgrespro.com\n> Russian Postgres Company\n>\n> P.S. I have also added all previous thread participants to CC in order to do \n> not split the thread. Sorry if it was a bad idea.\n>\n\nSteve\n\n\n\n\n",
"msg_date": "Sat, 23 Nov 2019 20:10:51 -0500 (EST)",
"msg_from": "Steve Singer <steve@ssinger.info>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, 20 Nov 2019 at 19:16, Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> Hi Steve,\n>\n> Thank you for review.\n>\n> On 17.11.2019 3:53, Steve Singer wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, failed\n> > Spec compliant: not tested\n> > Documentation: tested, failed\n> >\n> > * I had to replace heap_open/close with table_open/close to get the\n> > patch to compile against master\n> >\n> > In the documentation\n> >\n> > + <para>\n> > + This specifies a tablespace, where all rebuilt indexes will be created.\n> > + Can be used only with <literal>REINDEX INDEX</literal> and\n> > + <literal>REINDEX TABLE</literal>, since the system indexes are not\n> > + movable, but <literal>SCHEMA</literal>, <literal>DATABASE</literal> or\n> > + <literal>SYSTEM</literal> very likely will has one.\n> > + </para>\n> >\n> > I found the \"SCHEMA,DATABASE or SYSTEM very likely will has one.\" portion confusing and would be inclined to remove it or somehow reword it.\n>\n> In the attached new version REINDEX with TABLESPACE and {SCHEMA,\n> DATABASE, SYSTEM} now behaves more like with CONCURRENTLY, i.e. it skips\n> unsuitable relations and shows warning. So this section in docs has been\n> updated as well.\n>\n> Also the whole patch has been reworked. I noticed that my code in\n> reindex_index was doing pretty much the same as inside\n> RelationSetNewRelfilenode. So I just added a possibility to specify new\n> tablespace for RelationSetNewRelfilenode instead. 
Thus, even with\n> addition of new tests the patch becomes less complex.\n>\n> > Consider the following\n> >\n> > -------------\n> > create index foo_bar_idx on foo(bar) tablespace pg_default;\n> > CREATE INDEX\n> > reindex=# \\d foo\n> > Table \"public.foo\"\n> > Column | Type | Collation | Nullable | Default\n> > --------+---------+-----------+----------+---------\n> > id | integer | | not null |\n> > bar | text | | |\n> > Indexes:\n> > \"foo_pkey\" PRIMARY KEY, btree (id)\n> > \"foo_bar_idx\" btree (bar)\n> >\n> > reindex=# reindex index foo_bar_idx tablespace tst1;\n> > REINDEX\n> > reindex=# reindex index foo_bar_idx tablespace pg_default;\n> > REINDEX\n> > reindex=# \\d foo\n> > Table \"public.foo\"\n> > Column | Type | Collation | Nullable | Default\n> > --------+---------+-----------+----------+---------\n> > id | integer | | not null |\n> > bar | text | | |\n> > Indexes:\n> > \"foo_pkey\" PRIMARY KEY, btree (id)\n> > \"foo_bar_idx\" btree (bar), tablespace \"pg_default\"\n> > --------\n> >\n> > It is a bit strange that it says \"pg_default\" as the tablespace. If I do\n> > this with a alter table to the table, moving the table back to pg_default\n> > makes it look as it did before.\n> >\n> > Otherwise the first patch seems fine.\n>\n> Yes, I missed the fact that default tablespace of database is stored\n> implicitly as InvalidOid, but I was setting it explicitly as specified.\n> I have changed this behavior to stay consistent with ALTER TABLE.\n>\n> > With the second patch(for NOWAIT) I did the following\n> >\n> > T1: begin;\n> > T1: insert into foo select generate_series(1,1000);\n> > T2: reindex index foo_bar_idx set tablespace tst1 nowait;\n> >\n> > T2 is waiting for a lock. This isn't what I would expect.\n>\n> Indeed, I have added nowait option for RangeVarGetRelidExtended, so it\n> should not wait if index is locked. 
However, for reindex we also have to\n> put share lock on the parent table relation, which is done by opening it\n> via table_open(heapId, ShareLock).\n>\n> The only one solution I can figure out right now is to wrap all such\n> opens with ConditionalLockRelationOid(relId, ShareLock) and then do\n> actual open with NoLock. This is how something similar is implemented in\n> VACUUM if VACOPT_SKIP_LOCKED is specified. However, there are multiple\n> code paths with table_open, so it becomes a bit ugly.\n>\n> I will leave the second patch aside for now and experiment with it.\n> Actually, its main idea was to mimic ALTER INDEX ... SET TABLESPACE\n> [NOWAIT] syntax, but probably it is better to stick with more brief\n> plain TABLESPACE like in CREATE INDEX.\n>\n\nThank you for working on this.\n\nI looked at v4 patch. Here are some comments:\n\n+ /* Skip all mapped relations if TABLESPACE is specified */\n+ if (OidIsValid(tableSpaceOid) &&\n+ classtuple->relfilenode == 0)\n\nI think we can use OidIsValid(classtuple->relfilenode) instead.\n\n---\n+ <para>\n+ This specifies a tablespace, where all rebuilt indexes will be created.\n+ Cannot be used with \"mapped\" and temporary relations. If\n<literal>SCHEMA</literal>,\n+ <literal>DATABASE</literal> or <literal>SYSTEM</literal> is\nspecified, then\n+ all unsuitable relations will be skipped and a single\n<literal>WARNING</literal>\n+ will be generated.\n+ </para>\n\nThis change says that temporary relation is not supported but it\nactually seems to work. 
Which is correct?\n\npostgres(1:37821)=# select relname, relpersistence from pg_class where\nrelname like 'tmp%';\n relname | relpersistence\n----------+----------------\n tmp | t\n tmp_pkey | t\n(2 rows)\n\npostgres(1:37821)=# reindex table tmp tablespace ts;\nREINDEX\n\n---\n\n+ if (newTableSpaceName)\n+ {\n+ tableSpaceOid = get_tablespace_oid(newTableSpaceName, false);\n+\n+ /* Can't move a non-shared relation into pg_global */\n+ if (tableSpaceOid == GLOBALTABLESPACE_OID)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"only shared relations\ncan be placed in pg_global tablespace\")));\n+ }\n\n+ if (OidIsValid(tablespaceOid) && RelationIsMapped(iRel))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot move system relation \\\"%s\\\"\",\n+\nRelationGetRelationName(iRel))));\n\nISTM the kind of above errors are the same: the given tablespace\nexists but moving tablespace to it is not allowed since it's not\nsupported in PostgreSQL. So I think we can use\nERRCODE_FEATURE_NOT_SUPPORTED instead of\nERRCODE_INVALID_PARAMETER_VALUE (which is used at 3 places) .\nThoughts?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Nov 2019 23:09:55 +0100",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 11:09:55PM +0100, Masahiko Sawada wrote:\n> Thank you for working on this.\n\nI have been looking at the latest patch as well.\n\n> I looked at v4 patch. Here are some comments:\n> \n> + /* Skip all mapped relations if TABLESPACE is specified */\n> + if (OidIsValid(tableSpaceOid) &&\n> + classtuple->relfilenode == 0)\n> \n> I think we can use OidIsValid(classtuple->relfilenode) instead.\n\nYes, definitely.\n\n> This change says that temporary relation is not supported but it\n> actually seems to work. Which is correct?\n\nYeah, I don't really see a reason why it would not work.\n\n> ISTM the kind of above errors are the same: the given tablespace\n> exists but moving tablespace to it is not allowed since it's not\n> supported in PostgreSQL. So I think we can use\n> ERRCODE_FEATURE_NOT_SUPPORTED instead of\n> ERRCODE_INVALID_PARAMETER_VALUE (which is used at 3 places) .\n\nYes, it is also not project style to use full sentences in error\nmessages, so I would suggest instead (note the missing quotes in the\noriginal patch):\ncannot move non-shared relation to tablespace \\\"%s\\\"\n\n@@ -3455,6 +3461,8 @@ RelationSetNewRelfilenode(Relation relation,\nchar persistence)\n */\n newrnode = relation->rd_node;\n newrnode.relNode = newrelfilenode;\n+ if (OidIsValid(tablespaceOid))\n+ newrnode.spcNode = newTablespaceOid;\nThe core of the patch is actually here. It seems to me that this is a\nvery bad idea because you actually hijack a logic which happens at a\nmuch lower level which is based on the state of the tablespace stored\nin the relation cache entry of the relation being reindexed, then the\ntablespace choice actually happens in RelationInitPhysicalAddr() which\nfor the new relfilenode once the follow-up CCI is done. 
So this very\nlikely needs more thought, and it brings up the point: shouldn't you\nactually be careful that the relation tablespace is correctly updated\nbefore reindexing it and before creating its new relfilenode? This\nway, RelationSetNewRelfilenode() does not need any additional work,\nand I think that this saves us from potential bugs in the choice of the\ntablespace used with the new relfilenode.\n\nThere is no need for opt_tablespace_name as a new node in the parsing\ngrammar of gram.y, as OptTableSpace is able to do the exact same job.\n\n+ /* Skip all mapped relations if TABLESPACE is specified */\n+ if (OidIsValid(tableSpaceOid) &&\n+ classtuple->relfilenode == 0)\n+ {\n+ if (!system_warning)\n+ ereport(WARNING,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot move indexes of system relations, skipping all\")));\n+ system_warning = true;\n continue;\nIt seems to me that you need to use RelationIsMapped() here, and we\nhave no tests for it. On top of that, we should warn once each about *both*\ncatalog reindexes and mapped relations whose tablespaces are being\nchanged.\n\nYour patch has forgotten to update copyfuncs.c and equalfuncs.c with\nthe new tablespace string field.\n\nIt would be nice to add tab completion for this new clause in psql.\nThis is not ready for committer yet in my opinion, and more work is\nneeded, so I am marking it as returned with feedback for now.\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 12:54:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 12:54:16PM +0900, Michael Paquier wrote:\n> + /* Skip all mapped relations if TABLESPACE is specified */\n> + if (OidIsValid(tableSpaceOid) &&\n> + classtuple->relfilenode == 0)\n> + {\n> + if (!system_warning)\n> + ereport(WARNING,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot move indexes of system relations, skipping all\")));\n> + system_warning = true;\n> continue;\n> It seems to me that you need to use RelationIsMapped() here, and we\n> have no tests for it. On top of that, we should warn about *both*\n> for catalogs reindexes and mapped relation whose tablespaces are being\n> changed once each.\n\nDitto. This has been sent too quickly. You cannot use\nRelationIsMapped() here because there is no Relation at hand, but I\nwould suggest to use OidIsValid, and mention that this is the same\ncheck as RelationIsMapped().\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 13:05:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 12:54:16PM +0900, Michael Paquier wrote:\n> It would be nice to add tab completion for this new clause in psql.\n> This is not ready for committer yet in my opinion, and more work is\n> done, so I am marking it as returned with feedback for now.\n\nAnd I have somewhat missed to notice the timing of the review replies\nas you did not have room to reply, so fixed the CF entry to \"waiting\non author\", and bumped it to next CF instead.\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 13:28:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 27.11.2019 6:54, Michael Paquier wrote:\n> On Tue, Nov 26, 2019 at 11:09:55PM +0100, Masahiko Sawada wrote:\n>> I looked at v4 patch. Here are some comments:\n>>\n>> + /* Skip all mapped relations if TABLESPACE is specified */\n>> + if (OidIsValid(tableSpaceOid) &&\n>> + classtuple->relfilenode == 0)\n>>\n>> I think we can use OidIsValid(classtuple->relfilenode) instead.\n> Yes, definitely.\n\nYes, switched to !OidIsValid(classtuple->relfilenode). Also I added a \ncomment that it is meant to be equivalent to RelationIsMapped() and \nextended tests.\n\n>\n>> This change says that temporary relation is not supported but it\n>> actually seems to work. Which is correct?\n> Yeah, I don't really see a reason why it would not work.\n\nMy bad, I was keeping in mind RELATION_IS_OTHER_TEMP validation, but it \nis for temp tables of other backends only, so it definitely should not \nbe in the doc. Removed.\n\n> Your patch has forgotten to update copyfuncs.c and equalfuncs.c with\n> the new tablespace string field.\n\nFixed, thanks.\n\n> It would be nice to add tab completion for this new clause in psql.\n\nAdded.\n\n> There is no need for opt_tablespace_name as new node for the parsing\n> grammar of gram.y as OptTableSpace is able to do the exact same job.\n\nSure, it was an artifact from the times, where I used optional SET \nTABLESPACE clause. Removed.\n\n>\n> @@ -3455,6 +3461,8 @@ RelationSetNewRelfilenode(Relation relation,\n> char persistence)\n> */\n> newrnode = relation->rd_node;\n> newrnode.relNode = newrelfilenode;\n> + if (OidIsValid(tablespaceOid))\n> + newrnode.spcNode = newTablespaceOid;\n> The core of the patch is actually here. 
It seems to me that this is a\n> very bad idea because you actually hijack logic that happens at a\n> much lower level and that is based on the state of the tablespace stored\n> in the relation cache entry of the relation being reindexed; the\n> tablespace choice actually happens in RelationInitPhysicalAddr(), which runs\n> for the new relfilenode once the follow-up CCI is done. So this very\n> likely needs more thought, and it brings up the point: shouldn't you\n> actually be careful that the relation tablespace is correctly updated\n> before reindexing it and before creating its new relfilenode? This\n> way, RelationSetNewRelfilenode() does not need any additional work,\n> and I think that this saves us from potential bugs in the choice of the\n> tablespace used with the new relfilenode.\n\nWhen I did the first version of the patch I was looking at \nATExecSetTableSpace, which implements ALTER ... SET TABLESPACE. And \nthere is a very similar pipeline there:\n\n1) Find pg_class entry with SearchSysCacheCopy1\n\n2) Create new relfilenode with GetNewRelFileNode\n\n3) Set new tablespace for this relfilenode\n\n4) Do some work with new relfilenode\n\n5) Update pg_class entry with new tablespace\n\n6) Do CommandCounterIncrement\n\nThe only difference is that point 3) and the tablespace part of 5) were \nmissing in RelationSetNewRelfilenode, so I added them, and I do 4) after \n6) in REINDEX. Thus, it seems that in my implementation of tablespace \nchange in REINDEX I am more sure that \"the relation tablespace is \ncorrectly updated before reindexing\", since I do reindex after CCI \n(point 6), am I not?\n\nSo why is it fine for ATExecSetTableSpace to do pretty much the same, \nbut not for REINDEX? Or is the key point doing the actual work before \nCCI? To me that seems a bit contrary to what you have written.\n\nThus, I cannot get your point correctly here. 
Can you, please, elaborate \non your concerns a little bit more?\n\n>> ISTM the kind of above errors are the same: the given tablespace\n>> exists but moving tablespace to it is not allowed since it's not\n>> supported in PostgreSQL. So I think we can use\n>> ERRCODE_FEATURE_NOT_SUPPORTED instead of\n>> ERRCODE_INVALID_PARAMETER_VALUE (which is used at 3 places) .\n> Yes, it is also not project style to use full sentences in error\n> messages, so I would suggest instead (note the missing quotes in the\n> original patch):\n> cannot move non-shared relation to tablespace \\"%s\\"\n\nSame here. I have taken this validation directly from the tablecmds.c code \nfor ALTER ... SET TABLESPACE. And there is exactly the same message \n\"only shared relations can be placed in pg_global tablespace\" with \nERRCODE_INVALID_PARAMETER_VALUE there.\n\nHowever, I understand your point, but still, would it be better if I \nstick to the same ERRCODE/message? Or should I introduce a new \nERRCODE/message for the same case?\n\n> And I have somewhat missed to notice the timing of the review replies\n> as you did not have room to reply, so fixed the CF entry to \"waiting\n> on author\", and bumped it to next CF instead.\n\nThank you! Attached is a patch that addresses all the issues above, \nexcept for the last two points (core part and error messages for \npg_global), which are not clear to me right now.\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 27 Nov 2019 20:47:06 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 08:47:06PM +0300, Alexey Kondratov wrote:\n> The only difference is that point 3) and tablespace part of 5) were missing\n> in RelationSetNewRelfilenode, so I added them, and I do 4) after 6) in\n> REINDEX. Thus, it seems that in my implementation of tablespace change in\n> REINDEX I am more sure that \"the relation tablespace is correctly updated\n> before reindexing\", since I do reindex after CCI (point 6), doesn't it?\n> \n> So why it is fine for ATExecSetTableSpace to do pretty much the same, but\n> not for REINDEX? Or the key point is in doing actual work before CCI, but\n> for me it seems a bit against what you have wrote?\n\nNope, the order is not the same on what you do here, causing a\nduplication in the tablespace selection within\nRelationSetNewRelfilenode() and when flushing the relation on the new\ntablespace for the first time after the CCI happens, please see\nbelow. And we should avoid that.\n\n> Thus, I cannot get your point correctly here. Can you, please, elaborate a\n> little bit more your concerns?\n\nThe case of REINDEX CONCURRENTLY is pretty simple, because a new\nrelation which is a copy of the old relation is created before doing\nthe reindex, so you simply need to set the tablespace OID correctly\nin index_concurrently_create_copy(). 
And actually, I think that the\ncomputation is incorrect because we need to check after\nMyDatabaseTableSpace as well, no?\n\nThe case of REINDEX is more tricky, because you are working on a\nrelation that already exists, hence I think that you need to do a\ndifferent thing before the actual REINDEX:\n1) Update the existing relation's pg_class tuple to point to the new\ntablespace.\n2) Do a CommandCounterIncrement.\nSo I think that the order of the operations you are doing is incorrect,\nand that you have a risk of breaking the existing tablespace assignment\nlogic done when first flushing a new relfilenode.\n\nThis actually brings up an extra thing: when doing a plain REINDEX you\nneed to make sure that the past relfilenode of the relation goes away\nproperly. The attached POC patch does that before doing the CCI, which\nis a bit ugly, but that's enough to show my point, and there is no\nneed to touch RelationSetNewRelfilenode() this way.\n--\nMichael",
"msg_date": "Mon, 2 Dec 2019 17:21:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 02.12.2019 11:21, Michael Paquier wrote:\n> On Wed, Nov 27, 2019 at 08:47:06PM +0300, Alexey Kondratov wrote:\n>> The only difference is that point 3) and tablespace part of 5) were missing\n>> in RelationSetNewRelfilenode, so I added them, and I do 4) after 6) in\n>> REINDEX. Thus, it seems that in my implementation of tablespace change in\n>> REINDEX I am more sure that \"the relation tablespace is correctly updated\n>> before reindexing\", since I do reindex after CCI (point 6), doesn't it?\n>>\n>> So why it is fine for ATExecSetTableSpace to do pretty much the same, but\n>> not for REINDEX? Or the key point is in doing actual work before CCI, but\n>> for me it seems a bit against what you have wrote?\n> Nope, the order is not the same on what you do here, causing a\n> duplication in the tablespace selection within\n> RelationSetNewRelfilenode() and when flushing the relation on the new\n> tablespace for the first time after the CCI happens, please see\n> below. And we should avoid that.\n>\n>> Thus, I cannot get your point correctly here. Can you, please, elaborate a\n>> little bit more your concerns?\n> The case of REINDEX CONCURRENTLY is pretty simple, because a new\n> relation which is a copy of the old relation is created before doing\n> the reindex, so you simply need to set the tablespace OID correctly\n> in index_concurrently_create_copy(). 
And actually, I think that the\n> computation is incorrect because we need to check after\n> MyDatabaseTableSpace as well, no?\n\nNo, the same logic already exists in heap_create:\n\n    if (reltablespace == MyDatabaseTableSpace)\n        reltablespace = InvalidOid;\n\nWhich is called by index_concurrently_create_copy -> index_create -> \nheap_create.\n\n> The case of REINDEX is more tricky, because you are working on a\n> relation that already exists, hence I think that you need to do a\n> different thing before the actual REINDEX:\n> 1) Update the existing relation's pg_class tuple to point to the new\n> tablespace.\n> 2) Do a CommandCounterIncrement.\n> So I think that the order of the operations you are doing is incorrect,\n> and that you have a risk of breaking the existing tablespace assignment\n> logic done when first flushing a new relfilenode.\n>\n> This actually brings up an extra thing: when doing a plain REINDEX you\n> need to make sure that the past relfilenode of the relation goes away\n> properly. The attached POC patch does that before doing the CCI, which\n> is a bit ugly, but that's enough to show my point, and there is no\n> need to touch RelationSetNewRelfilenode() this way.\n\nThank you for the detailed answer and PoC patch. I will recheck \neverything and dig deeper into this problem, and come up with something \ncloser to the next commitfest (01.2020).\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Mon, 2 Dec 2019 12:41:03 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2019-12-02 11:21, Michael Paquier wrote:\n> On Wed, Nov 27, 2019 at 08:47:06PM +0300, Alexey Kondratov wrote:\n> \n>> Thus, I cannot get your point correctly here. Can you, please, \n>> elaborate a\n>> little bit more your concerns?\n> \n> The case of REINDEX CONCURRENTLY is pretty simple, because a new\n> relation which is a copy of the old relation is created before doing\n> the reindex, so you simply need to set the tablespace OID correctly\n> in index_concurrently_create_copy(). And actually, I think that the\n> computation is incorrect because we need to check after\n> MyDatabaseTableSpace as well, no?\n> \n> The case of REINDEX is more tricky, because you are working on a\n> relation that already exists, hence I think that what you need to do a\n> different thing before the actual REINDEX:\n> 1) Update the existing relation's pg_class tuple to point to the new\n> tablespace.\n> 2) Do a CommandCounterIncrement.\n> So I think that the order of the operations you are doing is incorrect,\n> and that you have a risk of breaking the existing tablespace assignment\n> logic done when first flushing a new relfilenode.\n> \n> This actually brings an extra thing: when doing a plain REINDEX you\n> need to make sure that the past relfilenode of the relation gets away\n> properly. The attached POC patch does that before doing the CCI which\n> is a bit ugly, but that's enough to show my point, and there is no\n> need to touch RelationSetNewRelfilenode() this way.\n> \n\nOK, I hope that now I understand your concerns better. Another thing I \njust realised is that RelationSetNewRelfilenode is also used for mapped \nrelations, which are not movable at all, so adding a tablespace options \nthere seems to be not semantically correct as well. 
However, I still \nhave not found a way to actually break anything with my \nprevious version of the patch.\n\nAs for doing RelationDropStorage before CCI, I do not think that there \nis anything wrong with it; this is exactly what \nRelationSetNewRelfilenode does. I have only moved RelationDropStorage \nbefore CatalogTupleUpdate compared to your proposal, to match the order \ninside RelationSetNewRelfilenode.\n\n> \n> Your patch has forgotten to update copyfuncs.c and equalfuncs.c with\n> the new tablespace string field.\n> \n> It would be nice to add tab completion for this new clause in psql.\n> This is not ready for committer yet in my opinion, and more work is\n> needed, so I am marking it as returned with feedback for now.\n> \n\nFinally, I have also merged and unified all your and Masahiko's \nproposals with my recent changes: ereport corrections, tab-completion, \ndocs update, copy/equalfuncs update, etc. New version is attached. Has \nit come any closer to a committable state now?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Sat, 04 Jan 2020 21:38:24 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Jan 04, 2020 at 09:38:24PM +0300, Alexey Kondratov wrote:\n> Finally, I have also merged and unified all your and Masahiko's proposals\n> with my recent changes: ereport corrections, tab-completion, docs update,\n> copy/equalfuncs update, etc. New version is attached. Have it come any\n> closer to a committable state now?\n\nI have not yet reviewed this patch in details (I have that on my\nTODO), but at quick glance what you have here is rather close to what\nI'd expect to be committable as the tablespace OID assignment from\nyour patch is consistent in the REINDEX code paths with the existing\nALTER TABLE handling.\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 17:05:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "For your v7 patch, which handles REINDEX to a new tablespace, I have a few\nminor comments:\n\n+ * the relation will be rebuilt. If InvalidOid is used, the default\n\n=> should say \"currrent\", not default ?\n\n+++ b/doc/src/sgml/ref/reindex.sgml\n+ <term><literal>TABLESPACE</literal></term>\n...\n+ <term><replaceable class=\"parameter\">new_tablespace</replaceable></term>\n\n=> I saw you split the description of TABLESPACE from new_tablespace based on\ncomment earlier in the thread, but I suggest that the descriptions for these\nshould be merged, like:\n\n+ <varlistentry>\n+ <term><literal>TABLESPACE</literal><replaceable class=\"parameter\">new_tablespace</replaceable></term>\n+ <listitem>\n+ <para>\n+ Allow specification of a tablespace where all rebuilt indexes will be created.\n+ Cannot be used with \"mapped\" relations. If <literal>SCHEMA</literal>,\n+ <literal>DATABASE</literal> or <literal>SYSTEM</literal> are specified, then\n+ all unsuitable relations will be skipped and a single <literal>WARNING</literal>\n+ will be generated.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nThe existing patch is very natural, especially the parts in the original patch\nhandling vacuum full and cluster. Those were removed to concentrate on\nREINDEX, and based on comments that it might be nice if ALTER handled CLUSTER\nand VACUUM FULL. On a separate thread, I brought up the idea of ALTER using\nclustered order. 
Tom pointed out some issues with my implementation, but\ndidn't like the idea, either.\n\nSo I suggest to re-include the CLUSTER/VAC FULL parts as a separate 0002 patch,\nthe same way they were originally implemented.\n\nBTW, I think if \"ALTER\" were updated to support REINDEX (to allow multiple\noperations at once), it might be either:\n|ALTER INDEX i SET TABLESPACE , REINDEX -- to reindex a single index on a given tlbspc\nor\n|ALTER TABLE tbl REINDEX USING INDEX TABLESPACE spc; -- to reindex all inds on table inds moved to a given tblspc\n\"USING INDEX TABLESPACE\" is already used for ALTER..ADD column/table CONSTRAINT.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Feb 2020 10:48:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-02-11 19:48, Justin Pryzby wrote:\n> For your v7 patch, which handles REINDEX to a new tablespace, I have a \n> few\n> minor comments:\n> \n> + * the relation will be rebuilt. If InvalidOid is used, the default\n> \n> => should say \"currrent\", not default ?\n> \n\nYes, it keeps current index tablespace in that case, thanks.\n\n> \n> +++ b/doc/src/sgml/ref/reindex.sgml\n> + <term><literal>TABLESPACE</literal></term>\n> ...\n> + <term><replaceable \n> class=\"parameter\">new_tablespace</replaceable></term>\n> \n> => I saw you split the description of TABLESPACE from new_tablespace \n> based on\n> comment earlier in the thread, but I suggest that the descriptions for \n> these\n> should be merged, like:\n> \n> + <varlistentry>\n> + <term><literal>TABLESPACE</literal><replaceable\n> class=\"parameter\">new_tablespace</replaceable></term>\n> + <listitem>\n> + <para>\n> + Allow specification of a tablespace where all rebuilt indexes\n> will be created.\n> + Cannot be used with \"mapped\" relations. If \n> <literal>SCHEMA</literal>,\n> + <literal>DATABASE</literal> or <literal>SYSTEM</literal> are\n> specified, then\n> + all unsuitable relations will be skipped and a single\n> <literal>WARNING</literal>\n> + will be generated.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n\nIt sounds good to me, but here I just obey the structure, which is used \nall around. Documentation of ALTER TABLE/DATABASE, REINDEX and many \nothers describes each literal/parameter in a separate entry, e.g. \nnew_tablespace. So I would prefer to keep it as it is for now.\n\n> \n> The existing patch is very natural, especially the parts in the \n> original patch\n> handling vacuum full and cluster. Those were removed to concentrate on\n> REINDEX, and based on comments that it might be nice if ALTER handled \n> CLUSTER\n> and VACUUM FULL. On a separate thread, I brought up the idea of ALTER \n> using\n> clustered order. 
Tom pointed out some issues with my implementation, \n> but\n> didn't like the idea, either.\n> \n> So I suggest to re-include the CLUSTER/VAC FULL parts as a separate \n> 0002 patch,\n> the same way they were originally implemented.\n> \n> BTW, I think if \"ALTER\" were updated to support REINDEX (to allow \n> multiple\n> operations at once), it might be either:\n> |ALTER INDEX i SET TABLESPACE , REINDEX -- to reindex a single index\n> on a given tlbspc\n> or\n> |ALTER TABLE tbl REINDEX USING INDEX TABLESPACE spc; -- to reindex all\n> inds on table inds moved to a given tblspc\n> \"USING INDEX TABLESPACE\" is already used for ALTER..ADD column/table \n> CONSTRAINT.\n> \n\nYes, I also think that allowing REINDEX/CLUSTER/VACUUM FULL to put \nresulting relation in a different tablespace is a very natural \noperation. However, I did a couple of attempts to integrate latter two \nwith ALTER TABLE and failed with it, since it is already complex enough. \nI am still willing to proceed with it, but not sure how soon it will be.\n\nAnyway, new version is attached. It is rebased in order to resolve \nconflicts with a recent fix of REINDEX CONCURRENTLY + temp relations, \nand includes this small comment fix.\n\n\nRegards\n--\nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 29 Feb 2020 15:35:27 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Feb 29, 2020 at 03:35:27PM +0300, Alexey Kondratov wrote:\n> Anyway, new version is attached. It is rebased in order to resolve conflicts\n> with a recent fix of REINDEX CONCURRENTLY + temp relations, and includes\n> this small comment fix.\n\nThanks for rebasing - I actually started to do that yesterday.\n\nI extracted the bits from your original 0001 patch which handled CLUSTER and\nVACUUM FULL. I don't think if there's any interest in combining that with\nALTER anymore. On another thread (1), I tried to implement that, and Tom\npointed out problem with the implementation, but also didn't like the idea.\n\nI'm including some proposed fixes, but didn't yet update the docs, errors or\ntests for that. (I'm including your v8 untouched in hopes of not messing up\nthe cfbot). My fixes avoid an issue if you try to REINDEX onto pg_default, I\nthink due to moving system toast indexes.\n\ntemplate1=# REINDEX DATABASE template1 TABLESPACE pg_default;\n2020-02-29 08:01:41.835 CST [23382] WARNING: cannot change tablespace of indexes for mapped relations, skipping all\nWARNING: cannot change tablespace of indexes for mapped relations, skipping all\n2020-02-29 08:01:41.894 CST [23382] ERROR: SMgrRelation hashtable corrupted\n2020-02-29 08:01:41.894 CST [23382] STATEMENT: REINDEX DATABASE template1 TABLESPACE pg_default;\n2020-02-29 08:01:41.894 CST [23382] WARNING: AbortTransaction while in COMMIT state\n2020-02-29 08:01:41.895 CST [23382] PANIC: cannot abort transaction 491, it was already committed\n\n-- \nJustin\n\n(1) https://www.postgresql.org/message-id/flat/20200208150453.GV403%40telsasoft.com",
"msg_date": "Sat, 29 Feb 2020 08:53:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Feb 29, 2020 at 08:53:04AM -0600, Justin Pryzby wrote:\n> On Sat, Feb 29, 2020 at 03:35:27PM +0300, Alexey Kondratov wrote:\n> > Anyway, new version is attached. It is rebased in order to resolve conflicts\n> > with a recent fix of REINDEX CONCURRENTLY + temp relations, and includes\n> > this small comment fix.\n> \n> Thanks for rebasing - I actually started to do that yesterday.\n> \n> I extracted the bits from your original 0001 patch which handled CLUSTER and\n> VACUUM FULL. I don't think if there's any interest in combining that with\n> ALTER anymore. On another thread (1), I tried to implement that, and Tom\n> pointed out problem with the implementation, but also didn't like the idea.\n> \n> I'm including some proposed fixes, but didn't yet update the docs, errors or\n> tests for that. (I'm including your v8 untouched in hopes of not messing up\n> the cfbot). My fixes avoid an issue if you try to REINDEX onto pg_default, I\n> think due to moving system toast indexes.\n\nI was able to avoid this issue by adding a call to GetNewRelFileNode, even\nthough that's already called by RelationSetNewRelfilenode(). Not sure if\nthere's a better way, or if it's worth Alexey's v3 patch which added a\ntablespace param to RelationSetNewRelfilenode.\n\nThe current logic allows moving all the indexes and toast indexes, but I think\nwe should use IsSystemRelation() unless allow_system_table_mods, like existing\nbehavior of ALTER.\n\ntemplate1=# ALTER TABLE pg_extension_oid_index SET tablespace pg_default;\nERROR: permission denied: \"pg_extension_oid_index\" is a system catalog\ntemplate1=# REINDEX INDEX pg_extension_oid_index TABLESPACE pg_default;\nREINDEX\n\nFinally, I think the CLUSTER is missing permission checks. 
It looks like\nrelation_is_movable was factored out, but I don't see how that helps?\n\nAlexey, I'm hoping to hear back if you think these changes are ok or if you'll\npublish a new version of the patch addressing the crash I reported.\nOr if you're too busy, maybe someone else can adopt the patch (I can help).\n\n-- \nJustin",
"msg_date": "Mon, 9 Mar 2020 15:04:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi Justin,\n\nOn 09.03.2020 23:04, Justin Pryzby wrote:\n> On Sat, Feb 29, 2020 at 08:53:04AM -0600, Justin Pryzby wrote:\n>> On Sat, Feb 29, 2020 at 03:35:27PM +0300, Alexey Kondratov wrote:\n>>> Anyway, new version is attached. It is rebased in order to resolve conflicts\n>>> with a recent fix of REINDEX CONCURRENTLY + temp relations, and includes\n>>> this small comment fix.\n>> Thanks for rebasing - I actually started to do that yesterday.\n>>\n>> I extracted the bits from your original 0001 patch which handled CLUSTER and\n>> VACUUM FULL. I don't think if there's any interest in combining that with\n>> ALTER anymore. On another thread (1), I tried to implement that, and Tom\n>> pointed out problem with the implementation, but also didn't like the idea.\n>>\n>> I'm including some proposed fixes, but didn't yet update the docs, errors or\n>> tests for that. (I'm including your v8 untouched in hopes of not messing up\n>> the cfbot). My fixes avoid an issue if you try to REINDEX onto pg_default, I\n>> think due to moving system toast indexes.\n> I was able to avoid this issue by adding a call to GetNewRelFileNode, even\n> though that's already called by RelationSetNewRelfilenode(). Not sure if\n> there's a better way, or if it's worth Alexey's v3 patch which added a\n> tablespace param to RelationSetNewRelfilenode.\n\nDo you have any understanding of what exactly causes this error? I have \ntried to debug it a little bit, but still cannot figure out why we need \nthis extra GetNewRelFileNode() call and a mechanism how it helps.\n\nProbably you mean v4 patch. 
Yes, interestingly, if we do everything at \nonce inside RelationSetNewRelfilenode(), then there is no issue at all with:\n\nREINDEX DATABASE template1 TABLESPACE pg_default;\n\nIt feels like I am doing monkey coding here, so I want to understand \nit better :)\n\n> The current logic allows moving all the indexes and toast indexes, but I think\n> we should use IsSystemRelation() unless allow_system_table_mods, like existing\n> behavior of ALTER.\n>\n> template1=# ALTER TABLE pg_extension_oid_index SET tablespace pg_default;\n> ERROR: permission denied: \"pg_extension_oid_index\" is a system catalog\n> template1=# REINDEX INDEX pg_extension_oid_index TABLESPACE pg_default;\n> REINDEX\n\nYeah, we definitely should obey the same rules as ALTER TABLE / INDEX in \nmy opinion.\n\n> Finally, I think the CLUSTER is missing permission checks. It looks like\n> relation_is_movable was factored out, but I don't see how that helps ?\n\nI did this relation_is_movable refactoring in order to share the same \ncheck between REINDEX + TABLESPACE and ALTER INDEX + SET TABLESPACE. \nThen I realized that REINDEX already has its own temp tables check and \ndoes mapped relations validation in multiple places, so I just added \nglobal tablespace checks instead. Thus, relation_is_movable seems to be \noutdated right now. Probably, we will have to do another refactoring here \nonce all proper validations are accumulated in this patch set.\n\n> Alexey, I'm hoping to hear back if you think these changes are ok or if you'll\n> publish a new version of the patch addressing the crash I reported.\n> Or if you're too busy, maybe someone else can adopt the patch (I can help).\n\nSorry for the late response, I was not going to abandon this patch, but \nwas a bit busy last month.\n\nMany thanks for your review and fixups! There are some inconsistencies \nlike mentions of SET TABLESPACE in error messages and so on. 
I am going \nto refactor and include your fixes 0003-0004 into 0001 and 0002, but \nkeep 0005 separated for now, since this part requires more understanding \nIMO (and comparison with v4 implementation).\n\nThat way, I am going to prepare a more clear patch set till the middle \nof the next week. I will be glad to receive more feedback from you then.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Thu, 12 Mar 2020 20:08:46 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 08:08:46PM +0300, Alexey Kondratov wrote:\n> On 09.03.2020 23:04, Justin Pryzby wrote:\n>> On Sat, Feb 29, 2020 at 08:53:04AM -0600, Justin Pryzby wrote:\n>>> On Sat, Feb 29, 2020 at 03:35:27PM +0300, Alexey Kondratov wrote:\n>>> tests for that. (I'm including your v8 untouched in hopes of not messing up\n>>> the cfbot). My fixes avoid an issue if you try to REINDEX onto pg_default, I\n>>> think due to moving system toast indexes.\n>> I was able to avoid this issue by adding a call to GetNewRelFileNode, even\n>> though that's already called by RelationSetNewRelfilenode(). Not sure if\n>> there's a better way, or if it's worth Alexey's v3 patch which added a\n>> tablespace param to RelationSetNewRelfilenode.\n> \n> Do you have any understanding of what exactly causes this error? I have\n> tried to debug it a little bit, but still cannot figure out why we need this\n> extra GetNewRelFileNode() call and a mechanism how it helps.\n\nThe PANIC is from smgr hashtable, which couldn't find an entry it expected. My\nvery tentative understanding is that smgr is prepared to handle a *relation*\nwhich is dropped/recreated multiple times in a transaction, but it's *not*\nprepared to deal with a given RelFileNode(Backend) being dropped/recreated,\nsince that's used as a hash key.\n\nI revisited it and solved it in a somewhat nicer way. It's still not clear to\nme if there's an issue with your original way of adding a tablespace parameter\nto RelationSetNewRelfilenode().\n\n> Probably you mean v4 patch. Yes, interestingly, if we do everything at once\n> inside RelationSetNewRelfilenode(), then there is no issue at all with:\n\nYes, I meant to say \"worth revisiting the v4 patch\".\n\n> Many thanks for you review and fixups! There are some inconsistencies like\n> mentions of SET TABLESPACE in error messages and so on. 
I am going to\n> refactor and include your fixes 0003-0004 into 0001 and 0002, but keep 0005\n> separated for now, since this part requires more understanding IMO (and\n> comparison with v4 implementation).\n\nI'd suggest to keep the CLUSTER/VACUUM FULL separate from REINDEX, in case\nMichael or someone else wants to progress one but cannot commit to both. But\nprobably we should plan to finish this in July.\n\n-- \nJustin",
"msg_date": "Wed, 25 Mar 2020 18:40:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
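Justin's smgr explanation in the preceding message can be sketched with a toy model. This is illustrative Python, not PostgreSQL's actual smgr C code; the class, field names, and filenode numbers are all invented here, and the real machinery is considerably more involved:

```python
class SmgrCache:
    """Toy stand-in for smgr's open-relation hashtable, keyed by relfilenode."""
    def __init__(self):
        self.open_rels = {}       # keyed by relfilenode, as in the real hashtable
        self.pending_drops = []   # filenodes queued for unlink at commit

    def create(self, relfilenode):
        assert relfilenode not in self.open_rels, "filenode already open"
        self.open_rels[relfilenode] = object()

    def drop(self, relfilenode):
        del self.open_rels[relfilenode]
        self.pending_drops.append(relfilenode)

cache = SmgrCache()

# Safe pattern: each recreation gets a fresh filenode (what the extra
# GetNewRelFileNode() call guarantees).
cache.create(1001)
cache.drop(1001)
cache.create(1002)   # new key, no collision with the pending drop of 1001

# Hazardous pattern: drop and recreate under the *same* filenode in one
# transaction.  The new entry now shadows a filenode queued for unlink,
# so commit-time cleanup would hit the new relation's file.
cache.create(2001)
cache.drop(2001)
cache.create(2001)
assert 2001 in cache.pending_drops and 2001 in cache.open_rels
```

The point of the sketch is only that a relation can be dropped and recreated repeatedly, but a given filenode key cannot, because the key doubles as the identity of a file scheduled for unlink.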
{
"msg_contents": "On 2020-03-26 02:40, Justin Pryzby wrote:\n> On Thu, Mar 12, 2020 at 08:08:46PM +0300, Alexey Kondratov wrote:\n>> On 09.03.2020 23:04, Justin Pryzby wrote:\n>>> On Sat, Feb 29, 2020 at 08:53:04AM -0600, Justin Pryzby wrote:\n>>>> On Sat, Feb 29, 2020 at 03:35:27PM +0300, Alexey Kondratov wrote:\n>>>> tests for that. (I'm including your v8 untouched in hopes of not \n>>>> messing up\n>>>> the cfbot). My fixes avoid an issue if you try to REINDEX onto \n>>>> pg_default, I\n>>>> think due to moving system toast indexes.\n>>> I was able to avoid this issue by adding a call to GetNewRelFileNode, \n>>> even\n>>> though that's already called by RelationSetNewRelfilenode(). Not \n>>> sure if\n>>> there's a better way, or if it's worth Alexey's v3 patch which added \n>>> a\n>>> tablespace param to RelationSetNewRelfilenode.\n>> \n>> Do you have any understanding of what exactly causes this error? I \n>> have\n>> tried to debug it a little bit, but still cannot figure out why we \n>> need this\n>> extra GetNewRelFileNode() call and a mechanism how it helps.\n> \n> The PANIC is from smgr hashtable, which couldn't find an entry it \n> expected. My\n> very tentative understanding is that smgr is prepared to handle a \n> *relation*\n> which is dropped/recreated multiple times in a transaction, but it's \n> *not*\n> prepared to deal with a given RelFileNode(Backend) being \n> dropped/recreated,\n> since that's used as a hash key.\n> \n> I revisited it and solved it in a somewhat nicer way.\n> \n\nI included your new solution regarding this part from 0004 into 0001. It \nseems that at least a tip of the problem was in that we tried to change \ntablespace to pg_default being already there.\n\n> \n> It's still not clear to\n> me if there's an issue with your original way of adding a tablespace \n> parameter\n> to RelationSetNewRelfilenode().\n> \n\nYes, it is not clear for me too.\n\n> \n>> Many thanks for you review and fixups! 
There are some inconsistencies \n>> like\n>> mentions of SET TABLESPACE in error messages and so on. I am going to\n>> refactor and include your fixes 0003-0004 into 0001 and 0002, but keep \n>> 0005\n>> separated for now, since this part requires more understanding IMO \n>> (and\n>> comparison with v4 implementation).\n> \n> I'd suggest to keep the CLUSTER/VACUUM FULL separate from REINDEX, in \n> case\n> Michael or someone else wants to progress one but cannot commit to \n> both.\n> \n\nYes, sure, I did not have plans to melt everything into a single patch.\n\nSo, it has taken much longer to understand and rework all these fixes \nand permission validations. Attached is the updated patch set.\n\n0001:\n — It is mostly the same, but refactored\n — I also included your most recent fix for REINDEX DATABASE with \nallow_system_table_mods=1\n — With this patch REINDEX + TABLESPACE simply errors out when an index on \na TOAST table is met and allow_system_table_mods=0\n\n0002:\n — I reworked it a bit, since REINDEX CONCURRENTLY is not allowed on a \nsystem catalog anyway, that is checked at the higher levels of statement \nprocessing. 
So we have to care about TOAST relations\n — Also added the same check into the plain REINDEX\n — It works fine, but I am not entirely happy that, with this patch, \nerrors/warnings are a bit inconsistent:\n\ntemplate1=# REINDEX INDEX CONCURRENTLY pg_toast.pg_toast_12773_index \nTABLESPACE pg_default;\nWARNING: skipping tablespace change of \"pg_toast_12773_index\"\nDETAIL: Cannot move system relation, only REINDEX CONCURRENTLY is \nperformed.\n\ntemplate1=# REINDEX TABLE CONCURRENTLY pg_toast.pg_toast_12773 \nTABLESPACE pg_default;\nERROR: permission denied: \"pg_toast_12773\" is a system catalog\n\nAnd REINDEX DATABASE CONCURRENTLY will generate a warning again.\n\nMaybe we should always throw a warning and do only the reindex if it is not \npossible to change the tablespace?\n\n0003:\n — I have gotten rid of some of the previous refactoring pieces, like \ncheck_relation_is_movable, for now. Let all these validations settle, \nand then we can think about whether we could do it better\n — Added CLUSTER to copy/equalfuncs\n — Cleaned up messages and comments\n\nI hope that I did not forget anything from your proposals.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 26 Mar 2020 20:09:15 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "> I included your new solution regarding this part from 0004 into 0001. It\n> seems that at least a tip of the problem was in that we tried to change\n> tablespace to pg_default being already there.\n\nRight, causing it to try to drop that filenode twice.\n\n> +++ b/doc/src/sgml/ref/cluster.sgml\n> + The name of a specific tablespace to store clustered relations.\n\nCould you phrase these like you did in the comments:\n\" the name of the tablespace where the clustered relation is to be rebuilt.\"\n\n> +++ b/doc/src/sgml/ref/reindex.sgml\n> + The name of a specific tablespace to store rebuilt indexes.\n\n\" The name of a tablespace where indexes will be rebuilt\"\n\n> +++ b/doc/src/sgml/ref/vacuum.sgml\n> + The name of a specific tablespace to write a new copy of the table.\n\n> + This specifies a tablespace, where all rebuilt indexes will be created.\n\nsay \"specifies the tablespace where\", with no comma.\n\n> +\t\t\telse if (!OidIsValid(classtuple->relfilenode))\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * Skip all mapped relations.\n> +\t\t\t\t * relfilenode == 0 checks after that, similarly to\n> +\t\t\t\t * RelationIsMapped().\n\nI would say \"OidIsValid(relfilenode) checks for that, ...\"\n\n> @@ -262,7 +280,7 @@ cluster(ClusterStmt *stmt, bool isTopLevel)\n> * and error messages should refer to the operation as VACUUM not CLUSTER.\n> */\n> void\n> -cluster_rel(Oid tableOid, Oid indexOid, int options)\n> +cluster_rel(Oid tableOid, Oid indexOid, Oid tablespaceOid, int options)\n\nAdd a comment here about the tablespaceOid parameter, like the other functions\nwhere it's added.\n\nThe permission checking is kind of duplicative, so I'd suggest to factor it\nout. Ideally we'd only have one place that checks for pg_global/system/mapped.\nIt needs to check that it's not a system relation, or that system_table_mods\nare allowed, and in any case, if it's a mapped rel, that it's not being\nmoved. 
I would pass a boolean indicating if the tablespace is being changed.\n\nAnother issue is this:\n> +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\nAs you mentioned in your v1 patch, in the other cases, \"tablespace\n[tablespace]\" is added at the end of the command rather than in the middle. I\nwasn't able to make that work, maybe because \"tablespace\" isn't a fully\nreserved word (?). I didn't try with \"SET TABLESPACE\", although I understand\nit'd be better without \"SET\".\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 Mar 2020 13:01:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
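The factored-out permission check Justin suggests above could look roughly like the following sketch. This is hedged Python pseudologic, not PostgreSQL's C code; `check_relation_tablespace_move` and the dict fields are invented names that only mirror the predicates discussed in the thread:

```python
def check_relation_tablespace_move(rel, *, allow_system_table_mods=False):
    """Raise if rel's tablespace may not be changed.  Mapped relations can
    never be moved; system relations only under allow_system_table_mods."""
    if rel["is_mapped"]:
        raise PermissionError(f'cannot move mapped relation "{rel["name"]}"')
    if rel["is_system"] and not allow_system_table_mods:
        raise PermissionError(
            f'permission denied: "{rel["name"]}" is a system catalog')

user_table = {"name": "t", "is_system": False, "is_mapped": False}
toast_index = {"name": "pg_toast_12773_index", "is_system": True,
               "is_mapped": False}

check_relation_tablespace_move(user_table)  # ok: plain user relation

moved = True
try:
    check_relation_tablespace_move(toast_index)
except PermissionError:
    moved = False
assert not moved  # matches the ALTER behavior quoted earlier in the thread

# With allow_system_table_mods, the same relation may be moved:
check_relation_tablespace_move(toast_index, allow_system_table_mods=True)
```

The design point is simply that CLUSTER, VACUUM FULL, and REINDEX would all call this one predicate instead of each repeating the system/mapped checks.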
{
"msg_contents": "On 2020-03-26 21:01, Justin Pryzby wrote:\n> \n>> @@ -262,7 +280,7 @@ cluster(ClusterStmt *stmt, bool isTopLevel)\n>> * and error messages should refer to the operation as VACUUM not \n>> CLUSTER.\n>> */\n>> void\n>> -cluster_rel(Oid tableOid, Oid indexOid, int options)\n>> +cluster_rel(Oid tableOid, Oid indexOid, Oid tablespaceOid, int \n>> options)\n> \n> Add a comment here about the tablespaceOid parameter, like the other \n> functions\n> where it's added.\n> \n> The permission checking is kind of duplicitive, so I'd suggest to \n> factor it\n> out. Ideally we'd only have one place that checks for \n> pg_global/system/mapped.\n> It needs to check that it's not a system relation, or that \n> system_table_mods\n> are allowed, and in any case that if it's a mapped rel, that it's not \n> being\n> moved. I would pass a boolean indicating if the tablespace is being \n> changed.\n> \n\nYes, but I wanted to make sure first that all necessary validations are \nthere, so as not to miss something as I did last time. I do not like \nrepetitive code either, so I would like to introduce a more common check \nafter reviewing the code as a whole.\n\n> \n> Another issue is this:\n>> +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable \n>> class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable \n>> class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> As you mentioned in your v1 patch, in the other cases, \"tablespace\n> [tablespace]\" is added at the end of the command rather than in the \n> middle. I\n> wasn't able to make that work, maybe because \"tablespace\" isn't a fully\n> reserved word (?). I didn't try with \"SET TABLESPACE\", although I \n> understand\n> it'd be better without \"SET\".\n> \n\nInitially I tried \"SET TABLESPACE\", but also failed to completely get \nrid of shift/reduce conflicts. I will try to rewrite VACUUM's part again \nwith OptTableSpace. 
Maybe I will manage it this time.\n\nI will take into account all your text edits as well.\n\n\nThanks\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 26 Mar 2020 22:22:08 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "> Another issue is this:\n> > +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> As you mentioned in your v1 patch, in the other cases, \"tablespace\n> [tablespace]\" is added at the end of the command rather than in the middle. I\n> wasn't able to make that work, maybe because \"tablespace\" isn't a fully\n> reserved word (?). I didn't try with \"SET TABLESPACE\", although I understand\n> it'd be better without \"SET\".\n\nI think we should use the parenthesized syntax for vacuum - it seems clear in\nhindsight.\n\nPossibly REINDEX should use that, too, instead of adding OptTablespace at the\nend. I'm not sure.\n\nCLUSTER doesn't support parenthesized syntax, but .. maybe it should?\n\nAlso, perhaps VAC FULL (and CLUSTER, if it grows parenthesized syntax), should\nsupport something like this:\n\nUSING INDEX TABLESPACE name\n\nI guess I would prefer just \"index tablespace\", without \"using\":\n\n|VACUUM(FULL, TABLESPACE ts, INDEX TABLESPACE its) t;\n|CLUSTER(VERBOSE, TABLESPACE ts, INDEX TABLESPACE its) t;\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 Mar 2020 23:01:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 11:01:06PM -0500, Justin Pryzby wrote:\n> > Another issue is this:\n> > > +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> > As you mentioned in your v1 patch, in the other cases, \"tablespace\n> > [tablespace]\" is added at the end of the command rather than in the middle. I\n> > wasn't able to make that work, maybe because \"tablespace\" isn't a fully\n> > reserved word (?). I didn't try with \"SET TABLESPACE\", although I understand\n> > it'd be better without \"SET\".\n> \n> I think we should use the parenthesized syntax for vacuum - it seems clear in\n> hindsight.\n\nI implemented this last night but forgot to attach it.\n\n> Possibly REINDEX should use that, too, instead of adding OptTablespace at the\n> end. I'm not sure.\n> \n> CLUSTER doesn't support parenthesized syntax, but .. maybe it should?\n> \n> Also, perhaps VAC FULL (and CLUSTER, if it grows parenthesized syntax), should\n> support something like this:\n> \n> USING INDEX TABLESPACE name\n> \n> I guess I would prefer just \"index tablespace\", without \"using\":\n> \n> |VACUUM(FULL, TABLESPACE ts, INDEX TABLESPACE its) t;\n> |CLUSTER(VERBOSE, TABLESPACE ts, INDEX TABLESPACE its) t;",
"msg_date": "Fri, 27 Mar 2020 15:15:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 11:01:06PM -0500, Justin Pryzby wrote:\n> > Another issue is this:\n> > > +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> > As you mentioned in your v1 patch, in the other cases, \"tablespace\n> > [tablespace]\" is added at the end of the command rather than in the middle. I\n> > wasn't able to make that work, maybe because \"tablespace\" isn't a fully\n> > reserved word (?). I didn't try with \"SET TABLESPACE\", although I understand\n> > it'd be better without \"SET\".\n> \n> I think we should use the parenthesized syntax for vacuum - it seems clear in\n> hindsight.\n> \n> Possibly REINDEX should use that, too, instead of adding OptTablespace at the\n> end. I'm not sure.\n\nThe attached mostly implements generic parenthesized options to REINDEX(...),\nso I'm soliciting opinions: should TABLESPACE be implemented in parenthesized\nsyntax or non?\n\n> CLUSTER doesn't support parenthesized syntax, but .. maybe it should?\n\nAnd this ?\n\n-- \nJustin",
"msg_date": "Fri, 27 Mar 2020 19:11:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-03-28 03:11, Justin Pryzby wrote:\n> On Thu, Mar 26, 2020 at 11:01:06PM -0500, Justin Pryzby wrote:\n>> > Another issue is this:\n>> > > +VACUUM ( FULL [, ...] ) [ TABLESPACE <replaceable class=\"parameter\">new_tablespace</replaceable> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n>> > As you mentioned in your v1 patch, in the other cases, \"tablespace\n>> > [tablespace]\" is added at the end of the command rather than in the middle. I\n>> > wasn't able to make that work, maybe because \"tablespace\" isn't a fully\n>> > reserved word (?). I didn't try with \"SET TABLESPACE\", although I understand\n>> > it'd be better without \"SET\".\n> \n\nSET does not change anything in my experience. The problem is that \nopt_vacuum_relation_list is... optional and TABLESPACE is not a fully \nreserved word (why?) as you correctly noted. I have managed to put \nTABLESPACE to the end, but with vacuum_relation_list, like:\n\n| VACUUM opt_full opt_freeze opt_verbose opt_analyze \nvacuum_relation_list TABLESPACE name\n| VACUUM '(' vac_analyze_option_list ')' vacuum_relation_list TABLESPACE \nname\n\nIt means that one would not be able to do VACUUM FULL of the entire \ndatabase + TABLESPACE change. I do not think that it is a common \nscenario, but this limitation would be very annoying, wouldn't it?\n\n> \n>> \n>> I think we should use the parenthesized syntax for vacuum - it seems \n>> clear in\n>> hindsight.\n>> \n>> Possibly REINDEX should use that, too, instead of adding OptTablespace \n>> at the\n>> end. I'm not sure.\n> \n> The attached mostly implements generic parenthesized options to \n> REINDEX(...),\n> so I'm soliciting opinions: should TABLESPACE be implemented in \n> parenthesized\n> syntax or non?\n> \n>> CLUSTER doesn't support parenthesized syntax, but .. maybe it should?\n> \n> And this ?\n> \n\nHmm, I went through the SQL commands in Postgres that are well known to \nme, and a bit more. 
A parenthesized options list is mostly used in two common cases:\n\n- In the beginning for boolean options only, e.g. VACUUM\n- In the end for options of various types, but accompanied by WITH, \ne.g. COPY, CREATE SUBSCRIPTION\n\nMoreover, TABLESPACE is already used in CREATE TABLE/INDEX in the same \nway I did in 0001-0002. That way, putting the TABLESPACE option into the \nparenthesized options list does not look convenient or \nsemantically correct, so I do not like it. Maybe others will have a \ndifferent opinion.\n\nPutting it into the WITH (...) options list looks like an option to me. \nHowever, doing it only for VACUUM will ruin the consistency, while doing \nit for CLUSTER and REINDEX is not necessary, so I do not like it either.\n\nTo summarize, currently I see only 2 + 1 extra options:\n\n1) Keep everything with syntax as it is in 0001-0002\n2) Implement tail syntax for VACUUM, but with a limitation for VACUUM FULL \nof the entire database + TABLESPACE change\n3) Change TABLESPACE to a fully reserved word\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Mon, 30 Mar 2020 21:02:22 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
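The grammar limitation Alexey describes above, an optional opt_vacuum_relation_list followed by an unreserved TABLESPACE keyword, can be illustrated with a toy decision function. This is a deliberate simplification in Python, not the real bison/LALR(1) machinery, and the function name is invented:

```python
def parse_tail(tokens, lookahead=1):
    """Decide whether a statement tail such as 'tablespace ts' starts a
    TABLESPACE clause or names a relation.  Because TABLESPACE is not a
    reserved word, the right answer depends on the *next* token, which a
    one-token-lookahead parser (like bison's LALR(1)) never sees at the
    moment it must commit to a rule."""
    if tokens and tokens[0].lower() == "tablespace":
        if lookahead >= 2:
            # Another token follows -> trailing clause; otherwise the
            # word is an ordinary relation name.
            return "clause" if len(tokens) >= 2 else "relation"
        raise ValueError("shift/reduce conflict: cannot decide on one token")
    return "relation"

assert parse_tail(["tablespace", "ts"], lookahead=2) == "clause"
assert parse_tail(["tablespace"], lookahead=2) == "relation"

conflict = False
try:
    parse_tail(["tablespace", "ts"])  # one token of lookahead, as in bison
except ValueError:
    conflict = True
assert conflict
```

This is also why option 3 in the summary (making TABLESPACE fully reserved) dissolves the conflict: the word could then never be a relation name, so no lookahead beyond it would be needed.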
{
"msg_contents": "On Mon, Mar 30, 2020 at 09:02:22PM +0300, Alexey Kondratov wrote:\n> Hmm, I went through the well known to me SQL commands in Postgres and a bit\n> more. Parenthesized options list is mostly used in two common cases:\n\nThere's also ANALYZE(VERBOSE), REINDEX(VERBOSE).\nThere was debate a year ago [0] as to whether to make \"reindex CONCURRENTLY\" a\nseparate command, or to use parenthesized syntax \"REINDEX (CONCURRENTLY)\". I\nwould propose to support that now (and implemented that locally).\n\n..and explain(...)\n\n> - In the beginning for boolean options only, e.g. VACUUM\n\nYou're right that those are currently boolean, but note that explain(FORMAT ..)\nis not boolean.\n\n> Putting it into the WITH (...) options list looks like an option to me.\n> However, doing it only for VACUUM will ruin the consistency, while doing it\n> for CLUSTER and REINDEX is not necessary, so I do not like it either.\n\nIt's not necessary but I think it's a more flexible way to add new\nfunctionality (requiring no changes to the grammar for vacuum, and for\nREINDEX/CLUSTER it would allow future options to avoid changing the grammar).\n\nIf we use parenthesized syntax for vacuum, my proposal is to do it for REINDEX, and\nconsider adding parenthesized syntax for cluster, too.\n\n> To summarize, currently I see only 2 + 1 extra options:\n> \n> 1) Keep everything with syntax as it is in 0001-0002\n> 2) Implement tail syntax for VACUUM, but with limitation for VACUUM FULL of\n> the entire database + TABLESPACE change\n> 3) Change TABLESPACE to a fully reserved word\n\n+ 4) Use parenthesized syntax for all three.\n\nNote, I mentioned that maybe VACUUM/CLUSTER should support not only \"TABLESPACE\nfoo\" but also \"INDEX TABLESPACE bar\" (I would use that, too). 
I think that\nwould be easy to implement, and for sure it would suggest using () for both.\n(For sure we don't want to implement \"VACUUM t TABLESPACE foo\" now, and then\nlater implement \"INDEX TABLESPACE bar\" and realize that for consistency we\ncannot parenthesize it.)\n\nMichael ? Alvaro ? Robert ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Mar 2020 13:34:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-03-30 21:34, Justin Pryzby wrote:\n> On Mon, Mar 30, 2020 at 09:02:22PM +0300, Alexey Kondratov wrote:\n>> Hmm, I went through the well known to me SQL commands in Postgres and \n>> a bit\n>> more. Parenthesized options list is mostly used in two common cases:\n> \n> There's also ANALYZE(VERBOSE), REINDEX(VERBOSE).\n> There was debate a year ago [0] as to whether to make \"reindex \n> CONCURRENTLY\" a\n> separate command, or to use parenthesized syntax \"REINDEX \n> (CONCURRENTLY)\". I\n> would propose to support that now (and implemented that locally).\n> \n\nI am fine with allowing REINDEX (CONCURRENTLY), but then we will have to \nsupport both syntaxes as we already do for VACUUM. Anyway, if we agree \nto add parenthesized options to REINDEX/CLUSTER, then it should be done \nas a separate patch before the current patch set.\n\n> \n> ..and explain(...)\n> \n>> - In the beginning for boolean options only, e.g. VACUUM\n> \n> You're right that those are currently boolean, but note that \n> explain(FORMAT ..)\n> is not boolean.\n> \n\nYep, I forgot EXPLAIN, this is a good example.\n\n> \n> .. and create table (LIKE ..)\n> \n\nLIKE is used in the table definition, so it is a slightly different \ncase.\n\n> \n>> Putting it into the WITH (...) 
options list looks like an option to \n>> me.\n>> However, doing it only for VACUUM will ruin the consistency, while \n>> doing it\n>> for CLUSTER and REINDEX is not necessary, so I do not like it either.\n> \n> It's not necessary but I think it's a more flexible way to add new\n> functionality (requiring no changes to the grammar for vacuum, and for\n> REINDEX/CLUSTER it would allow future options to avoid changing the \n> grammar).\n> \n> If we use parenthesized syntax for vacuum, my proposal is to do it for\n> REINDEX, and\n> consider adding parenthesized syntax for cluster, too.\n> \n>> To summarize, currently I see only 2 + 1 extra options:\n>> \n>> 1) Keep everything with syntax as it is in 0001-0002\n>> 2) Implement tail syntax for VACUUM, but with limitation for VACUUM \n>> FULL of\n>> the entire database + TABLESPACE change\n>> 3) Change TABLESPACE to a fully reserved word\n> \n> + 4) Use parenthesized syntax for all three.\n> \n> Note, I mentioned that maybe VACUUM/CLUSTER should support not only \n> \"TABLESPACE\n> foo\" but also \"INDEX TABLESPACE bar\" (I would use that, too). I think \n> that\n> would be easy to implement, and for sure it would suggest using () for \n> both.\n> (For sure we don't want to implement \"VACUUM t TABLESPACE foo\" now, and \n> then\n> later implement \"INDEX TABLESPACE bar\" and realize that for consistency \n> we\n> cannot parenthesize it.\n> \n> Michael ? Alvaro ? Robert ?\n> \n\nYes, I would be glad to hear other opinions too, before doing this \npreliminary refactoring.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Tue, 31 Mar 2020 13:56:07 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 01:56:07PM +0300, Alexey Kondratov wrote:\n> I am fine with allowing REINDEX (CONCURRENTLY), but then we will have to\n> support both syntaxes as we already do for VACUUM. Anyway, if we agree to\n> add parenthesized options to REINDEX/CLUSTER, then it should be done as a\n> separated patch before the current patch set.\n\nLast year for the patch for REINDEX CONCURRENTLY, we had the argument\nof supporting only the parenthesized grammar or not, and the choice\nhas been made to use what we have now, as you mentioned upthread. I\nwould honestly prefer that from now on we only add the parenthesized\nversion of an option if something new is added to such utility\ncommands (vacuum, analyze, reindex, etc.) as that's much more\nextensible from the point of view of the parser. And this, even if\nyou need to rework things a bit more around\nreindex_option_elem for the tablespace option proposed here.\n--\nMichael",
"msg_date": "Wed, 1 Apr 2020 15:03:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
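Michael's extensibility argument is the usual reason the parenthesized form wins: a new option changes only the option handling, not the grammar. A rough sketch of that idea, a toy parser in Python rather than PostgreSQL's actual reindex_option_elem handling, with the option names taken from Justin's earlier proposal:

```python
def parse_options(text):
    """Parse a parenthesized utility-command option list, e.g.
    "(VERBOSE, TABLESPACE ts, INDEX_TABLESPACE its)", into a dict.
    Bare words become boolean flags; "NAME value" pairs carry an
    argument.  Adding a new option requires no change here at all."""
    inner = text.strip()
    assert inner.startswith("(") and inner.endswith(")"), "expected (...)"
    opts = {}
    for item in inner[1:-1].split(","):
        parts = item.split()
        if len(parts) == 1:
            opts[parts[0].upper()] = True        # e.g. VERBOSE, FULL
        elif len(parts) == 2:
            opts[parts[0].upper()] = parts[1]    # e.g. TABLESPACE ts
        else:
            raise ValueError(f"bad option: {item!r}")
    return opts

assert parse_options("(VERBOSE, TABLESPACE ts, INDEX_TABLESPACE its)") == \
    {"VERBOSE": True, "TABLESPACE": "ts", "INDEX_TABLESPACE": "its"}
```

The contrast with the tail syntax discussed earlier is the point: the generic list never collides with relation names, so the unreserved-keyword problem simply does not arise inside the parentheses.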
{
"msg_contents": "On Wed, Apr 01, 2020 at 03:03:34PM +0900, Michael Paquier wrote:\n> On Tue, Mar 31, 2020 at 01:56:07PM +0300, Alexey Kondratov wrote:\n> > I am fine with allowing REINDEX (CONCURRENTLY), but then we will have to\n> > support both syntaxes as we already do for VACUUM. Anyway, if we agree to\n> > add parenthesized options to REINDEX/CLUSTER, then it should be done as a\n> > separated patch before the current patch set.\n> \n> would honestly prefer that for now on we only add the parenthesized\n> version of an option if something new is added to such utility\n> commands (vacuum, analyze, reindex, etc.) as that's much more\n> extensible from the point of view of the parser. And this, even if\n> you need to rework things a bit more things around\n> reindex_option_elem for the tablespace option proposed here.\n\nThanks for your input.\n\nI'd already converted VACUUM and REINDEX to use a parenthesized TABLESPACE\noption, and just converted CLUSTER to take an option list and do the same.\n\nAlexey suggested that those changes should be done as a separate patch, with\nthe tablespace options built on top. Which makes sense. I had quite some fun\nrebasing these with patches in that order.\n\nHowever, I've kept my changes separate from Alexey's patch, to make it easier\nfor him to integrate. So there's \"fix!\" commits which are not logically\nseparate and should be read as if they're merged with their parent commits.\nThat makes the patchset look kind of dirty. So I'm first going to send the\n\"before rebase\" patchset. There's a few fixme items, but I think this is in\npretty good shape, and I'd appreciate review.\n\nI'll follow up later with the \"after rebase\" patchset. Maybe Alexey will want\nto integrate that.\n\nI claimed it would be easy, so I also implemented (INDEX_TABESPACE ..) option:\n\ntemplate1=# VACUUM (TABLESPACE pg_default, INDEX_TABLESPACE ts, FULL) t;\ntemplate1=# CLUSTER (TABLESPACE pg_default, INDEX_TABLESPACE ts) t;\n\n-- \nJustin",
"msg_date": "Wed, 1 Apr 2020 06:57:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Apr 01, 2020 at 06:57:18AM -0500, Justin Pryzby wrote:\n> Alexey suggested that those changes should be done as a separate patch, with\n> the tablespace options built on top. Which makes sense. I had quite some fun\n> rebasing these with patches in that order.\n> \n> However, I've kept my changes separate from Alexey's patch, to make it easier\n> for him to integrate. So there's \"fix!\" commits which are not logically\n> separate and should be read as if they're merged with their parent commits.\n> That makes the patchset look kind of dirty. So I'm first going to send the\n> \"before rebase\" patchset. There's a few fixme items, but I think this is in\n> pretty good shape, and I'd appreciate review.\n> \n> I'll follow up later with the \"after rebase\" patchset. \n\nAttached. As I said, the v15 patches might be easier to review, even though\nv16 is closer to what's desirable to merge.\n\n> Maybe Alexey will want to integrate that.\n\nOr maybe you'd want me to squish my changes into yours and resend after any\nreview comments ?\n\n-- \nJustin",
"msg_date": "Wed, 1 Apr 2020 08:08:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Apr 01, 2020 at 08:08:36AM -0500, Justin Pryzby wrote:\n> Or maybe you'd want me to squish my changes into yours and resend after any\n> review comments ?\n\nI didn't hear any feedback, so I've now squished all \"parenthesized\" and \"fix\"\ncommits. 0004 reduces duplicative error handling, as a separate commit so\nAlexey can review it and/or integrate it. The last two commits save a few\ndozen lines of code, but not sure they're desirable.\n\nAs this changes REINDEX/CLUSTER to allow parenthesized options, it might be\npretty reasonable if someone were to kick this to the July CF.\n\n-- \nJustin",
"msg_date": "Fri, 3 Apr 2020 13:27:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-04-03 21:27, Justin Pryzby wrote:\n> On Wed, Apr 01, 2020 at 08:08:36AM -0500, Justin Pryzby wrote:\n>> Or maybe you'd want me to squish my changes into yours and resend \n>> after any\n>> review comments ?\n> \n> I didn't hear any feedback, so I've now squished all \"parenthesized\" \n> and \"fix\"\n> commits.\n> \n\nThanks for the input, but I am afraid that the patch set became a bit \nmessy now. I have eyeballed it and found some inconsistencies.\n\n \tconst char *name;\t\t\t/* name of database to reindex */\n-\tint\t\t\toptions;\t\t/* Reindex options flags */\n+\tList\t\t*rawoptions;\t\t/* Raw options */\n+\tint\t\toptions;\t\t\t/* Parsed options */\n \tbool\t\tconcurrent;\t\t/* reindex concurrently? */\n\nYou introduced rawoptions in the 0002, but then removed it in 0003. So \nis it required or not? Probably this is a rebase artefact.\n\n+/* XXX: reusing reindex_option_list */\n+\t\t\t| CLUSTER opt_verbose '(' reindex_option_list ')' qualified_name \ncluster_index_specification\n\nCould we actually simply reuse vac_analyze_option_list? From the first \nsight it does just the right thing, excepting the special handling of \nspelling ANALYZE/ANALYSE, but it does not seem to be a problem.\n\n> \n> 0004 reduces duplicative error handling, as a separate commit so\n> Alexey can review it and/or integrate it.\n> \n\n@@ -2974,27 +2947,6 @@ ReindexRelationConcurrently(Oid relationOid, Oid \ntablespaceOid, int options)\n\t/* Open relation to get its indexes */\n\theapRelation = table_open(relationOid, ShareUpdateExclusiveLock);\n-\t/*\n-\t * We don't support moving system relations into different \ntablespaces,\n-\t * unless allow_system_table_mods=1.\n-\t */\n-\tif (OidIsValid(tablespaceOid) &&\n-\t\t!allowSystemTableMods && IsSystemRelation(heapRelation))\n-\t\tereport(ERROR,\n-\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n-\t\t\t\terrmsg(\"permission denied: \\\"%s\\\" is a system catalog\",\n-\t\t\t\t\t\tRelationGetRelationName(heapRelation))));\n\nReindexRelationConcurrently is used for all cases, but it hits different \ncode paths in the case of database, table and index. I have not checked \nyet, but are you sure it is safe removing these validations in the case \nof REINDEX CONCURRENTLY?\n\n> \n> The last two commits save a few\n> dozen lines of code, but not sure they're desirable.\n> \n\nSincerely, I do not think that passing raw strings down to the guts is a \ngood idea. Yes, it saves us a few checks here and there now, but it may \nreduce a further reusability of these internal routines in the future.\n\n> \n> XXX: for cluster/vacuum, it might be more friendly to check before \n> clustering\n> the table, rather than after clustering and re-indexing.\n> \n\nYes, I think it would be much more user-friendly.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Mon, 06 Apr 2020 20:43:46 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Apr 06, 2020 at 08:43:46PM +0300, Alexey Kondratov wrote:\n> Thanks for the input, but I am afraid that the patch set became a bit messy\n> now. I have eyeballed it and found some inconsistencies.\n> \n> \tconst char *name;\t\t\t/* name of database to reindex */\n> -\tint\t\t\toptions;\t\t/* Reindex options flags */\n> +\tList\t\t*rawoptions;\t\t/* Raw options */\n> +\tint\t\toptions;\t\t\t/* Parsed options */\n> \tbool\t\tconcurrent;\t\t/* reindex concurrently? */\n> \n> You introduced rawoptions in the 0002, but then removed it in 0003. So is it\n> required or not? Probably this is a rebase artefact.\n\nYou're right; I first implemented REINDEX() and when I later did CLUSTER(), I\ndid it better, so I went back and did REINDEX() that way, but it looks like I\nmaybe fixup!ed the wrong commit. Fixed now.\n\n> +/* XXX: reusing reindex_option_list */\n> +\t\t\t| CLUSTER opt_verbose '(' reindex_option_list ')' qualified_name\n> cluster_index_specification\n> \n> Could we actually simply reuse vac_analyze_option_list? From the first sight\n> it does just the right thing, excepting the special handling of spelling\n> ANALYZE/ANALYSE, but it does not seem to be a problem.\n\nHm, do you mean to let cluster.c reject the other options like \"analyze\" ?\nI'm not sure why that would be better than reusing reindex?\nI think the suggestion will probably be to just copy+paste the reindex option\nlist and rename it to cluster (possibly with the explanation that they're\nseparate and independant and so their behavior shouldn't be tied together).\n\n> > 0004 reduces duplicative error handling, as a separate commit so\n> > Alexey can review it and/or integrate it.\n> \n> ReindexRelationConcurrently is used for all cases, but it hits different\n> code paths in the case of database, table and index. I have not checked yet,\n> but are you sure it is safe removing these validations in the case of\n> REINDEX CONCURRENTLY?\n\nYou're right about the pg_global case, fixed. System catalogs can't be\nreindexed CONCURRENTLY, so they're already caught by that check.\n\n> > XXX: for cluster/vacuum, it might be more friendly to check before\n> > clustering\n> > the table, rather than after clustering and re-indexing.\n> \n> Yes, I think it would be much more user-friendly.\n\nI realized it's not needed or useful to check indexes in advance of clustering,\nsince 1) a mapped index will be on a mapped relation, which is already checked;\n2) a system index will be on a system relation. Right ?\n\n-- we already knew that\nts=# SELECT COUNT(1) FROM pg_index i JOIN pg_class a ON i.indrelid=a.oid JOIN pg_class b ON i.indexrelid=b.oid WHERE a.relnamespace!=b.relnamespace;\ncount | 0\n\n-- not true in general, but true here and true for system relations\nts=# SELECT COUNT(1) FROM pg_index i JOIN pg_class a ON i.indrelid=a.oid JOIN pg_class b ON i.indexrelid=b.oid WHERE a.reltablespace != b.reltablespace;\ncount | 0\n\n-- \nJustin",
"msg_date": "Mon, 6 Apr 2020 13:44:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-04-06 21:44, Justin Pryzby wrote:\n> On Mon, Apr 06, 2020 at 08:43:46PM +0300, Alexey Kondratov wrote:\n>> \n>> +/* XXX: reusing reindex_option_list */\n>> +\t\t\t| CLUSTER opt_verbose '(' reindex_option_list ')' qualified_name\n>> cluster_index_specification\n>> \n>> Could we actually simply reuse vac_analyze_option_list? From the first \n>> sight\n>> it does just the right thing, excepting the special handling of \n>> spelling\n>> ANALYZE/ANALYSE, but it does not seem to be a problem.\n> \n> Hm, do you mean to let cluster.c reject the other options like \n> \"analyze\" ?\n> I'm not sure why that would be better than reusing reindex?\n> I think the suggestion will probably be to just copy+paste the reindex \n> option\n> list and rename it to cluster (possibly with the explanation that \n> they're\n> separate and independant and so their behavior shouldn't be tied \n> together).\n> \n\nI mean to literally use vac_analyze_option_list for reindex and cluster \nas well. Please, check attached 0007. Now, vacuum, reindex and cluster \nfilter options list and reject everything that is not supported, so it \nseems completely fine to just reuse vac_analyze_option_list, doesn't it?\n\n>> \n>> ReindexRelationConcurrently is used for all cases, but it hits \n>> different\n>> code paths in the case of database, table and index. I have not \n>> checked yet,\n>> but are you sure it is safe removing these validations in the case of\n>> REINDEX CONCURRENTLY?\n> \n> You're right about the pg_global case, fixed. System catalogs can't be\n> reindexed CONCURRENTLY, so they're already caught by that check.\n> \n>> > XXX: for cluster/vacuum, it might be more friendly to check before\n>> > clustering\n>> > the table, rather than after clustering and re-indexing.\n>> \n>> Yes, I think it would be much more user-friendly.\n> \n> I realized it's not needed or useful to check indexes in advance of \n> clustering,\n> since 1) a mapped index will be on a mapped relation, which is already \n> checked;\n> 2) a system index will be on a system relation. Right ?\n> \n\nYes, it seems that you are right. I have tried to create user index on \nsystem relation with allow_system_table_mods=1, but this new index \nappeared to become system as well. That way, we do not have to check \nindexes in advance.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 07 Apr 2020 15:40:18 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Apr 07, 2020 at 03:40:18PM +0300, Alexey Kondratov wrote:\n> On 2020-04-06 21:44, Justin Pryzby wrote:\n> > On Mon, Apr 06, 2020 at 08:43:46PM +0300, Alexey Kondratov wrote:\n> > > \n> > > +/* XXX: reusing reindex_option_list */\n> > > +\t\t\t| CLUSTER opt_verbose '(' reindex_option_list ')' qualified_name\n> > > cluster_index_specification\n> > > \n> > > Could we actually simply reuse vac_analyze_option_list? From the first\n> > > sight it does just the right thing, excepting the special handling of\n> > > spelling ANALYZE/ANALYSE, but it does not seem to be a problem.\n> > \n> > Hm, do you mean to let cluster.c reject the other options like \"analyze\" ?\n> > I'm not sure why that would be better than reusing reindex? I think the\n> > suggestion will probably be to just copy+paste the reindex option list and\n> > rename it to cluster (possibly with the explanation that they're separate\n> > and independant and so their behavior shouldn't be tied together).\n> \n> I mean to literally use vac_analyze_option_list for reindex and cluster as\n> well. Please, check attached 0007. Now, vacuum, reindex and cluster filter\n> options list and reject everything that is not supported, so it seems\n> completely fine to just reuse vac_analyze_option_list, doesn't it?\n\nIt's fine with me :)\n\nPossibly we could rename vac_analyze_option_list as generic_option_list.\n\nI'm resending the patchset like that, and fixed cluster/index to handle not\njust \"VERBOSE\" but \"verbose OFF\", rather than just ignoring the argument.\n\nThat's the last known issue with the patch. I doubt anyone will elect to pick\nit up in the next 8 hours, but I think it's in very good shape for v14 :)\n\nBTW, if you resend a *.patch or *.diff file to a thread, it's best to also\ninclude all the previous patches. Otherwise the CF bot is likely to complain\nthat the patch \"doesn't apply\", or else it'll only test the one patch instead\nof the whole series.\nhttp://cfbot.cputube.org/alexey-kondratov.html\n\n-- \nJustin",
"msg_date": "Tue, 7 Apr 2020 15:44:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Apr 07, 2020 at 03:44:06PM -0500, Justin Pryzby wrote:\n> > I mean to literally use vac_analyze_option_list for reindex and cluster as\n> > well. Please, check attached 0007. Now, vacuum, reindex and cluster filter\n> > options list and reject everything that is not supported, so it seems\n> > completely fine to just reuse vac_analyze_option_list, doesn't it?\n> \n> It's fine with me :)\n> \n> Possibly we could rename vac_analyze_option_list as generic_option_list.\n> \n> I'm resending the patchset like that, and fixed cluster/index to handle not\n> just \"VERBOSE\" but \"verbose OFF\", rather than just ignoring the argument.\n> \n> That's the last known issue with the patch. I doubt anyone will elect to pick\n> it up in the next 8 hours, but I think it's in very good shape for v14 :)\n\nI tweaked some comments and docs and plan to mark this RfC.\n\n-- \nJustin",
"msg_date": "Sat, 11 Apr 2020 20:33:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Apr 11, 2020 at 08:33:52PM -0500, Justin Pryzby wrote:\n> On Tue, Apr 07, 2020 at 03:44:06PM -0500, Justin Pryzby wrote:\n>> That's the last known issue with the patch. I doubt anyone will elect to pick\n>> it up in the next 8 hours, but I think it's in very good shape for v14 :)\n> \n> I tweaked some comments and docs and plan to mark this RfC.\n\nYeah, unfortunately this will have to wait at least until v14 opens\nfor business :(\n--\nMichael",
"msg_date": "Sun, 12 Apr 2020 10:45:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Apr 11, 2020 at 08:33:52PM -0500, Justin Pryzby wrote:\n> > That's the last known issue with the patch. I doubt anyone will elect to pick\n> > it up in the next 8 hours, but I think it's in very good shape for v14 :)\n> \n> I tweaked some comments and docs and plan to mark this RfC.\n\nRebased onto d12bdba77b0fce9df818bc84ad8b1d8e7a96614b\n\nRestored two tests from Alexey's original patch which exposed issue with\nREINDEX DATABASE when allow_system_table_mods=off.\n\n-- \nJustin",
"msg_date": "Sun, 26 Apr 2020 12:56:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sun, Apr 26, 2020 at 12:56:14PM -0500, Justin Pryzby wrote:\n> Rebased onto d12bdba77b0fce9df818bc84ad8b1d8e7a96614b\n> \n> Restored two tests from Alexey's original patch which exposed issue with\n> REINDEX DATABASE when allow_system_table_mods=off.\n\nI have been looking at 0001 as a start, and your patch is incorrect on\na couple of aspects for the completion of REINDEX:\n- \"(\" is not proposed as a completion option after the initial\nREINDEX, and I think that it should.\n- completion gets incorrect for all the commands once a parenthesized\nlist of options is present, as CONCURRENTLY goes missing.\n\nThe presence of CONCURRENTLY makes the completion a bit more complex\nthan the other commands, as we need to add this keyword if still not\nspecified with the other objects of the wanted type to reindex, but it\ncan be done as the attached. What do you think?\n--\nMichael",
"msg_date": "Sun, 9 Aug 2020 20:02:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 08:02:52PM +0900, Michael Paquier wrote:\n> I have been looking at 0001 as a start, and your patch is incorrect on\n> a couple of aspects for the completion of REINDEX:\n> - \"(\" is not proposed as a completion option after the initial\n> REINDEX, and I think that it should.\n\nThat part of your patch handles REINDEX and REINDEX(*) differently than mine.\nYours is technically more correct/complete. But, I recall Tom objected a\ndifferent patch because of completing to a single char. I think the case is\narguable either way: if only some completions are shown, then it hides the\nothers..\nhttps://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\n\n- else if (Matches(\"REINDEX\") || Matches(\"REINDEX\", \"(*)\"))\n+ else if (Matches(\"REINDEX\"))\n+ COMPLETE_WITH(\"TABLE\", \"INDEX\", \"SYSTEM\", \"SCHEMA\", \"DATABASE\", \"(\");\n+ else if (Matches(\"REINDEX\", \"(*)\"))\n COMPLETE_WITH(\"TABLE\", \"INDEX\", \"SYSTEM\", \"SCHEMA\", \"DATABASE\");\n\n> - completion gets incorrect for all the commands once a parenthesized\n> list of options is present, as CONCURRENTLY goes missing.\n\nThe rest of your patch looks fine. In my mind, REINDEX(CONCURRENTLY) was the\n\"new way\" to write things, and it's what's easy to support, so I think I didn't\nput special effort into making tab completion itself complete.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 9 Aug 2020 21:24:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 09:24:43PM -0500, Justin Pryzby wrote:\n> That part of your patch handles REINDEX and REINDEX(*) differently than mine.\n> Yours is technically more correct/complete. But, I recall Tom objected a\n> different patch because of completing to a single char. I think the case is\n> arguable either way: if only some completions are shown, then it hides the\n> others..\n> https://www.postgresql.org/message-id/14255.1536781029@sss.pgh.pa.us\n\nThanks for the reference. Indeed, I can see this argument going both\nways. Now showing \"(\" after typing REINDEX as a completion option\ndoes not let the user know that parenthesized options are supported,\nbut on the contrary this can also clutter the output. The latter\nmakes more sense now to be consistent with VACUUM and ANALYZE though,\nso I have removed that part, and applied the patch.\n\n> The rest of your patch looks fine. In my mind, REINDEX(CONCURRENTLY) was the\n> \"new way\" to write things, and it's what's easy to support, so I think I didn't\n> put special effort into making tab completion itself complete.\n\nThe grammar that has been committed was the one that for the most\nsupport, so we need to live with that. I wonder if we should simplify\nReindexStmt and move the \"concurrent\" flag to be under \"options\", but\nthat may not be worth the time spent on as long as we don't have\nCONCURRENTLY part of the parenthesized grammar.\n--\nMichael",
"msg_date": "Tue, 11 Aug 2020 14:39:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 02:39:45PM +0900, Michael Paquier wrote:\n> so I have removed that part, and applied the patch.\n\nThank you\n\n> > The rest of your patch looks fine. In my mind, REINDEX(CONCURRENTLY) was the\n> > \"new way\" to write things, and it's what's easy to support, so I think I didn't\n> > put special effort into making tab completion itself complete.\n> \n> The grammar that has been committed was the one that for the most\n> support, so we need to live with that. I wonder if we should simplify\n> ReindexStmt and move the \"concurrent\" flag to be under \"options\", but\n> that may not be worth the time spent on as long as we don't have\n> CONCURRENTLY part of the parenthesized grammar.\n\nI think it's kind of a good idea, since the next patch does exactly that\n(parenthesize (CONCURRENTLY)).\n\nI included that as a new 0002, but it doesn't save anything though, so maybe\nit's not a win.\n\n$ git diff --stat\n src/backend/commands/indexcmds.c | 20 +++++++++++---------\n src/backend/nodes/copyfuncs.c | 1 -\n src/backend/nodes/equalfuncs.c | 1 -\n src/backend/parser/gram.y | 16 ++++++++++++----\n src/backend/tcop/utility.c | 6 +++---\n src/include/nodes/parsenodes.h | 2 +-\n 6 files changed, 27 insertions(+), 19 deletions(-)\n\n-- \nJustin",
"msg_date": "Tue, 11 Aug 2020 02:09:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "This patch seems to be missing a call to RelationAssumeNewRelfilenode() in\nreindex_index().\n\nThat's maybe the related to the cause of the crashes I pointed out earlier this\nyear.\n\nAlexey's v4 patch changed RelationSetNewRelfilenode() to accept a tablespace\nparameter, but Michael seemed to object to that. However that seems cleaner\nand ~30 line shorter.\n\nMichael, would you comment on that ? The v4 patch and your comments are here.\nhttps://www.postgresql.org/message-id/attachment/105574/v4-0001-Allow-REINDEX-and-REINDEX-CONCURRENTLY-to-change-tablespace.patch\nhttps://www.postgresql.org/message-id/20191127035416.GG5435%40paquier.xyz\n\n> --- a/src/backend/catalog/index.c\n> +++ b/src/backend/catalog/index.c\n> @@ -3480,6 +3518,47 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,\n> \t */\n> \tCheckTableNotInUse(iRel, \"REINDEX INDEX\");\n> \n> +\tif (tablespaceOid == MyDatabaseTableSpace)\n> +\t\ttablespaceOid = InvalidOid;\n> +\n> +\t/*\n> +\t * Set the new tablespace for the relation. Do that only in the\n> +\t * case where the reindex caller wishes to enforce a new tablespace.\n> +\t */\n> +\tif (set_tablespace &&\n> +\t\ttablespaceOid != iRel->rd_rel->reltablespace)\n> +\t{\n> +\t\tRelation\t\tpg_class;\n> +\t\tForm_pg_class\trd_rel;\n> +\t\tHeapTuple\t\ttuple;\n> +\n> +\t\t/* First get a modifiable copy of the relation's pg_class row */\n> +\t\tpg_class = table_open(RelationRelationId, RowExclusiveLock);\n> +\n> +\t\ttuple = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(indexId));\n> +\t\tif (!HeapTupleIsValid(tuple))\n> +\t\t\telog(ERROR, \"cache lookup failed for relation %u\", indexId);\n> +\t\trd_rel = (Form_pg_class) GETSTRUCT(tuple);\n> +\n> +\t\t/*\n> +\t\t * Mark the relation as ready to be dropped at transaction commit,\n> +\t\t * before making visible the new tablespace change so as this won't\n> +\t\t * miss things.\n> +\t\t */\n> +\t\tRelationDropStorage(iRel);\n> +\n> +\t\t/* Update the pg_class row */\n> +\t\trd_rel->reltablespace = tablespaceOid;\n> +\t\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n> +\n> +\t\theap_freetuple(tuple);\n> +\n> +\t\ttable_close(pg_class, RowExclusiveLock);\n> +\n> +\t\t/* Make sure the reltablespace change is visible */\n> +\t\tCommandCounterIncrement();\n\n\n\n",
"msg_date": "Tue, 1 Sep 2020 05:12:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-09-01 13:12, Justin Pryzby wrote:\n> This patch seems to be missing a call to RelationAssumeNewRelfilenode() \n> in\n> reindex_index().\n> \n> That's maybe the related to the cause of the crashes I pointed out \n> earlier this\n> year.\n> \n> Alexey's v4 patch changed RelationSetNewRelfilenode() to accept a \n> tablespace\n> parameter, but Michael seemed to object to that. However that seems \n> cleaner\n> and ~30 line shorter.\n> \n> Michael, would you comment on that ? The v4 patch and your comments \n> are here.\n> https://www.postgresql.org/message-id/attachment/105574/v4-0001-Allow-REINDEX-and-REINDEX-CONCURRENTLY-to-change-tablespace.patch\n> https://www.postgresql.org/message-id/20191127035416.GG5435%40paquier.xyz\n> \n\nActually, the last time we discussed this point I only got the gut \nfeeling that this is a subtle place and it is very easy to break things \nwith these changes. However, it isn't clear for me how exactly. That \nway, I'd be glad if Michael could reword his explanation, so it'd more \nclear for me as well.\n\nBTW, I've started doing a review of the last patch set yesterday and \nwill try to post some comments later.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Tue, 01 Sep 2020 13:36:38 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Aug-11, Justin Pryzby wrote:\n> On Tue, Aug 11, 2020 at 02:39:45PM +0900, Michael Paquier wrote:\n\n> > The grammar that has been committed was the one that for the most\n> > support, so we need to live with that. I wonder if we should simplify\n> > ReindexStmt and move the \"concurrent\" flag to be under \"options\", but\n> > that may not be worth the time spent on as long as we don't have\n> > CONCURRENTLY part of the parenthesized grammar.\n> \n> I think it's kind of a good idea, since the next patch does exactly that\n> (parenthesize (CONCURRENTLY)).\n> \n> I included that as a new 0002, but it doesn't save anything though, so maybe\n> it's not a win.\n\nThe advantage of using a parenthesized option list is that you can add\n*further* options without making the new keywords reserved. Of course,\nwe already reserve CONCURRENTLY and VERBOSE pretty severely, so there's\nno change. If you wanted REINDEX FLUFFY then it wouldn't work without\nmaking that at least type_func_name_keyword I think; but REINDEX\n(FLUFFY) would work just fine. And of course the new feature at hand\ncan be implemented.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 1 Sep 2020 11:40:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 11:40:18AM -0400, Alvaro Herrera wrote:\n> On 2020-Aug-11, Justin Pryzby wrote:\n> > On Tue, Aug 11, 2020 at 02:39:45PM +0900, Michael Paquier wrote:\n> \n> > > The grammar that has been committed was the one that for the most\n> > > support, so we need to live with that. I wonder if we should simplify\n> > > ReindexStmt and move the \"concurrent\" flag to be under \"options\", but\n> > > that may not be worth the time spent on as long as we don't have\n> > > CONCURRENTLY part of the parenthesized grammar.\n> > \n> > I think it's kind of a good idea, since the next patch does exactly that\n> > (parenthesize (CONCURRENTLY)).\n> > \n> > I included that as a new 0002, but it doesn't save anything though, so maybe\n> > it's not a win.\n> \n> The advantage of using a parenthesized option list is that you can add\n> *further* options without making the new keywords reserved. Of course,\n> we already reserve CONCURRENTLY and VERBOSE pretty severely, so there's\n> no change. If you wanted REINDEX FLUFFY then it wouldn't work without\n> making that at least type_func_name_keyword I think; but REINDEX\n> (FLUFFY) would work just fine. And of course the new feature at hand\n> can be implemented.\n\nThe question isn't whether to use a parenthesized option list. I realized that\nlong ago (even though Alexey didn't initially like it). Check 0002, which gets\nrid of \"bool concurrent\" in favour of stmt->options&REINDEXOPT_CONCURRENT.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Sep 2020 10:43:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Sep-01, Justin Pryzby wrote:\n\n> On Tue, Sep 01, 2020 at 11:40:18AM -0400, Alvaro Herrera wrote:\n\n> > The advantage of using a parenthesized option list is that you can add\n> > *further* options without making the new keywords reserved. Of course,\n> > we already reserve CONCURRENTLY and VERBOSE pretty severely, so there's\n> > no change. If you wanted REINDEX FLUFFY then it wouldn't work without\n> > making that at least type_func_name_keyword I think; but REINDEX\n> > (FLUFFY) would work just fine. And of course the new feature at hand\n> > can be implemented.\n> \n> The question isn't whether to use a parenthesized option list. I realized that\n> long ago (even though Alexey didn't initially like it). Check 0002, which gets\n> rid of \"bool concurrent\" in favour of stmt->options&REINDEXOPT_CONCURRENT.\n\nAh! I see, sorry for the noise. Well, respectfully, having a separate\nboolean to store one option when you already have a bitmask for options\nis silly.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 1 Sep 2020 11:48:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 11:48:30AM -0400, Alvaro Herrera wrote:\n> On 2020-Sep-01, Justin Pryzby wrote:\n>> The question isn't whether to use a parenthesized option list. I realized that\n>> long ago (even though Alexey didn't initially like it). Check 0002, which gets\n>> rid of \"bool concurrent\" in favour of stmt->options&REINDEXOPT_CONCURRENT.\n> \n> Ah! I see, sorry for the noise. Well, respectfully, having a separate\n> boolean to store one option when you already have a bitmask for options\n> is silly.\n\nYeah, I am all for removing \"concurrent\" from ReindexStmt, but I don't\nthink that the proposed 0002 is that, because it is based on the\nassumption that we'd want more than just boolean-based options in\nthose statements, and this case is not justified yet except if it\nbecomes possible to enforce tablespaces. At this stage, I think that\nit is more sensible to just update gram.y and add a\nREINDEXOPT_CONCURRENTLY. I also think that it would also make sense\nto pass down \"options\" within ReindexIndexCallbackState() (for example\nimagine a SKIP_LOCKED for REINDEX).\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 10:00:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Sep-02, Michael Paquier wrote:\n\n> Yeah, I am all for removing \"concurrent\" from ReindexStmt, but I don't\n> think that the proposed 0002 is that, because it is based on the\n> assumption that we'd want more than just boolean-based options in\n> those statements, and this case is not justified yet except if it\n> becomes possible to enforce tablespaces. At this stage, I think that\n> it is more sensible to just update gram.y and add a\n> REINDEXOPT_CONCURRENTLY.\n\nYes, agreed. I had not seen the \"params\" business.\n\n> I also think that it would also make sense to pass down \"options\"\n> within ReindexIndexCallbackState() (for example imagine a SKIP_LOCKED\n> for REINDEX).\n\nSeems sensible, but only to be done when actually needed, right?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 1 Sep 2020 21:29:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 09:29:28PM -0400, Alvaro Herrera wrote:\n> Seems sensible, but only to be done when actually needed, right?\n\nOf course.\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 10:48:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Sep 02, 2020 at 10:00:12AM +0900, Michael Paquier wrote:\n> On Tue, Sep 01, 2020 at 11:48:30AM -0400, Alvaro Herrera wrote:\n> > On 2020-Sep-01, Justin Pryzby wrote:\n> >> The question isn't whether to use a parenthesized option list. I realized that\n> >> long ago (even though Alexey didn't initially like it). Check 0002, which gets\n> >> rid of \"bool concurrent\" in favour of stmt->options&REINDEXOPT_CONCURRENT.\n> > \n> > Ah! I see, sorry for the noise. Well, respectfully, having a separate\n> > boolean to store one option when you already have a bitmask for options\n> > is silly.\n> \n> Yeah, I am all for removing \"concurrent\" from ReindexStmt, but I don't\n> think that the proposed 0002 is that, because it is based on the\n> assumption that we'd want more than just boolean-based options in\n> those statements, and this case is not justified yet except if it\n> becomes possible to enforce tablespaces. At this stage, I think that\n> it is more sensible to just update gram.y and add a\n> REINDEXOPT_CONCURRENTLY. I also think that it would also make sense\n> to pass down \"options\" within ReindexIndexCallbackState() (for example\n> imagine a SKIP_LOCKED for REINDEX).\n\nUh, this whole thread is about implementing REINDEX (TABLESPACE foo), and the\npreliminary patch 0001 is to keep separate the tablespace parts of that\ncontent. 0002 is a minor tangent which I assume would be squished into 0001\nwhich cleans up historic cruft, using new params in favour of historic options.\n\nI think my change is probably incomplete, and ReindexStmt node should not have\nan int options. parse_reindex_params() would parse it into local int *options\nand char **tablespacename params.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Sep 2020 21:24:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 09:24:10PM -0500, Justin Pryzby wrote:\n> On Wed, Sep 02, 2020 at 10:00:12AM +0900, Michael Paquier wrote:\n> > On Tue, Sep 01, 2020 at 11:48:30AM -0400, Alvaro Herrera wrote:\n> > > On 2020-Sep-01, Justin Pryzby wrote:\n> > >> The question isn't whether to use a parenthesized option list. I realized that\n> > >> long ago (even though Alexey didn't initially like it). Check 0002, which gets\n> > >> rid of \"bool concurrent\" in favour of stmt->options&REINDEXOPT_CONCURRENT.\n> > > \n> > > Ah! I see, sorry for the noise. Well, respectfully, having a separate\n> > > boolean to store one option when you already have a bitmask for options\n> > > is silly.\n> > \n> > Yeah, I am all for removing \"concurrent\" from ReindexStmt, but I don't\n> > think that the proposed 0002 is that, because it is based on the\n> > assumption that we'd want more than just boolean-based options in\n> > those statements, and this case is not justified yet except if it\n> > becomes possible to enforce tablespaces. At this stage, I think that\n> > it is more sensible to just update gram.y and add a\n> > REINDEXOPT_CONCURRENTLY. I also think that it would also make sense\n> > to pass down \"options\" within ReindexIndexCallbackState() (for example\n> > imagine a SKIP_LOCKED for REINDEX).\n> \n> Uh, this whole thread is about implementing REINDEX (TABLESPACE foo), and the\n> preliminary patch 0001 is to keep separate the tablespace parts of that\n> content. 0002 is a minor tangent which I assume would be squished into 0001\n> which cleans up historic cruft, using new params in favour of historic options.\n> \n> I think my change is probably incomplete, and ReindexStmt node should not have\n> an int options. parse_reindex_params() would parse it into local int *options\n> and char **tablespacename params.\n\nDone in the attached, which is also rebased on 1d6541666.\n\nAnd added RelationAssumeNewRelfilenode() as I mentioned - but I'm hoping to\nhear from Michael about any reason not to call RelationSetNewRelfilenode()\ninstead of directly calling the things it would itself call.\n\n-- \nJustin",
"msg_date": "Tue, 1 Sep 2020 23:56:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-09-02 07:56, Justin Pryzby wrote:\n> \n> Done in the attached, which is also rebased on 1d6541666.\n> \n> And added RelationAssumeNewRelfilenode() as I mentioned - but I'm \n> hoping to\n> hear from Michael about any reason not to call \n> RelationSetNewRelfilenode()\n> instead of directly calling the things it would itself call.\n\nThe latest patch set immediately got a conflict with Michael's fixup \n01767533, so I've rebased it first of all.\n\n+ Prints a progress report as each table is clustered.\n+<!-- When specified within parenthesis, <literal>VERBOSE</literal> may \nbe followed by a boolean ...-->\n\nI think that we can remove this comment completely as it is already \nexplained in the docs later.\n\n+\t\t\t| CLUSTER opt_verbose '(' vac_analyze_option_list ')' qualified_name \ncluster_index_specification\n+\t\t\t\t{\n\nWhat's the point in allowing a mixture of old options with new \nparenthesized option list? VACUUM doesn't do so. I can understand it for \nREINDEX CONCURRENTLY, since parenthesized options were already there, \nbut not for CLUSTER.\n\nWith v23 it is possible to write:\n\nCLUSTER VERBOSE (VERBOSE) table USING ...\n\nwhich is untidy. Furthermore, 'CLUSTER VERBOSE (' is tab-completed to \n'CLUSTER VERBOSE (USING'. That way I propose to only allow either new \noptions or old similarly to the VACUUM. See attached 0002.\n\n-\t\t\tCOMPLETE_WITH(\"VERBOSE\");\n+\t\t\tCOMPLETE_WITH(\"TABLESPACE|VERBOSE\");\n\nTab completion in the CLUSTER was broken for parenthesized options, so \nI've fixed it in the 0005.\n\nAlso, I noticed that you used vac_analyze_option_list instead of \nreindex_option_list and I checked other option lists in the grammar. \nI've found that explain_option_list and vac_analyze_option_list are \nidentical, so it makes sense to leave just one of them and rename it to, \ne.g., common_option_list in order to use it everywhere needed (REINDEX, \nVACUUM, EXPLAIN, CLUSTER, ANALYZE). The whole grammar is already \ncomplicated enough to keep the exact duplicates and new options will be \nadded to the lists in the backend code, not parser. What do you think?\n\nIt is done in the 0007 attached. I think it should be applied altogether \nwith 0001 or before/after, but I put this as the last patch in the set \nin order to easier discard it if others would disagree.\n\nOtherwise, everything seems to be working fine. Cannot find any problems \nso far.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 03 Sep 2020 00:00:17 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 12:00:17AM +0300, Alexey Kondratov wrote:\n> On 2020-09-02 07:56, Justin Pryzby wrote:\n> > \n> > Done in the attached, which is also rebased on 1d6541666.\n> > \n> > And added RelationAssumeNewRelfilenode() as I mentioned - but I'm hoping\n> > to\n> > hear from Michael about any reason not to call\n> > RelationSetNewRelfilenode()\n> > instead of directly calling the things it would itself call.\n> \n> The latest patch set immediately got a conflict with Michael's fixup\n> 01767533, so I've rebased it first of all.\n\nOn my side, I've also rearranged function parameters to make the diff more\nreadable. And squishes your changes into the respective patches.\n\nMichael started a new thread about retiring ReindexStmt->concurrent, which I\nguess will cause more conflicts (although I don't see why we wouldn't implement\na generic List grammar now rather than only after a preliminary patch).\n\n-- \nJustin",
"msg_date": "Wed, 2 Sep 2020 18:07:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Sep 02, 2020 at 06:07:06PM -0500, Justin Pryzby wrote:\n> On my side, I've also rearranged function parameters to make the diff more\n> readable. And squishes your changes into the respective patches.\n\nThis resolves a breakage I failed to notice from a last-minute edit.\nAnd squishes two commits.\nAnd rebased on Michael's commit removing ReindexStmt->concurrent.\n\n-- \nJustin",
"msg_date": "Thu, 3 Sep 2020 21:43:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 09:43:51PM -0500, Justin Pryzby wrote:\n> And rebased on Michael's commit removing ReindexStmt->concurrent.\n\nRebased on a6642b3ae: Add support for partitioned tables and indexes in REINDEX\n\nSo now this includes the new functionality and test for reindexing a\npartitioned table onto a new tablespace. That part could use some additional\nreview.\n\nI guess this patch series will also conflict with the CLUSTER part of this\nother one. Once its CLUSTER patch is commited, this patch should to be updated\nto test clustering a partitioned table to a new tbspc.\nhttps://commitfest.postgresql.org/29/2584/\nREINDEX/CIC/CLUSTER of partitioned tables\n\n-- \nJustin",
"msg_date": "Tue, 8 Sep 2020 18:39:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Sep-08, Justin Pryzby wrote:\n\n> From 992e0121925c74d5c5a4e5b132cddb3d6b31da86 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 27 Mar 2020 17:50:46 -0500\n> Subject: [PATCH v27 1/5] Change REINDEX/CLUSTER to accept an option list..\n> \n> ..like EXPLAIN (..), VACUUM (..), and ANALYZE (..).\n> \n> Change docs in the style of VACUUM. See also: 52dcfda48778d16683c64ca4372299a099a15b96\n\nI don't understand why you change all options to DefElem instead of\nkeeping the bitmask for those options that can use it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 8 Sep 2020 21:02:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 09:02:38PM -0300, Alvaro Herrera wrote:\n> On 2020-Sep-08, Justin Pryzby wrote:\n> \n> > From 992e0121925c74d5c5a4e5b132cddb3d6b31da86 Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Fri, 27 Mar 2020 17:50:46 -0500\n> > Subject: [PATCH v27 1/5] Change REINDEX/CLUSTER to accept an option list..\n> > \n> > ..like EXPLAIN (..), VACUUM (..), and ANALYZE (..).\n> > \n> > Change docs in the style of VACUUM. See also: 52dcfda48778d16683c64ca4372299a099a15b96\n> \n> I don't understand why you change all options to DefElem instead of\n> keeping the bitmask for those options that can use it.\n\nThat's originally how I did it, too.\n\nInitially I added List *params, and Michael suggested to retire\nReindexStmt->concurrent. I provided a patch to do so, initially by leaving int\noptions and then, after this, removing it to \"complete the thought\", and get\nrid of the remnants of the \"old way\" of doing it. This is also how vacuum and\nexplain are done.\nhttps://www.postgresql.org/message-id/20200902022410.GA20149%40telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Sep 2020 19:17:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 07:17:58PM -0500, Justin Pryzby wrote:\n> Initially I added List *params, and Michael suggested to retire\n> ReindexStmt->concurrent. I provided a patch to do so, initially by leaving int\n> options and then, after this, removing it to \"complete the thought\", and get\n> rid of the remnants of the \"old way\" of doing it. This is also how vacuum and\n> explain are done.\n> https://www.postgresql.org/message-id/20200902022410.GA20149%40telsasoft.com\n\nDefining a set of DefElem when parsing and then using the int\n\"options\" with bitmasks where necessary at the beginning of the\nexecution looks like a good balance to me. This way, you can extend\nthe grammar to use things like (verbose = true), etc.\n\nBy the way, skimming through the patch set, I was wondering if we\ncould do the refactoring of patch 0005 as a first step, until I\nnoticed this part:\n+common_option_name:\n NonReservedWord { $$ = $1; }\n\t| analyze_keyword { $$ = \"analyze\"; }\nThis is not a good idea as you make ANALYZE an option available for\nall the commands involved in the refactoring. A portion of that could\nbe considered though, like the use of common_option_arg.\n--\nMichael",
"msg_date": "Wed, 9 Sep 2020 21:22:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-09-09 15:22, Michael Paquier wrote:\n> On Tue, Sep 08, 2020 at 07:17:58PM -0500, Justin Pryzby wrote:\n>> Initially I added List *params, and Michael suggested to retire\n>> ReindexStmt->concurrent. I provided a patch to do so, initially by \n>> leaving int\n>> options and then, after this, removing it to \"complete the thought\", \n>> and get\n>> rid of the remnants of the \"old way\" of doing it. This is also how \n>> vacuum and\n>> explain are done.\n>> https://www.postgresql.org/message-id/20200902022410.GA20149%40telsasoft.com\n> \n> Defining a set of DefElem when parsing and then using the int\n> \"options\" with bitmasks where necessary at the beginning of the\n> execution looks like a good balance to me. This way, you can extend\n> the grammar to use things like (verbose = true), etc.\n> \n> By the way, skimming through the patch set, I was wondering if we\n> could do the refactoring of patch 0005 as a first step\n> \n\nYes, I did it with intention to put as a first patch, but wanted to get \nsome feedback. It's easier to refactor the last patch without rebasing \nothers.\n\n> \n> until I\n> noticed this part:\n> +common_option_name:\n> NonReservedWord { $$ = $1; }\n> \t| analyze_keyword { $$ = \"analyze\"; }\n> This is not a good idea as you make ANALYZE an option available for\n> all the commands involved in the refactoring. A portion of that could\n> be considered though, like the use of common_option_arg.\n> \n\n From the grammar perspective ANY option is available for any command \nthat uses parenthesized option list. All the checks and validations are \nperformed at the corresponding command code.\nThis analyze_keyword is actually doing only an ANALYZE word \nnormalization if it's used as an option. Why it could be harmful?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 09 Sep 2020 16:03:45 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 09:22:00PM +0900, Michael Paquier wrote:\n> On Tue, Sep 08, 2020 at 07:17:58PM -0500, Justin Pryzby wrote:\n> > Initially I added List *params, and Michael suggested to retire\n> > ReindexStmt->concurrent. I provided a patch to do so, initially by leaving int\n> > options and then, after this, removing it to \"complete the thought\", and get\n> > rid of the remnants of the \"old way\" of doing it. This is also how vacuum and\n> > explain are done.\n> > https://www.postgresql.org/message-id/20200902022410.GA20149%40telsasoft.com\n> \n> Defining a set of DefElem when parsing and then using the int\n> \"options\" with bitmasks where necessary at the beginning of the\n> execution looks like a good balance to me. This way, you can extend\n> the grammar to use things like (verbose = true), etc.\n\nIt doesn't need to be extended - defGetBoolean already handles that. I don't\nsee what good can come from storing the information in two places in the same\nstructure.\n\n|postgres=# CLUSTER (VERBOSE on) pg_attribute USING pg_attribute_relid_attnum_index ;\n|INFO: clustering \"pg_catalog.pg_attribute\" using index scan on \"pg_attribute_relid_attnum_index\"\n|INFO: \"pg_attribute\": found 0 removable, 2968 nonremovable row versions in 55 pages\n|DETAIL: 0 dead row versions cannot be removed yet.\n|CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n|CLUSTER\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Sep 2020 10:36:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-09-09 18:36, Justin Pryzby wrote:\n> Rebased on a6642b3ae: Add support for partitioned tables and indexes in \n> REINDEX\n> \n> So now this includes the new functionality and test for reindexing a\n> partitioned table onto a new tablespace. That part could use some \n> additional\n> review.\n> \n\nI have finally had a look on your changes regarding partitioned tables.\n\n+set_rel_tablespace(Oid indexid, char *tablespace)\n+{\n+\tOid tablespaceOid = tablespace ? get_tablespace_oid(tablespace, false) \n:\n+\t\tInvalidOid;\n\nYou pass a tablespace name to set_rel_tablespace(), but it is already \nparsed into the Oid before. So I do not see why we need this extra work \nhere instead of just passing Oid directly.\n\nAlso set_rel_tablespace() does not check for a no-op case, i.e. if \nrequested tablespace is the same as before.\n\n+\t/*\n+\t * Set the new tablespace for the relation. Do that only in the\n+\t * case where the reindex caller wishes to enforce a new tablespace.\n+\t */\n+\tif (set_tablespace &&\n+\t\ttablespaceOid != iRel->rd_rel->reltablespace)\n\nJust noticed that this check is not completely correct as well, since it \ndoes not check for MyDatabaseTableSpace (stored as InvalidOid) logic.\n\nI put these small fixes directly into the attached 0003.\n\nAlso, I thought about your comment above set_rel_tablespace() and did a \nbit 'extreme' refactoring, which is attached as a separated patch 0004. \nThe only one doubtful change IMO is reordering of RelationDropStorage() \noperation inside reindex_index(). However, it only schedules unlinking \nof physical storage at transaction commit, so this refactoring seems to \nbe safe.\n\nIf there will be no objections I would merge it with 0003.\n\nOn 2020-09-09 16:03, Alexey Kondratov wrote:\n> On 2020-09-09 15:22, Michael Paquier wrote:\n>> \n>> By the way, skimming through the patch set, I was wondering if we\n>> could do the refactoring of patch 0005 as a first step\n>> \n> \n> Yes, I did it with intention to put as a first patch, but wanted to\n> get some feedback. It's easier to refactor the last patch without\n> rebasing others.\n> \n>> \n>> until I\n>> noticed this part:\n>> +common_option_name:\n>> NonReservedWord { $$ = $1; }\n>> \t| analyze_keyword { $$ = \"analyze\"; }\n>> This is not a good idea as you make ANALYZE an option available for\n>> all the commands involved in the refactoring. A portion of that could\n>> be considered though, like the use of common_option_arg.\n>> \n> \n> From the grammar perspective ANY option is available for any command\n> that uses parenthesized option list. All the checks and validations\n> are performed at the corresponding command code.\n> This analyze_keyword is actually doing only an ANALYZE word\n> normalization if it's used as an option. Why it could be harmful?\n> \n\nMichael has not replied since then, but he was relatively positive about \n0005 initially, so I put it as a first patch now.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 23 Sep 2020 19:43:01 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Sep 23, 2020 at 07:43:01PM +0300, Alexey Kondratov wrote:\n> On 2020-09-09 18:36, Justin Pryzby wrote:\n> > Rebased on a6642b3ae: Add support for partitioned tables and indexes in\n> > REINDEX\n> > \n> > So now this includes the new functionality and test for reindexing a\n> > partitioned table onto a new tablespace. That part could use some\n> > additional\n> > review.\n> \n> I have finally had a look on your changes regarding partitioned tables.\n> \n> +set_rel_tablespace(Oid indexid, char *tablespace)\n> +{\n> +\tOid tablespaceOid = tablespace ? get_tablespace_oid(tablespace, false) :\n> +\t\tInvalidOid;\n> \n> You pass a tablespace name to set_rel_tablespace(), but it is already parsed\n> into the Oid before. So I do not see why we need this extra work here\n> instead of just passing Oid directly.\n> \n> Also set_rel_tablespace() does not check for a no-op case, i.e. if requested\n> tablespace is the same as before.\n> \n> +\t/*\n> +\t * Set the new tablespace for the relation. Do that only in the\n> +\t * case where the reindex caller wishes to enforce a new tablespace.\n> +\t */\n> +\tif (set_tablespace &&\n> +\t\ttablespaceOid != iRel->rd_rel->reltablespace)\n> \n> Just noticed that this check is not completely correct as well, since it\n> does not check for MyDatabaseTableSpace (stored as InvalidOid) logic.\n> \n> I put these small fixes directly into the attached 0003.\n> \n> Also, I thought about your comment above set_rel_tablespace() and did a bit\n> 'extreme' refactoring, which is attached as a separated patch 0004. The only\n> one doubtful change IMO is reordering of RelationDropStorage() operation\n> inside reindex_index(). However, it only schedules unlinking of physical\n> storage at transaction commit, so this refactoring seems to be safe.\n> \n> If there will be no objections I would merge it with 0003.\n> \n> On 2020-09-09 16:03, Alexey Kondratov wrote:\n> > On 2020-09-09 15:22, Michael Paquier wrote:\n> > > \n> > > By the way, skimming through the patch set, I was wondering if we\n> > > could do the refactoring of patch 0005 as a first step\n> > > \n> > \n> > Yes, I did it with intention to put as a first patch, but wanted to\n> > get some feedback. It's easier to refactor the last patch without\n> > rebasing others.\n> > \n> > > \n> > > until I\n> > > noticed this part:\n> > > +common_option_name:\n> > > NonReservedWord { $$ = $1; }\n> > > \t| analyze_keyword { $$ = \"analyze\"; }\n> > > This is not a good idea as you make ANALYZE an option available for\n> > > all the commands involved in the refactoring. A portion of that could\n> > > be considered though, like the use of common_option_arg.\n> > > \n> > \n> > From the grammar perspective ANY option is available for any command\n> > that uses parenthesized option list. All the checks and validations\n> > are performed at the corresponding command code.\n> > This analyze_keyword is actually doing only an ANALYZE word\n> > normalization if it's used as an option. Why it could be harmful?\n> > \n> \n> Michael has not replied since then, but he was relatively positive about\n> 0005 initially, so I put it as a first patch now.\n\nThanks. I rebased Alexey's latest patch on top of recent changes to cluster.c.\nThis puts the generic grammar changes first. I wasn't paying much attention to\nthat part, so still waiting for a committer review.\n\n-- \nJustin",
"msg_date": "Sat, 31 Oct 2020 13:36:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 01:36:11PM -0500, Justin Pryzby wrote:\n> > > From the grammar perspective ANY option is available for any command\n> > > that uses parenthesized option list. All the checks and validations\n> > > are performed at the corresponding command code.\n> > > This analyze_keyword is actually doing only an ANALYZE word\n> > > normalization if it's used as an option. Why it could be harmful?\n> > \n> > Michael has not replied since then, but he was relatively positive about\n> > 0005 initially, so I put it as a first patch now.\n> \n> Thanks. I rebased Alexey's latest patch on top of recent changes to cluster.c.\n> This puts the generic grammar changes first. I wasn't paying much attention to\n> that part, so still waiting for a committer review.\n\n@cfbot: rebased",
"msg_date": "Tue, 24 Nov 2020 09:31:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 09:31:23AM -0600, Justin Pryzby wrote:\n> @cfbot: rebased\n\nCatching up with the activity here, I can see four different things in\nthe patch set attached:\n1) Refactoring of the grammar of CLUSTER, VACUUM, ANALYZE and REINDEX\nto support values in parameters.\n2) Tablespace change for REINDEX.\n3) Tablespace change for VACUUM FULL/CLUSTER.\n4) Tablespace change for indexes with VACUUM FULL/CLUSTER.\n\nI am not sure yet about the last three points, so let's begin with 1)\nthat is dealt with in 0001 and 0002. I have spent some time on 0001,\nrenaming the rule names to be less generic than \"common\", and applied\nit. 0002 looks to be in rather good shape, still there are a few\nthings that have caught my eyes. I'll look at that more closely\ntomorrow.\n--\nMichael",
"msg_date": "Mon, 30 Nov 2020 20:33:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-11-30 14:33, Michael Paquier wrote:\n> On Tue, Nov 24, 2020 at 09:31:23AM -0600, Justin Pryzby wrote:\n>> @cfbot: rebased\n> \n> Catching up with the activity here, I can see four different things in\n> the patch set attached:\n> 1) Refactoring of the grammar of CLUSTER, VACUUM, ANALYZE and REINDEX\n> to support values in parameters.\n> 2) Tablespace change for REINDEX.\n> 3) Tablespace change for VACUUM FULL/CLUSTER.\n> 4) Tablespace change for indexes with VACUUM FULL/CLUSTER.\n> \n> I am not sure yet about the last three points, so let's begin with 1)\n> that is dealt with in 0001 and 0002. I have spent some time on 0001,\n> renaming the rule names to be less generic than \"common\", and applied\n> it. 0002 looks to be in rather good shape, still there are a few\n> things that have caught my eyes. I'll look at that more closely\n> tomorrow.\n> \n\nThanks. I have rebased the remaining patches on top of 873ea9ee to use \n'utility_option_list' instead of 'common_option_list'.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 30 Nov 2020 17:12:42 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 05:12:42PM +0300, Alexey Kondratov wrote:\n> Thanks. I have rebased the remaining patches on top of 873ea9ee to use\n> 'utility_option_list' instead of 'common_option_list'.\n\nThanks, that helps a lot. I have gone through 0002, and tweaked it as\nthe attached (note that this patch is also interesting for another\nthing in development: backend-side reindex filtering of\ncollation-sensitive indexes). Does that look right to you?\n\nThese are mostly matters of consistency with the other commands using\nDefElem, but I think that it is important to get things right:\n- Having the list of options in parsenodes.h becomes incorrect,\nbecause these get now only used at execution time, like VACUUM. So I\nhave moved that to cluster.h and index.h.\n- Let's use an enum for REINDEX, like the others.\n- Having parse_reindex_params() in utility.c is wrong for something\naimed at being used only for REINDEX, so I have moved that to\nindexcmds.c, and renamed the routine to be more consistent with the\nrest. I think that we could more here by having an ExecReindex() that\ndoes all the work based on object types, but I have left that out for\nnow to keep the change minimal.\n- Switched one of the existing tests to stress CONCURRENTLY within\nparenthesis.\n- Indented the whole.\n\nA couple of extra things below.\n\n * CLUSTER [VERBOSE] <qualified_name> [ USING <index_name> ]\n+ * CLUSTER [VERBOSE] [(options)] <qualified_name> [ USING <index_name> ]\nThis line is wrong, and should be:\nCLUSTER [ (options) ] <qualified_name> [ USING <index_name> ]\n\n+CLUSTER [VERBOSE] [ ( <replaceable class=\"parameter\">option</replaceable>\n+CLUSTER [VERBOSE] [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ]\nThe docs in cluster.sgml are wrong as well, you can have VERBOSE as a\nsingle option or as a parenthesized option, but never both at the same\ntime. On the contrary, psql completion got that right. I was first a\nbit surprised that you would not allow the parenthesized set for the\ncase where a relation is not specified in the command, but I agree\nthat this does not seem worth the extra complexity now as this thread\naims at being able to use TABLESPACE which makes little sense\ndatabase-wide.\n\n- VERBOSE\n+ VERBOSE [ <replaceable class=\"parameter\">boolean</replaceable> ]\nForgot about CONCURRENTLY as an option here, as this becomes\npossible.\n--\nMichael",
"msg_date": "Tue, 1 Dec 2020 11:46:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 11:46:55AM +0900, Michael Paquier wrote:\n> On Mon, Nov 30, 2020 at 05:12:42PM +0300, Alexey Kondratov wrote:\n> > Thanks. I have rebased the remaining patches on top of 873ea9ee to use\n> > 'utility_option_list' instead of 'common_option_list'.\n> \n> Thanks, that helps a lot. I have gone through 0002, and tweaked it as\n> the attached (note that this patch is also interesting for another\n> thing in development: backend-side reindex filtering of\n> collation-sensitive indexes). Does that look right to you?\n\nI eyeballed the patch and rebased the rest of the series on top if it to play\nwith. Looks fine - thanks.\n\nFYI, the commit messages have the proper \"author\" for attribution. I proposed\nand implemented the grammar changes in 0002, and implemented INDEX_TABLESPACE,\nbut I'm a reviewer for the main patches.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Nov 2020 23:43:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:43:08PM -0600, Justin Pryzby wrote:\n> I eyeballed the patch and rebased the rest of the series on top if it to play\n> with. Looks fine - thanks.\n\nCool, thanks.\n\n> FYI, the commit messages have the proper \"author\" for attribution. I proposed\n> and implemented the grammar changes in 0002, and implemented INDEX_TABLESPACE,\n> but I'm a reviewer for the main patches.\n\nWell, my impression is that both of you kept exchanging patches,\ntouching and reviewing each other's patch (note that Alexei has also\nsent a rebase of 0002 just yesterday), so I think that it is fair to\nsay that both of you should be listed as authors and credited as such\nin the release notes for this one.\n--\nMichael",
"msg_date": "Tue, 1 Dec 2020 15:10:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 03:10:13PM +0900, Michael Paquier wrote:\n> Well, my impression is that both of you kept exchanging patches,\n> touching and reviewing each other's patch (note that Alexei has also\n> sent a rebase of 0002 just yesterday), so I think that it is fair to\n> say that both of you should be listed as authors and credited as such\n> in the release notes for this one.\n\nOK, this one is now committed. As of this thread, I think that we are\ngoing to need to do a bit more once we add options that are not just\nbooleans for both commands (the grammar rules do not need to be\nchanged now):\n- Have a ReindexParams, similarly to VacuumParams except that we store\nthe results of the parsing in a single place. With the current HEAD,\nI did not see yet the point in doing so because we just need an\ninteger that maps to a bitmask made of ReindexOption.\n- The part related to ReindexStmt in utility.c is getting more and\nmore complicated, so we could move most of the execution into\nindexcmds.c with some sort of ExecReindex() doing the option parsing\njob, and go to the correct code path depending on the object type\ndealt with.\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 10:19:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 10:19:43AM +0900, Michael Paquier wrote:\n> OK, this one is now committed. As of this thread, I think that we are\n> going to need to do a bit more once we add options that are not just\n> booleans for both commands (the grammar rules do not need to be\n> changed now):\n> - Have a ReindexParams, similarly to VacuumParams except that we store\n> the results of the parsing in a single place. With the current HEAD,\n> I did not see yet the point in doing so because we just need an\n> integer that maps to a bitmask made of ReindexOption.\n> - The part related to ReindexStmt in utility.c is getting more and\n> more complicated, so we could move most of the execution into\n> indexcmds.c with some sort of ExecReindex() doing the option parsing\n> job, and go to the correct code path depending on the object type\n> dealt with.\n\nGood idea. I think you mean like this.\n\nI don't know where to put the struct.\nI thought maybe the lowlevel, integer options should live in the struct, in\naddition to bools, but it's not important.\n\n-- \nJustin",
"msg_date": "Wed, 2 Dec 2020 22:30:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 10:30:08PM -0600, Justin Pryzby wrote:\n> Good idea. I think you mean like this.\n\nYes, something like that. Thanks.\n\n> +typedef struct ReindexParams {\n> +\tbool concurrently;\n> +\tbool verbose;\n> +\tbool missingok;\n> +\n> +\tint options;\t/* bitmask of lowlevel REINDEXOPT_* */\n> +} ReindexParams;\n> +\n\nBy moving everything into indexcmds.c, keeping ReindexParams within it\nmakes sense to me. Now, there is no need for the three booleans\nbecause options stores the same information, no?\n\n> struct ReindexIndexCallbackState\n> {\n> -\tint\t\t\toptions;\t\t/* options from statement */\n> +\tbool\t\tconcurrently;\n> \tOid\t\t\tlocked_table_oid;\t/* tracks previously locked table */\n> };\n\nHere also, I think that we should just pass down the full\nReindexParams set.\n--\nMichael",
"msg_date": "Thu, 3 Dec 2020 16:12:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "A side comment on this patch: I think using enums as bit mask values is \nbad style. So changing this:\n\n-/* Reindex options */\n-#define REINDEXOPT_VERBOSE (1 << 0) /* print progress info */\n-#define REINDEXOPT_REPORT_PROGRESS (1 << 1) /* report pgstat progress */\n-#define REINDEXOPT_MISSING_OK (1 << 2) /* skip missing relations */\n-#define REINDEXOPT_CONCURRENTLY (1 << 3) /* concurrent mode */\n\nto this:\n\n+typedef enum ReindexOption\n+{\n+ REINDEXOPT_VERBOSE = 1 << 0, /* print progress info */\n+ REINDEXOPT_REPORT_PROGRESS = 1 << 1, /* report pgstat progress */\n+ REINDEXOPT_MISSING_OK = 1 << 2, /* skip missing relations */\n+ REINDEXOPT_CONCURRENTLY = 1 << 3 /* concurrent mode */\n+} ReindexOption;\n\nseems wrong.\n\nThere are a couple of more places like this, including the existing \nClusterOption that this patched moved around, but we should be removing \nthose.\n\nMy reasoning is that if you look at an enum value of this type, either \nsay in a switch statement or a debugger, the enum value might not be any \nof the defined symbols. So that way you lose all the type checking that \nan enum might give you.\n\nLet's just keep the #define's like it is done in almost all other places.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 20:46:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 04:12:53PM +0900, Michael Paquier wrote:\n> > +typedef struct ReindexParams {\n> > +\tbool concurrently;\n> > +\tbool verbose;\n> > +\tbool missingok;\n> > +\n> > +\tint options;\t/* bitmask of lowlevel REINDEXOPT_* */\n> > +} ReindexParams;\n> > +\n> \n> By moving everything into indexcmds.c, keeping ReindexParams within it\n> makes sense to me. Now, there is no need for the three booleans\n> because options stores the same information, no?\n\n I liked the bools, but dropped them so the patch is smaller.\n\n> > struct ReindexIndexCallbackState\n> > {\n> > -\tint\t\t\toptions;\t\t/* options from statement */\n> > +\tbool\t\tconcurrently;\n> > \tOid\t\t\tlocked_table_oid;\t/* tracks previously locked table */\n> > };\n> \n> Here also, I think that we should just pass down the full\n> ReindexParams set.\n\nOk.\n\nRegarding the REINDEX patch, I think this comment is misleading:\n\n| /*\n| * If the relation has a secondary toast rel, reindex that too while we\n| * still hold the lock on the main table.\n| */\n| if ((flags & REINDEX_REL_PROCESS_TOAST) && OidIsValid(toast_relid))\n| {\n| /*\n| * Note that this should fail if the toast relation is missing, so\n| * reset REINDEXOPT_MISSING_OK.\n|+ *\n|+ * Even if table was moved to new tablespace, normally toast cannot move.\n| */\n|+ Oid toasttablespaceOid = allowSystemTableMods ? tablespaceOid : InvalidOid;\n| result |= reindex_relation(toast_relid, flags,\n|- options & ~(REINDEXOPT_MISSING_OK));\n|+ options & ~(REINDEXOPT_MISSING_OK),\n|+ toasttablespaceOid);\n| }\n\nI think it ought to say \"Even if a table's indexes were moved to a new\ntablespace, its toast table's index is not normally moved\"\nRight ?\n\nAlso, I don't know whether we should check for GLOBALTABLESPACE_OID after\ncalling get_tablespace_oid(), or in the lowlevel routines. 
Note that\nreindex_relation is called during cluster/vacuum, and in the later patches, I\nmoved the test from from cluster() and ExecVacuum() to rebuild_relation().\n\n-- \nJustin",
"msg_date": "Thu, 3 Dec 2020 19:25:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 08:46:09PM +0100, Peter Eisentraut wrote:\n> There are a couple of more places like this, including the existing\n> ClusterOption that this patched moved around, but we should be removing\n> those.\n> \n> My reasoning is that if you look at an enum value of this type, either say\n> in a switch statement or a debugger, the enum value might not be any of the\n> defined symbols. So that way you lose all the type checking that an enum\n> might give you.\n\nVacuumOption does that since 6776142, and ClusterOption since 9ebe057,\nso switching ReindexOption to just match the two others still looks\nlike the most consistent move. Please note that there is more than\nthat, like ScanOptions, relopt_kind, RVROption, InstrumentOption,\nTableLikeOption.\n\nI would not mind changing that, though I am not sure that this\nimproves readability. And if we'd do it, it may make sense to extend\nthat even more to the places where it would apply like the places\nmentioned one paragraph above.\n--\nMichael",
"msg_date": "Fri, 4 Dec 2020 14:37:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-04 04:25, Justin Pryzby wrote:\n> On Thu, Dec 03, 2020 at 04:12:53PM +0900, Michael Paquier wrote:\n>> > +typedef struct ReindexParams {\n>> > +\tbool concurrently;\n>> > +\tbool verbose;\n>> > +\tbool missingok;\n>> > +\n>> > +\tint options;\t/* bitmask of lowlevel REINDEXOPT_* */\n>> > +} ReindexParams;\n>> > +\n>> \n>> By moving everything into indexcmds.c, keeping ReindexParams within it\n>> makes sense to me. Now, there is no need for the three booleans\n>> because options stores the same information, no?\n> \n> I liked the bools, but dropped them so the patch is smaller.\n> \n\nI had a look on 0001 and it looks mostly fine to me except some strange \nmixture of tabs/spaces in the ExecReindex(). There is also a couple of \nmeaningful comments:\n\n-\toptions =\n-\t\t(verbose ? REINDEXOPT_VERBOSE : 0) |\n-\t\t(concurrently ? REINDEXOPT_CONCURRENTLY : 0);\n+\tif (verbose)\n+\t\tparams.options |= REINDEXOPT_VERBOSE;\n\nWhy do we need this intermediate 'verbose' variable here? We only use it \nonce to set a bitmask. 
Maybe we can do it like this:\n\nparams.options |= defGetBoolean(opt) ?\n\tREINDEXOPT_VERBOSE : 0;\n\nSee also attached txt file with diff (I wonder can I trick cfbot this \nway, so it does not apply the diff).\n\n+\tint options;\t/* bitmask of lowlevel REINDEXOPT_* */\n\nI would prefer if the comment says '/* bitmask of ReindexOption */' as \nin the VacuumOptions, since citing the exact enum type make it easier to \nnavigate source code.\n\n> \n> Regarding the REINDEX patch, I think this comment is misleading:\n> \n> |+ * Even if table was moved to new tablespace,\n> normally toast cannot move.\n> | */\n> |+ Oid toasttablespaceOid = allowSystemTableMods ?\n> tablespaceOid : InvalidOid;\n> | result |= reindex_relation(toast_relid, flags,\n> \n> I think it ought to say \"Even if a table's indexes were moved to a new\n> tablespace, its toast table's index is not normally moved\"\n> Right ?\n> \n\nYes, I think so, we are dealing only with index tablespace changing \nhere. Thanks for noticing.\n\n> \n> Also, I don't know whether we should check for GLOBALTABLESPACE_OID \n> after\n> calling get_tablespace_oid(), or in the lowlevel routines. Note that\n> reindex_relation is called during cluster/vacuum, and in the later \n> patches, I\n> moved the test from from cluster() and ExecVacuum() to \n> rebuild_relation().\n> \n\nIIRC, I wanted to do GLOBALTABLESPACE_OID check as early as possible \n(just after getting Oid), since it does not make sense to proceed \nfurther if tablespace is set to that value. So initially there were a \nlot of duplicative GLOBALTABLESPACE_OID checks, since there were a lot \nof reindex entry-points (index, relation, concurrently, etc.). Now we \nare going to have ExecReindex(), so there are much less entry-points and \nin my opinion it is fine to keep this validation just after \nget_tablespace_oid().\n\nHowever, this is mostly a sanity check. 
I can hardly imagine a lot of \nusers trying to constantly move indexes to the global tablespace, so it \nis also OK to put this check deeper into guts.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 04 Dec 2020 21:40:31 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Dec-04, Michael Paquier wrote:\n\n> VacuumOption does that since 6776142, and ClusterOption since 9ebe057,\n> so switching ReindexOption to just match the two others still looks\n> like the most consistent move.\n\n9ebe057 goes to show why this is a bad idea, since it has this:\n\n+typedef enum ClusterOption\n+{\n+ CLUOPT_RECHECK, /* recheck relation state */\n+ CLUOPT_VERBOSE /* print progress info */\n+} ClusterOption;\n\nand then you do things like\n\n+ if ($2)\n+ n->options |= CLUOPT_VERBOSE;\n\nand then tests like\n\n+ if ((options & VACOPT_VERBOSE) != 0)\n\nNow if you were to ever define third and fourth values in that enum,\nthis would immediately start malfunctioning.\n\nFWIW I'm with Peter on this.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 16:28:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 09:40:31PM +0300, Alexey Kondratov wrote:\n> > I liked the bools, but dropped them so the patch is smaller.\n> \n> I had a look on 0001 and it looks mostly fine to me except some strange\n> mixture of tabs/spaces in the ExecReindex(). There is also a couple of\n> meaningful comments:\n> \n> -\toptions =\n> -\t\t(verbose ? REINDEXOPT_VERBOSE : 0) |\n> -\t\t(concurrently ? REINDEXOPT_CONCURRENTLY : 0);\n> +\tif (verbose)\n> +\t\tparams.options |= REINDEXOPT_VERBOSE;\n> \n> Why do we need this intermediate 'verbose' variable here? We only use it\n> once to set a bitmask. Maybe we can do it like this:\n> \n> params.options |= defGetBoolean(opt) ?\n> \tREINDEXOPT_VERBOSE : 0;\n\nThat allows *setting* REINDEXOPT_VERBOSE, but doesn't *unset* it if someone\nruns (VERBOSE OFF). So I kept the bools like Michael originally had rather\nthan writing \"else: params.options &= ~REINDEXOPT_VERBOSE\"\n\n> See also attached txt file with diff (I wonder can I trick cfbot this way,\n> so it does not apply the diff).\n\nYes, I think that works :)\nI believe it looks for *.diff and *.patch.\n\n> +\tint options;\t/* bitmask of lowlevel REINDEXOPT_* */\n> \n> I would prefer if the comment says '/* bitmask of ReindexOption */' as in\n> the VacuumOptions, since citing the exact enum type make it easier to\n> navigate source code.\n\nYes, thanks.\n\nThis also fixes some minor formatting and rebase issues, including broken doc/.\n\n-- \nJustin",
"msg_date": "Fri, 4 Dec 2020 13:54:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 04:28:26PM -0300, Alvaro Herrera wrote:\n> FWIW I'm with Peter on this.\n\nOkay, attached is a patch to adjust the enums for the set of utility\ncommands that is the set of things I have touched lately. Should that\nbe extended more? I have not done that as a lot of those structures\nexist as such for a long time.\n--\nMichael",
"msg_date": "Sat, 5 Dec 2020 10:30:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-05 02:30, Michael Paquier wrote:\n> On Fri, Dec 04, 2020 at 04:28:26PM -0300, Alvaro Herrera wrote:\n>> FWIW I'm with Peter on this.\n> \n> Okay, attached is a patch to adjust the enums for the set of utility\n> commands that is the set of things I have touched lately. Should that\n> be extended more? I have not done that as a lot of those structures\n> exist as such for a long time.\n\nI think this patch is good.\n\nI have in the meantime committed a similar patch for cleaning up this \nissue in pg_dump.\n\n\n",
"msg_date": "Fri, 11 Dec 2020 19:17:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "By the way-- What did you think of the idea of explictly marking the\ntypes used for bitmasks using types bits32 and friends, instead of plain\nint, which is harder to spot?\n\n\n",
"msg_date": "Fri, 11 Dec 2020 17:27:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 05:27:03PM -0300, Alvaro Herrera wrote:\n> By the way-- What did you think of the idea of explictly marking the\n> types used for bitmasks using types bits32 and friends, instead of plain\n> int, which is harder to spot?\n\nRight, we could just do that while the area is changed. It is worth\nnoting that all the REINDEX_REL_* handling could be brushed. Another\npoint that has been raised on a recent thread by Peter was that people\npreferred an hex style for the declarations rather than bit shifts.\nWhat do you think?\n--\nMichael",
"msg_date": "Sat, 12 Dec 2020 09:16:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-11 21:27, Alvaro Herrera wrote:\n> By the way-- What did you think of the idea of explictly marking the\n> types used for bitmasks using types bits32 and friends, instead of plain\n> int, which is harder to spot?\n\nIf we want to make it clearer, why not turn the thing into a struct, as \nin the attached patch, and avoid the bit fiddling altogether.",
"msg_date": "Sat, 12 Dec 2020 09:20:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n> On 2020-12-11 21:27, Alvaro Herrera wrote:\n> > By the way-- What did you think of the idea of explictly marking the\n> > types used for bitmasks using types bits32 and friends, instead of plain\n> > int, which is harder to spot?\n> \n> If we want to make it clearer, why not turn the thing into a struct, as in\n> the attached patch, and avoid the bit fiddling altogether.\n\nI like this.\nIt's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\nIn my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\nwith an \"int options\" bitmask which is passed to reindex_index() et al. Your\npatch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\nwhich I think is good.\n\nSo I've rebased this branch on your patch.\n\nSome thoughts:\n\n - what about removing the REINDEXOPT_* prefix ?\n - You created local vars with initialization like \"={}\". But I thought it's\n needed to include at least one struct member like \"={false}\", or else\n they're not guaranteed to be zerod ?\n - You passed the structure across function calls. The usual convention is to\n pass a pointer.\n\nI also changed the errcode and detail for this one.\n\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\t\terrmsg(\"incompatible TABLESPACE option\"),\n\t\terrdetail(\"TABLESPACE can only be used with VACUUM FULL.\")));\n\n-- \nJustin",
"msg_date": "Sat, 12 Dec 2020 13:45:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n> > On 2020-12-11 21:27, Alvaro Herrera wrote:\n> > > By the way-- What did you think of the idea of explictly marking the\n> > > types used for bitmasks using types bits32 and friends, instead of plain\n> > > int, which is harder to spot?\n> > \n> > If we want to make it clearer, why not turn the thing into a struct, as in\n> > the attached patch, and avoid the bit fiddling altogether.\n> \n> I like this.\n> It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> with an \"int options\" bitmask which is passed to reindex_index() et al. Your\n> patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> which I think is good.\n> \n> So I've rebased this branch on your patch.\n\nAlso, the cfbot's windows VS compilation failed due to \"compound literal\",\nwhich I don't think is used anywhere else.\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.120108\n\n src/backend/commands/cluster.c(1580): warning C4133: 'return' : incompatible types - from 'List *' to 'int *' [C:\\projects\\postgresql\\postgres.vcxproj]\n\"C:\\projects\\postgresql\\pgsql.sln\" (default target) (1) ->\n\"C:\\projects\\postgresql\\cyrillic_and_mic.vcxproj\" (default target) (5) ->\n\"C:\\projects\\postgresql\\postgres.vcxproj\" (default target) (6) ->\n(ClCompile target) ->\n src/backend/commands/cluster.c(1415): error C2059: syntax error : '}' [C:\\projects\\postgresql\\postgres.vcxproj]\n src/backend/commands/cluster.c(1534): error C2143: syntax error : missing '{' before '*' [C:\\projects\\postgresql\\postgres.vcxproj]\n src/backend/commands/cluster.c(1536): error C2371: 'get_tables_to_cluster' : redefinition; different basic types [C:\\projects\\postgresql\\postgres.vcxproj]\n 
src/backend/commands/indexcmds.c(2462): error C2059: syntax error : '}' [C:\\projects\\postgresql\\postgres.vcxproj]\n src/backend/commands/tablecmds.c(1894): error C2059: syntax error : '}' [C:\\projects\\postgresql\\postgres.vcxproj]\n\nMy fix! patch resolves that.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 12 Dec 2020 14:20:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n>> On 2020-12-11 21:27, Alvaro Herrera wrote:\n>>> By the way-- What did you think of the idea of explictly marking the\n>>> types used for bitmasks using types bits32 and friends, instead of plain\n>>> int, which is harder to spot?\n>> \n>> If we want to make it clearer, why not turn the thing into a struct, as in\n>> the attached patch, and avoid the bit fiddling altogether.\n\n- reindex_relation(OIDOldHeap, reindex_flags, 0);\n+ reindex_relation(OIDOldHeap, reindex_flags, (ReindexOptions){});\nThis is not common style in the PG code. Usually we would do that\nwith memset(0) or similar.\n\n+ bool REINDEXOPT_VERBOSE; /* print progress info */\n+ bool REINDEXOPT_REPORT_PROGRESS; /* report pgstat progress */\n+ bool REINDEXOPT_MISSING_OK; /* skip missing relations */\n+ bool REINDEXOPT_CONCURRENTLY; /* concurrent mode */\n+} ReindexOptions\nNeither is this one to use upper-case characters for variable names.\n\nNow, we will need a ReindexOptions in the long-term to store all those\noptions and there would be much more coming that booleans here (this\nthread talks about tablespaces, there is another thread about\ncollation filtering). Between using bits32 with some hex flags or\njust a set of booleans within a structure, I would choose the former\nas a matter of habit but yours has the advantage to make debugging a\nno-brainer, which is good. For any approach taken, it seems to me\nthat the same style should be applied to ClusterOption and\nVacuum{Option,Params}.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 13:33:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n> > On 2020-12-11 21:27, Alvaro Herrera wrote:\n> > > By the way-- What did you think of the idea of explictly marking the\n> > > types used for bitmasks using types bits32 and friends, instead of plain\n> > > int, which is harder to spot?\n> > \n> > If we want to make it clearer, why not turn the thing into a struct, as in\n> > the attached patch, and avoid the bit fiddling altogether.\n> \n> I like this.\n> It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> with an \"int options\" bitmask which is passed to reindex_index() et al. Your\n> patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> which I think is good.\n> \n> So I've rebased this branch on your patch.\n> \n> Some thoughts:\n> \n> - what about removing the REINDEXOPT_* prefix ?\n> - You created local vars with initialization like \"={}\". But I thought it's\n> needed to include at least one struct member like \"={false}\", or else\n> they're not guaranteed to be zerod ?\n> - You passed the structure across function calls. The usual convention is to\n> pass a pointer.\n\nI think maybe Michael missed this message (?)\nI had applied some changes on top of Peter's patch.\n\nI squished those commits now, and also handled ClusterOption and VacuumOption\nin the same style.\n\nSome more thoughts:\n - should the structures be named in plural ? \"ReindexOptions\" etc. Since they\n define *all* the options, not just a single bit.\n - For vacuum, do we even need a separate structure, or should the members be\n directly within VacuumParams ? It's a bit odd to write\n params.options.verbose. 
Especially since there's also ternary options which\n are directly within params.\n - Then, for cluster, I think it should be called ClusterParams, and eventually\n include the tablespaceOid, like what we're doing for Reindex.\n\nI am awaiting feedback on these before going further since I've done too much\nrebasing with these ideas going back and forth and back.\n\n-- \nJustin",
"msg_date": "Mon, 14 Dec 2020 18:14:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-15 03:14, Justin Pryzby wrote:\n> On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n>> On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n>> > On 2020-12-11 21:27, Alvaro Herrera wrote:\n>> > > By the way-- What did you think of the idea of explictly marking the\n>> > > types used for bitmasks using types bits32 and friends, instead of plain\n>> > > int, which is harder to spot?\n>> >\n>> > If we want to make it clearer, why not turn the thing into a struct, as in\n>> > the attached patch, and avoid the bit fiddling altogether.\n>> \n>> I like this.\n>> It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and \n>> ReindexParams\n>> In my v31 patch, I moved ReindexOptions to a private structure in \n>> indexcmds.c,\n>> with an \"int options\" bitmask which is passed to reindex_index() et \n>> al. Your\n>> patch keeps/puts ReindexOptions index.h, so it also applies to \n>> reindex_index,\n>> which I think is good.\n>> \n>> So I've rebased this branch on your patch.\n>> \n>> Some thoughts:\n>> \n>> - what about removing the REINDEXOPT_* prefix ?\n>> - You created local vars with initialization like \"={}\". But I \n>> thought it's\n>> needed to include at least one struct member like \"={false}\", or \n>> else\n>> they're not guaranteed to be zerod ?\n>> - You passed the structure across function calls. The usual \n>> convention is to\n>> pass a pointer.\n> \n> I think maybe Michael missed this message (?)\n> I had applied some changes on top of Peter's patch.\n> \n> I squished those commits now, and also handled ClusterOption and \n> VacuumOption\n> in the same style.\n> \n> Some more thoughts:\n> - should the structures be named in plural ? \"ReindexOptions\" etc. \n> Since they\n> define *all* the options, not just a single bit.\n> - For vacuum, do we even need a separate structure, or should the \n> members be\n> directly within VacuumParams ? It's a bit odd to write\n> params.options.verbose. 
Especially since there's also ternary \n> options which\n> are directly within params.\n\nThis is exactly what I have thought after looking on Peter's patch. \nWriting 'params.options.some_option' looks just too verbose. I even \nstarted moving all vacuum options into VacuumParams on my own and was in \nthe middle of the process when realized that there are some places that \ndo not fit well, like:\n\nif (!vacuum_is_relation_owner(RelationGetRelid(onerel),\n\tonerel->rd_rel,\n\tparams->options & VACOPT_ANALYZE))\n\nHere we do not want to set option permanently, but rather to trigger \nsome additional code path in the vacuum_is_relation_owner(), IIUC. With \nunified VacuumParams we should do:\n\nbool opt_analyze = params->analyze;\n...\nparams->analyze = true;\nif (!vacuum_is_relation_owner(RelationGetRelid(onerel),\n\tonerel->rd_rel, params))\n...\nparams->analyze = opt_analyze;\n\nto achieve the same, but it does not look good to me, or change the \nwhole interface. I have found at least one other place like that so far \n--- vacuum_open_relation() in the analyze_rel().\n\nNot sure how we could better cope with such logic.\n\n> - Then, for cluster, I think it should be called ClusterParams, and \n> eventually\n> include the tablespaceOid, like what we're doing for Reindex.\n> \n> I am awaiting feedback on these before going further since I've done \n> too much\n> rebasing with these ideas going back and forth and back.\n\nYes, we have moved a bit from my original patch set in the thread with \nall this refactoring. However, all the consequent patches are heavily \ndepend on this interface, so let us decide first on the proposed \nrefactoring.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Tue, 15 Dec 2020 13:34:35 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 06:14:17PM -0600, Justin Pryzby wrote:\n> On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> > On Sat, Dec 12, 2020 at 09:20:35AM +0100, Peter Eisentraut wrote:\n> > > On 2020-12-11 21:27, Alvaro Herrera wrote:\n> > > > By the way-- What did you think of the idea of explictly marking the\n> > > > types used for bitmasks using types bits32 and friends, instead of plain\n> > > > int, which is harder to spot?\n> > > \n> > > If we want to make it clearer, why not turn the thing into a struct, as in\n> > > the attached patch, and avoid the bit fiddling altogether.\n> > \n> > I like this.\n> > It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> > In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> > with an \"int options\" bitmask which is passed to reindex_index() et al. Your\n> > patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> > which I think is good.\n> > \n> > So I've rebased this branch on your patch.\n> > \n> > Some thoughts:\n> > \n> > - what about removing the REINDEXOPT_* prefix ?\n> > - You created local vars with initialization like \"={}\". But I thought it's\n> > needed to include at least one struct member like \"={false}\", or else\n> > they're not guaranteed to be zerod ?\n> > - You passed the structure across function calls. The usual convention is to\n> > pass a pointer.\n> \n> I think maybe Michael missed this message (?)\n> I had applied some changes on top of Peter's patch.\n> \n> I squished those commits now, and also handled ClusterOption and VacuumOption\n> in the same style.\n> \n> Some more thoughts:\n> - should the structures be named in plural ? \"ReindexOptions\" etc. Since they\n> define *all* the options, not just a single bit.\n> - For vacuum, do we even need a separate structure, or should the members be\n> directly within VacuumParams ? It's a bit odd to write\n> params.options.verbose. 
Especially since there's also ternary options which\n> are directly within params.\n> - Then, for cluster, I think it should be called ClusterParams, and eventually\n> include the tablespaceOid, like what we're doing for Reindex.\n> \n> I am awaiting feedback on these before going further since I've done too much\n> rebasing with these ideas going back and forth and back.\n\nWith Alexey's agreement, I propose something like this.\n\nI've merged some commits and kept separate the ones which are more likely to be\ndisputed/amended. But it may be best to read the series as a single commit,\nlike \"git diff origin..\"\n\n-- \nJustin",
"msg_date": "Tue, 15 Dec 2020 17:58:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Dec-12, Peter Eisentraut wrote:\n\n> On 2020-12-11 21:27, Alvaro Herrera wrote:\n> > By the way-- What did you think of the idea of explictly marking the\n> > types used for bitmasks using types bits32 and friends, instead of plain\n> > int, which is harder to spot?\n> \n> If we want to make it clearer, why not turn the thing into a struct, as in\n> the attached patch, and avoid the bit fiddling altogether.\n\nI don't like this idea too much, because adding an option causes an ABI\nbreak. I don't think we commonly add options in backbranches, but it\nhas happened. The bitmask is much easier to work with in that regard.\n\n\n\n",
"msg_date": "Tue, 15 Dec 2020 21:45:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 09:45:17PM -0300, Alvaro Herrera wrote:\n> I don't like this idea too much, because adding an option causes an ABI\n> break. I don't think we commonly add options in backbranches, but it\n> has happened. The bitmask is much easier to work with in that regard.\n\nABI flexibility is a good point here. I did not consider this point\nof view. Thanks!\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 10:01:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 10:01:11AM +0900, Michael Paquier wrote:\n> On Tue, Dec 15, 2020 at 09:45:17PM -0300, Alvaro Herrera wrote:\n> > I don't like this idea too much, because adding an option causes an ABI\n> > break. I don't think we commonly add options in backbranches, but it\n> > has happened. The bitmask is much easier to work with in that regard.\n> \n> ABI flexibility is a good point here. I did not consider this point\n> of view. Thanks!\n\nFWIW, I have taken a shot at this part of the patch, and finished with\nthe attached. This uses bits32 for the bitmask options and an hex\nstyle for the bitmask params, while bundling all the flags into\ndedicated structures for all the options that can be extended for the\ntablespace case (or some filtering for REINDEX).\n--\nMichael",
"msg_date": "Tue, 22 Dec 2020 15:47:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 03:47:57PM +0900, Michael Paquier wrote:\n> On Wed, Dec 16, 2020 at 10:01:11AM +0900, Michael Paquier wrote:\n> > On Tue, Dec 15, 2020 at 09:45:17PM -0300, Alvaro Herrera wrote:\n> > > I don't like this idea too much, because adding an option causes an ABI\n> > > break. I don't think we commonly add options in backbranches, but it\n> > > has happened. The bitmask is much easier to work with in that regard.\n> > \n> > ABI flexibility is a good point here. I did not consider this point\n> > of view. Thanks!\n> \n> FWIW, I have taken a shot at this part of the patch, and finished with\n> the attached. This uses bits32 for the bitmask options and an hex\n> style for the bitmask params, while bundling all the flags into\n> dedicated structures for all the options that can be extended for the\n> tablespace case (or some filtering for REINDEX).\n\nSeems fine, but why do you do memcpy() instead of a structure assignment ?\n\n> @@ -3965,8 +3965,11 @@ reindex_relation(Oid relid, int flags, int options)\n> \t\t * Note that this should fail if the toast relation is missing, so\n> \t\t * reset REINDEXOPT_MISSING_OK.\n> \t\t */\n> -\t\tresult |= reindex_relation(toast_relid, flags,\n> -\t\t\t\t\t\t\t\t options & ~(REINDEXOPT_MISSING_OK));\n> +\t\tReindexOptions newoptions;\n> +\n> +\t\tmemcpy(&newoptions, options, sizeof(ReindexOptions));\n> +\t\tnewoptions.flags &= ~(REINDEXOPT_MISSING_OK);\n> +\t\tresult |= reindex_relation(toast_relid, flags, &newoptions);\n\nCould be newoptions = *options;\n\nAlso, this one is going to be subsumed by ExecReindex(), so the palloc will go\naway (otherwise I would ask to pass it in from the caller):\n\n> +ReindexOptions *\n> ReindexParseOptions(ParseState *pstate, ReindexStmt *stmt)\n> {\n> \tListCell *lc;\n> -\tint\t\t\toptions = 0;\n> +\tReindexOptions *options;\n> \tbool\t\tconcurrently = false;\n> \tbool\t\tverbose = false;\n> \n> +\toptions = (ReindexOptions *) palloc0(sizeof(ReindexOptions));\n> 
+\n\n-- \nJustin \n\n\n",
"msg_date": "Tue, 22 Dec 2020 02:32:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 02:32:05AM -0600, Justin Pryzby wrote:\n> Also, this one is going to be subsumed by ExecReindex(), so the palloc will go\n> away (otherwise I would ask to pass it in from the caller):\n\nYeah, maybe. Still you need to be very careful if you have any\nallocated variables like a tablespace or a path which requires to be\nin the private context used by ReindexMultipleInternal() or even\nReindexRelationConcurrently(), so I am not sure you can avoid that\ncompletely. For now, we could choose the option to still use a\npalloc(), and then save the options in the private contexts. Forgot\nthat in the previous version actually.\n--\nMichael",
"msg_date": "Tue, 22 Dec 2020 18:57:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 06:57:41PM +0900, Michael Paquier wrote:\n> On Tue, Dec 22, 2020 at 02:32:05AM -0600, Justin Pryzby wrote:\n> > Also, this one is going to be subsumed by ExecReindex(), so the palloc will go\n> > away (otherwise I would ask to pass it in from the caller):\n> \n> Yeah, maybe. Still you need to be very careful if you have any\n> allocated variables like a tablespace or a path which requires to be\n> in the private context used by ReindexMultipleInternal() or even\n> ReindexRelationConcurrently(), so I am not sure you can avoid that\n> completely. For now, we could choose the option to still use a\n> palloc(), and then save the options in the private contexts. Forgot\n> that in the previous version actually.\n\nI can't see why this still uses memset instead of structure assignment.\n\nNow, I really think utility.c ought to pass in a pointer to a local\nReindexOptions variable to avoid all the memory context, which is unnecessary\nand prone to error.\n\nExecReindex() will set options.tablesapceOid, not a pointer. Like this.\n\nI also changed the callback to be a ReindexOptions and not a pointer.\n\n-- \nJustin",
"msg_date": "Tue, 22 Dec 2020 15:15:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Justin:\nFor reindex_index() :\n\n+ if (options->tablespaceOid == MyDatabaseTableSpace)\n+ options->tablespaceOid = InvalidOid;\n...\n+ if (set_tablespace &&\n+ (options->tablespaceOid != oldTablespaceOid ||\n+ (options->tablespaceOid == MyDatabaseTableSpace &&\nOidIsValid(oldTablespaceOid))))\n\nI wonder why the options->tablespaceOid == MyDatabaseTableSpace clause\nappears again in the second if statement.\nSince the first if statement would assign InvalidOid\nto options->tablespaceOid when the first if condition is satisfied.\n\nCheers\n\n\nOn Tue, Dec 22, 2020 at 1:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Dec 22, 2020 at 06:57:41PM +0900, Michael Paquier wrote:\n> > On Tue, Dec 22, 2020 at 02:32:05AM -0600, Justin Pryzby wrote:\n> > > Also, this one is going to be subsumed by ExecReindex(), so the palloc\n> will go\n> > > away (otherwise I would ask to pass it in from the caller):\n> >\n> > Yeah, maybe. Still you need to be very careful if you have any\n> > allocated variables like a tablespace or a path which requires to be\n> > in the private context used by ReindexMultipleInternal() or even\n> > ReindexRelationConcurrently(), so I am not sure you can avoid that\n> > completely. For now, we could choose the option to still use a\n> > palloc(), and then save the options in the private contexts. Forgot\n> > that in the previous version actually.\n>\n> I can't see why this still uses memset instead of structure assignment.\n>\n> Now, I really think utility.c ought to pass in a pointer to a local\n> ReindexOptions variable to avoid all the memory context, which is\n> unnecessary\n> and prone to error.\n>\n> ExecReindex() will set options.tablesapceOid, not a pointer. 
Like this.\n>\n> I also changed the callback to be a ReindexOptions and not a pointer.\n>\n> --\n> Justin\n>\n\nJustin:For reindex_index() :+ if (options->tablespaceOid == MyDatabaseTableSpace)+ options->tablespaceOid = InvalidOid;...+ if (set_tablespace &&+ (options->tablespaceOid != oldTablespaceOid ||+ (options->tablespaceOid == MyDatabaseTableSpace && OidIsValid(oldTablespaceOid))))I wonder why the options->tablespaceOid == MyDatabaseTableSpace clause appears again in the second if statement.Since the first if statement would assign InvalidOid to options->tablespaceOid when the first if condition is satisfied.CheersOn Tue, Dec 22, 2020 at 1:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Tue, Dec 22, 2020 at 06:57:41PM +0900, Michael Paquier wrote:\n> On Tue, Dec 22, 2020 at 02:32:05AM -0600, Justin Pryzby wrote:\n> > Also, this one is going to be subsumed by ExecReindex(), so the palloc will go\n> > away (otherwise I would ask to pass it in from the caller):\n> \n> Yeah, maybe. Still you need to be very careful if you have any\n> allocated variables like a tablespace or a path which requires to be\n> in the private context used by ReindexMultipleInternal() or even\n> ReindexRelationConcurrently(), so I am not sure you can avoid that\n> completely. For now, we could choose the option to still use a\n> palloc(), and then save the options in the private contexts. Forgot\n> that in the previous version actually.\n\nI can't see why this still uses memset instead of structure assignment.\n\nNow, I really think utility.c ought to pass in a pointer to a local\nReindexOptions variable to avoid all the memory context, which is unnecessary\nand prone to error.\n\nExecReindex() will set options.tablesapceOid, not a pointer. Like this.\n\nI also changed the callback to be a ReindexOptions and not a pointer.\n\n-- \nJustin",
"msg_date": "Tue, 22 Dec 2020 15:22:19 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 03:22:19PM -0800, Zhihong Yu wrote:\n> Justin:\n> For reindex_index() :\n> \n> + if (options->tablespaceOid == MyDatabaseTableSpace)\n> + options->tablespaceOid = InvalidOid;\n> ...\n> + oldTablespaceOid = iRel->rd_rel->reltablespace;\n> + if (set_tablespace &&\n> + (options->tablespaceOid != oldTablespaceOid ||\n> + (options->tablespaceOid == MyDatabaseTableSpace &&\n> OidIsValid(oldTablespaceOid))))\n> \n> I wonder why the options->tablespaceOid == MyDatabaseTableSpace clause\n> appears again in the second if statement.\n> Since the first if statement would assign InvalidOid\n> to options->tablespaceOid when the first if condition is satisfied.\n\nGood question. Alexey mentioned on Sept 23 that he added the first stanza. to\navoid storing the DB's tablespace OID (rather than InvalidOid).\n\nI think the 2nd half of the \"or\" is unnecessary since that was added setting to\noptions->tablespaceOid = InvalidOid.\nIf requesting to move to the DB's default tablespace, it'll now hit the first\npart of the OR:\n\n> + (options->tablespaceOid != oldTablespaceOid ||\n\nWithout the first stanza setting, it would've hit the 2nd condition:\n\n> + (options->tablespaceOid == MyDatabaseTableSpace && OidIsValid(oldTablespaceOid))))\n\nwhich means: \"user requested to move a table to the DB's default tblspace, and\nit was previously on a nondefault space\".\n\nSo I think we can drop the 2nd half of the OR. Thanks for noticing.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 22 Dec 2020 23:22:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 03:15:37PM -0600, Justin Pryzby wrote:\n> Now, I really think utility.c ought to pass in a pointer to a local\n> ReindexOptions variable to avoid all the memory context, which is unnecessary\n> and prone to error.\n\nYeah, it sounds right to me to just bite the bullet and do this \nrefactoring, limiting the manipulations of the options as much as\npossible across contexts. So +1 from me to merge 0001 and 0002\ntogether.\n\nI have adjusted a couple of comments and simplified a bit more the\ncode in utility.c. I think that this is commitable, but let's wait\nmore than a couple of days for Alvaro and Peter first. This is a\nperiod of vacations for a lot of people, and there is no point to\napply something that would need more work at the end. Using hexas for\nthe flags with bitmasks is the right conclusion IMO, but we are not\nalone.\n\n> ExecReindex() will set options.tablespaceOid, not a pointer. Like\n> this.\n\nOK. Good to know, I have not looked at this part of the patch yet.\n--\nMichael",
"msg_date": "Wed, 23 Dec 2020 16:38:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-23 08:22, Justin Pryzby wrote:\n> On Tue, Dec 22, 2020 at 03:22:19PM -0800, Zhihong Yu wrote:\n>> Justin:\n>> For reindex_index() :\n>> \n>> + if (options->tablespaceOid == MyDatabaseTableSpace)\n>> + options->tablespaceOid = InvalidOid;\n>> ...\n>> + oldTablespaceOid = iRel->rd_rel->reltablespace;\n>> + if (set_tablespace &&\n>> + (options->tablespaceOid != oldTablespaceOid ||\n>> + (options->tablespaceOid == MyDatabaseTableSpace &&\n>> OidIsValid(oldTablespaceOid))))\n>> \n>> I wonder why the options->tablespaceOid == MyDatabaseTableSpace clause\n>> appears again in the second if statement.\n>> Since the first if statement would assign InvalidOid\n>> to options->tablespaceOid when the first if condition is satisfied.\n> \n> Good question. Alexey mentioned on Sept 23 that he added the first \n> stanza. to\n> avoid storing the DB's tablespace OID (rather than InvalidOid).\n> \n> I think the 2nd half of the \"or\" is unnecessary since that was added \n> setting to\n> options->tablespaceOid = InvalidOid.\n> If requesting to move to the DB's default tablespace, it'll now hit the \n> first\n> part of the OR:\n> \n>> + (options->tablespaceOid != oldTablespaceOid ||\n> \n> Without the first stanza setting, it would've hit the 2nd condition:\n> \n>> + (options->tablespaceOid == MyDatabaseTableSpace && \n>> OidIsValid(oldTablespaceOid))))\n> \n> which means: \"user requested to move a table to the DB's default \n> tblspace, and\n> it was previously on a nondefault space\".\n> \n> So I think we can drop the 2nd half of the OR. Thanks for noticing.\n\nYes, I have not noticed that we would have already assigned \ntablespaceOid to InvalidOid in this case. Back to the v7 we were doing \nthis assignment a bit later, so this could make sense, but now it seems \nto be redundant. 
For some reason I have mixed these refactorings \nseparated by a dozen of versions...\n\n\nThanks\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 23 Dec 2020 19:12:11 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-12-23 10:38, Michael Paquier wrote:\n> On Tue, Dec 22, 2020 at 03:15:37PM -0600, Justin Pryzby wrote:\n>> Now, I really think utility.c ought to pass in a pointer to a local\n>> ReindexOptions variable to avoid all the memory context, which is \n>> unnecessary\n>> and prone to error.\n> \n> Yeah, it sounds right to me to just bite the bullet and do this\n> refactoring, limiting the manipulations of the options as much as\n> possible across contexts. So +1 from me to merge 0001 and 0002\n> together.\n> \n> I have adjusted a couple of comments and simplified a bit more the\n> code in utility.c. I think that this is commitable, but let's wait\n> more than a couple of days for Alvaro and Peter first. This is a\n> period of vacations for a lot of people, and there is no point to\n> apply something that would need more work at the end. Using hexas for\n> the flags with bitmasks is the right conclusion IMO, but we are not\n> alone.\n> \n\nAfter eyeballing the patch I can add that we should alter this comment:\n\n\tint\toptions;\t/* bitmask of VacuumOption */\n\nas you are going to replace VacuumOption with VACOPT_* defs. So this \nshould say:\n\n/* bitmask of VACOPT_* */\n\nAlso I have found naming to be a bit inconsistent:\n\n * we have ReindexOptions, but VacuumParams\n * and ReindexOptions->flags, but VacuumParams->options\n\nAnd the last one, you have used bits32 for Cluster/ReindexOptions, but \nleft VacuumParams->options as int. Maybe we should also change it to \nbits32 for consistency?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 23 Dec 2020 19:30:35 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Dec-23, Michael Paquier wrote:\n\n> bool\n> -reindex_relation(Oid relid, int flags, int options)\n> +reindex_relation(Oid relid, int flags, ReindexOptions *options)\n> {\n> \tRelation\trel;\n> \tOid\t\t\ttoast_relid;\n\nWait a minute. reindex_relation has 'flags' and *also* 'options' with\nan embedded 'flags' member? Surely that's not right. I see that they\ncarry orthogonal sets of options ... but why aren't they a single\nbitmask instead of two separate ones? This looks weird and confusing.\n\n\nAlso: it seems a bit weird to me to put the flags inside the options\nstruct. I would keep them separate -- so initially the options struct\nwould only have the tablespace OID, on API cleanliness grounds:\n\nstruct ReindexOptions\n{\n\ttablepaceOid\toid;\n};\nextern bool\nreindex_relation(Oid relid, bits32 flags, ReindexOptions *options);\n\nI guess you could argue that it's more performance to set up only two\narguments to the function call instead of three .. but I doubt that's\nmeasurable for anything in DDL-land.\n\nBut also, are we really envisioning that these routines would have all\nthat many additional options? Maybe it is sufficient to do just\n\nextern bool\nreindex_relation(Oid relid, bits32 flags, tablespaceOid Oid);\n\n\n",
"msg_date": "Wed, 23 Dec 2020 19:22:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 07:22:05PM -0300, Alvaro Herrera wrote:\n> Also: it seems a bit weird to me to put the flags inside the options\n> struct. I would keep them separate -- so initially the options struct\n> would only have the tablespace OID, on API cleanliness grounds:\n\nI don't see why they'd be separate or why it's cleaner ?\n\nIf the user says REINDEX (CONCURRENTLY, VERBOSE, TABLESPACE ts) , why would we\npass around the boolean flags separately from the other options ?\n\n> struct ReindexOptions\n> {\n> \ttablepaceOid\toid;\n> };\n> extern bool\n> reindex_relation(Oid relid, bits32 flags, ReindexOptions *options);\n\n\n> But also, are we really envisioning that these routines would have all\n> that many additional options? Maybe it is sufficient to do just\n> \n> extern bool\n> reindex_relation(Oid relid, bits32 flags, tablespaceOid Oid);\n\nThat's what we did initially, and Michael suggested to put it into a struct.\nWhich makes the tablespace patches cleaner for each of REINDEX, CLUSTER,\nVACUUM, since it doesn't require modifying the signature of 5-10 functions.\nAnd future patches get to reap the benefit.\n\nThese are intended to be like VacuumParams. Consider that ClusterOptions is\nproposed to get not just a tablespaceOid but also an idxtablespaceOid.\n\nThis was getting ugly:\n\nextern void reindex_index(Oid indexId, bool skip_constraint_checks, \n char relpersistence, int options, Oid tablespaceOid); \n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Dec 2020 16:47:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2020-Dec-23, Justin Pryzby wrote:\n\n> This was getting ugly:\n> \n> extern void reindex_index(Oid indexId, bool skip_constraint_checks,\n> char relpersistence, int options, Oid tablespaceOid)Z\n\nIs this what I suggested?\n\n\n",
"msg_date": "Wed, 23 Dec 2020 21:14:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 09:14:18PM -0300, Alvaro Herrera wrote:\n> On 2020-Dec-23, Justin Pryzby wrote:\n> \n> > This was getting ugly:\n> > \n> > extern void reindex_index(Oid indexId, bool skip_constraint_checks,\n> > char relpersistence, int options, Oid tablespaceOid)Z\n> \n> Is this what I suggested?\n\nNo ; that was from an earlier revision of the patch adding the tablespace,\nbefore Michael suggested a ReindexOptions struct, which subsumes 'options' and\n'tablespaceOid'.\n\nI see now that 'skip_constraint_checks' is from REINDEX_REL_CHECK_CONSTRAINTS.\nIt seems liek that should be a REINDEXOPT_* flag, rather than REINDEX_REL_*,\nso doesn't need to be a separate boolean. See also: 2d3320d3d.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Dec 2020 19:07:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 07:07:54PM -0600, Justin Pryzby wrote:\n> On Wed, Dec 23, 2020 at 09:14:18PM -0300, Alvaro Herrera wrote:\n>> On 2020-Dec-23, Justin Pryzby wrote: \n>>> This was getting ugly:\n>>> \n>>> extern void reindex_index(Oid indexId, bool skip_constraint_checks,\n>>> char relpersistence, int options, Oid tablespaceOid)\n>> \n>> Is this what I suggested?\n\nNo idea if this is what you suggested, but it would be annoying to\nhave to change this routine's signature each time we need to pass down\na new option.\n\n> No ; that was from an earlier revision of the patch adding the tablespace,\n> before Michael suggested a ReindexOptions struct, which subsumes 'options' and\n> 'tablespaceOid'.\n> \n> I see now that 'skip_constraint_checks' is from REINDEX_REL_CHECK_CONSTRAINTS.\n> It seems like that should be a REINDEXOPT_* flag, rather than REINDEX_REL_*,\n> so doesn't need to be a separate boolean. See also: 2d3320d3d.\n\nFWIW, it still makes the most sense to me to keep the options that are\nextracted from the grammar or things that apply to all the\nsub-routines of REINDEX to be tracked in a single structure, so this\nshould include only the REINDEXOPT_* set for now, with the tablespace\nOID as of this thread, and also the reindex filtering options.\nREINDEX_REL_* is in my opinion of a different family because they only\napply to reindex_relation(), and partially to reindex_index(), so they\nare very localized. In short, anything in need of only\nreindex_relation() has no need to know about the whole ReindexOption\nbusiness.\n--\nMichael",
"msg_date": "Thu, 24 Dec 2020 10:50:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 10:50:34AM +0900, Michael Paquier wrote:\n> FWIW, it still makes the most sense to me to keep the options that are\n> extracted from the grammar or things that apply to all the\n> sub-routines of REINDEX to be tracked in a single structure, so this\n> should include only the REINDEXOPT_* set for now, with the tablespace\n> OID as of this thread, and also the reindex filtering options.\n> REINDEX_REL_* is in my opinion of a different family because they only\n> apply to reindex_relation(), and partially to reindex_index(), so they\n> are very localized. In short, anything in need of only\n> reindex_relation() has no need to know about the whole ReindexOption\n> business.\n\nI need more coffee here.. reindex_relation() knows about\nReindexOptions. Still it would be weird to track REINDEX_REL_* at a\nglobal level as ExecReindex(), ReindexTable(), ReindexMultipleTables()\nand the like don't need to know about that.\n--\nMichael",
"msg_date": "Thu, 24 Dec 2020 11:18:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 07:30:35PM +0300, Alexey Kondratov wrote:\n> After eyeballing the patch I can add that we should alter this comment:\n> \n> \tint\toptions;\t/* bitmask of VacuumOption */\n> \n> as you are going to replace VacuumOption with VACOPT_* defs. So this should\n> say:\n> \n> /* bitmask of VACOPT_* */\n\nCheck.\n\n> \n> Also I have found naming to be a bit inconsistent:\n> * we have ReindexOptions, but VacuumParams\n> * and ReindexOptions->flags, but VacuumParams->options\n\nCheck. As ReindexOptions and ClusterOptions are the new members of\nthe family here, we could change them to use Params instead with\n\"options\" as bits32 internally.\n\n> And the last one, you have used bits32 for Cluster/ReindexOptions, but left\n> VacuumParams->options as int. Maybe we should also change it to bits32 for\n> consistency?\n\nYeah, that makes sense. I'll send an updated patch based on that.\n--\nMichael",
"msg_date": "Wed, 13 Jan 2021 17:22:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 05:22:49PM +0900, Michael Paquier wrote:\n> Yeah, that makes sense. I'll send an updated patch based on that.\n\nAnd here you go as per the attached. I don't think that there was\nanything remaining on my radar. This version still needs to be\nindented properly though.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 13 Jan 2021 20:34:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the flyome comments from Alexey about the inconsistencies of the structures"
},
{
"msg_contents": "On 2021-01-13 14:34, Michael Paquier wrote:\n> On Wed, Jan 13, 2021 at 05:22:49PM +0900, Michael Paquier wrote:\n>> Yeah, that makes sense. I'll send an updated patch based on that.\n> \n> And here you go as per the attached. I don't think that there was\n> anything remaining on my radar. This version still needs to be\n> indented properly though.\n> \n> Thoughts?\n> \n\nThanks.\n\n+\tbits32\t\toptions;\t\t\t/* bitmask of CLUSTEROPT_* */\n\nThis should say '/* bitmask of CLUOPT_* */', I guess, since there are \nonly CLUOPT's defined. Otherwise, everything looks as per discussed \nupthread.\n\nBy the way, something went wrong with the last email subject, so I have \nchanged it back to the original in this response. I also attached your \npatch (with only this CLUOPT_* correction) to keep it in the thread for \nsure. Although, postgresql.org's web archive is clever enough to link \nyour email to the same thread even with different subject.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 13 Jan 2021 16:39:40 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 04:39:40PM +0300, Alexey Kondratov wrote:\n> +\tbits32\t\toptions;\t\t\t/* bitmask of CLUSTEROPT_* */\n> \n> This should say '/* bitmask of CLUOPT_* */', I guess, since there are only\n> CLUOPT's defined. Otherwise, everything looks as per discussed upthread.\n\nIndeed. Let's first wait a couple of days and see if others have any\ncomments or objections about this v6.\n\n> By the way, something went wrong with the last email subject, so I have\n> changed it back to the original in this response. I also attached your patch\n> (with only this CLUOPT_* correction) to keep it in the thread for sure.\n> Although, postgresql.org's web archive is clever enough to link your email\n> to the same thread even with different subject.\n\nOops. Not sure what went wrong here. Thanks.\n--\nMichael",
"msg_date": "Thu, 14 Jan 2021 14:18:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 02:18:45PM +0900, Michael Paquier wrote:\n> Indeed. Let's first wait a couple of days and see if others have any\n> comments or objections about this v6.\n\nHearing nothing, I have looked at that again this morning and applied\nv6 after a reindent and some adjustments in the comments.\n--\nMichael",
"msg_date": "Mon, 18 Jan 2021 14:12:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> with an \"int options\" bitmask which is passed to reindex_index() et al. Your\n> patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> which I think is good.\n\na3dc926 is an equivalent of 0001~0003 merged together. 0008 had\nbetter be submitted into a separate thread if there is value to it.\n0004~0007 are the pieces remaining. Could it be possible to rebase\nthings on HEAD and put the tablespace bits into the structures \n{Vacuum,Reindex,Cluster}Params?\n--\nMichael",
"msg_date": "Mon, 18 Jan 2021 14:18:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 02:18:44PM +0900, Michael Paquier wrote:\n> On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> > It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> > In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> > with an \"int options\" bitmask which is passed to reindex_index() et al. Your\n> > patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> > which I think is good.\n> \n> a3dc926 is an equivalent of 0001~0003 merged together. 0008 had\n> better be submitted into a separate thread if there is value to it.\n> 0004~0007 are the pieces remaining. Could it be possible to rebase\n> things on HEAD and put the tablespace bits into the structures \n> {Vacuum,Reindex,Cluster}Params?\n\nAttached. I will re-review these myself tomorrow.\n\n-- \nJustin",
"msg_date": "Mon, 18 Jan 2021 02:37:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi,\nFor 0001-Allow-REINDEX-to-change-tablespace.patch :\n\n+ * InvalidOid, use the tablespace in-use instead.\n\n'in-use' seems a bit redundant in the sentence.\nHow about : InvalidOid, use the tablespace of the index instead.\n\nCheers\n\nOn Mon, Jan 18, 2021 at 12:38 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jan 18, 2021 at 02:18:44PM +0900, Michael Paquier wrote:\n> > On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> > > It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and\n> ReindexParams\n> > > In my v31 patch, I moved ReindexOptions to a private structure in\n> indexcmds.c,\n> > > with an \"int options\" bitmask which is passed to reindex_index() et\n> al. Your\n> > > patch keeps/puts ReindexOptions index.h, so it also applies to\n> reindex_index,\n> > > which I think is good.\n> >\n> > a3dc926 is an equivalent of 0001~0003 merged together. 0008 had\n> > better be submitted into a separate thread if there is value to it.\n> > 0004~0007 are the pieces remaining. Could it be possible to rebase\n> > things on HEAD and put the tablespace bits into the structures\n> > {Vacuum,Reindex,Cluster}Params?\n>\n> Attached. I will re-review these myself tomorrow.\n>\n> --\n> Justin\n>\n\nHi,For 0001-Allow-REINDEX-to-change-tablespace.patch :+ * InvalidOid, use the tablespace in-use instead.'in-use' seems a bit redundant in the sentence.How about : InvalidOid, use the tablespace of the index instead.CheersOn Mon, Jan 18, 2021 at 12:38 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Mon, Jan 18, 2021 at 02:18:44PM +0900, Michael Paquier wrote:\n> On Sat, Dec 12, 2020 at 01:45:26PM -0600, Justin Pryzby wrote:\n> > It's a lot like what I wrote as [PATCH v31 1/5] ExecReindex and ReindexParams\n> > In my v31 patch, I moved ReindexOptions to a private structure in indexcmds.c,\n> > with an \"int options\" bitmask which is passed to reindex_index() et al. 
Your\n> > patch keeps/puts ReindexOptions index.h, so it also applies to reindex_index,\n> > which I think is good.\n> \n> a3dc926 is an equivalent of 0001~0003 merged together. 0008 had\n> better be submitted into a separate thread if there is value to it.\n> 0004~0007 are the pieces remaining. Could it be possible to rebase\n> things on HEAD and put the tablespace bits into the structures \n> {Vacuum,Reindex,Cluster}Params?\n\nAttached. I will re-review these myself tomorrow.\n\n-- \nJustin",
"msg_date": "Mon, 18 Jan 2021 07:57:04 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 02:37:57AM -0600, Justin Pryzby wrote:\n> Attached. I will re-review these myself tomorrow.\n\nI have begun looking at 0001 and 0002...\n\n+/*\n+ * This is mostly duplicating ATExecSetTableSpaceNoStorage,\n+ * which should maybe be factored out to a library function.\n+ */\nWouldn't it be better to do first the refactoring of 0002 and then\n0001 so as REINDEX can use the new routine, instead of putting that\ninto a comment?\n\n+ This specifies that indexes will be rebuilt on a new tablespace.\n+ Cannot be used with \"mapped\" relations. If <literal>SCHEMA</literal>,\n+ <literal>DATABASE</literal> or <literal>SYSTEM</literal> is specified, then\n+ all unsuitable relations will be skipped and a single <literal>WARNING</literal>\n+ will be generated.\nWhat is an unsuitable relation? How can the end user know that?\n\nThis is missing ACL checks when moving the index into a new location,\nso this requires some pg_tablespace_aclcheck() calls, and the other\npatches share the same issue.\n\n+ else if (partkind == RELKIND_PARTITIONED_TABLE)\n+ {\n+ Relation rel = table_open(partoid, ShareLock);\n+ List *indexIds = RelationGetIndexList(rel);\n+ ListCell *lc;\n+\n+ table_close(rel, NoLock);\n+ foreach (lc, indexIds)\n+ {\n+ Oid indexid = lfirst_oid(lc);\n+ (void) set_rel_tablespace(indexid, params->tablespaceOid);\n+ }\n+ }\nThis is really a good question. ReindexPartitions() would trigger one\ntransaction per leaf to work on. Changing the tablespace of the\npartitioned table(s) before doing any work has the advantage to tell\nany new partition to use the new tablespace. Now, I see a struggling\npoint here: what should we do if the processing fails in the middle of\nthe move, leaving a portion of the leaves in the previous tablespace?\nOn a follow-up reindex with the same command, should the command force\na reindex even on the partitions that have been moved? 
Or could there\nbe a point in skipping the partitions that are already on the new\ntablespace and only process the ones on the previous tablespace? It\nseems to me that the first scenario makes the most sense as currently\na REINDEX works on all the relations defined, though there could be\nuse cases for the second case. This should be documented, I think.\n\nThere are no tests for partitioned tables, aka we'd want to make sure\nthat the new partitioned index is on the correct tablespace, as well\nas all its leaves. It may be better to have at least two levels of\npartitioned tables, as well as a partitioned table with no leaves in\nthe cases dealt with.\n\n+ *\n+ * Even if a table's indexes were moved to a new tablespace, the index\n+ * on its toast table is not normally moved.\n */\nStill, REINDEX (TABLESPACE) TABLE should move all of them to be\nconsistent with ALTER TABLE SET TABLESPACE, but that's not the case\nwith this code, no? This requires proper test coverage, but there is\nnothing of the kind in this patch.\n--\nMichael",
"msg_date": "Wed, 20 Jan 2021 20:53:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
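[Editor's note: the command form under review in the message above can be sketched as follows. The syntax is the one proposed by the patch in this thread, not a released feature at the time of writing, and all object names are hypothetical.]

```sql
-- Proposed syntax from the patch under review (hypothetical names).
CREATE TABLESPACE regress_tblspace LOCATION '/path/to/tablespace';

-- Rebuild one index, placing the rebuilt index in the new tablespace.
REINDEX (TABLESPACE regress_tblspace) INDEX test_tbl_idx;

-- Rebuild all indexes of a table the same way.
REINDEX (TABLESPACE regress_tblspace) TABLE test_tbl;

-- With SCHEMA, DATABASE or SYSTEM, relations that cannot be moved
-- (e.g. mapped relations) are skipped with a single WARNING.
REINDEX (TABLESPACE regress_tblspace) SCHEMA public;
```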
{
"msg_contents": "On 2021-Jan-20, Michael Paquier wrote:\n\n> +/*\n> + * This is mostly duplicating ATExecSetTableSpaceNoStorage,\n> + * which should maybe be factored out to a library function.\n> + */\n> Wouldn't it be better to do first the refactoring of 0002 and then\n> 0001 so as REINDEX can use the new routine, instead of putting that\n> into a comment?\n\nI think merging 0001 and 0002 into a single commit is a reasonable\napproach. I don't oppose an initial refactoring commit if you want to\ndo that, but it doesn't seem necessary.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 20 Jan 2021 12:47:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-Jan-20, Alvaro Herrera wrote:\n\n> On 2021-Jan-20, Michael Paquier wrote:\n> \n> > +/*\n> > + * This is mostly duplicating ATExecSetTableSpaceNoStorage,\n> > + * which should maybe be factored out to a library function.\n> > + */\n> > Wouldn't it be better to do first the refactoring of 0002 and then\n> > 0001 so as REINDEX can use the new routine, instead of putting that\n> > into a comment?\n> \n> I think merging 0001 and 0002 into a single commit is a reasonable\n> approach.\n\n... except it doesn't make a lot of sense to have set_rel_tablespace in\neither indexcmds.c or index.c. I think tablecmds.c is a better place\nfor it. (I would have thought catalog/storage.c, but that one's not the\nright abstraction level it seems.)\n\nBut surely ATExecSetTableSpaceNoStorage should be using this new\nroutine. (I first thought 0002 was doing that, since that commit is\ncalling itself a \"refactoring\", but now that I look closer, it's not.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)\n\n\n",
"msg_date": "Wed, 20 Jan 2021 12:54:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-20 18:54, Alvaro Herrera wrote:\n> On 2021-Jan-20, Alvaro Herrera wrote:\n> \n>> On 2021-Jan-20, Michael Paquier wrote:\n>> \n>> > +/*\n>> > + * This is mostly duplicating ATExecSetTableSpaceNoStorage,\n>> > + * which should maybe be factored out to a library function.\n>> > + */\n>> > Wouldn't it be better to do first the refactoring of 0002 and then\n>> > 0001 so as REINDEX can use the new routine, instead of putting that\n>> > into a comment?\n>> \n>> I think merging 0001 and 0002 into a single commit is a reasonable\n>> approach.\n> \n> ... except it doesn't make a lot of sense to have set_rel_tablespace in\n> either indexcmds.c or index.c. I think tablecmds.c is a better place\n> for it. (I would have thought catalog/storage.c, but that one's not \n> the\n> right abstraction level it seems.)\n> \n\nI did a refactoring of ATExecSetTableSpaceNoStorage() in the 0001. New \nfunction SetRelTablesapce() is placed into the tablecmds.c. Following \n0002 gets use of it. Is it close to what you and Michael suggested?\n\n> \n> But surely ATExecSetTableSpaceNoStorage should be using this new\n> routine. (I first thought 0002 was doing that, since that commit is\n> calling itself a \"refactoring\", but now that I look closer, it's not.)\n> \n\nYeah, this 'refactoring' was initially referring to refactoring of what \nJustin added to one of the previous 0001. And it was meant to be merged \nwith 0001, once agreed, but we got distracted by other stuff.\n\nI have not yet addressed Michael's concerns regarding reindex of \npartitions. I am going to look closer on it tomorrow.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 20 Jan 2021 21:08:11 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-20 21:08, Alexey Kondratov wrote:\n> On 2021-01-20 18:54, Alvaro Herrera wrote:\n>> On 2021-Jan-20, Alvaro Herrera wrote:\n>> \n>>> On 2021-Jan-20, Michael Paquier wrote:\n>>> \n>>> > +/*\n>>> > + * This is mostly duplicating ATExecSetTableSpaceNoStorage,\n>>> > + * which should maybe be factored out to a library function.\n>>> > + */\n>>> > Wouldn't it be better to do first the refactoring of 0002 and then\n>>> > 0001 so as REINDEX can use the new routine, instead of putting that\n>>> > into a comment?\n>>> \n>>> I think merging 0001 and 0002 into a single commit is a reasonable\n>>> approach.\n>> \n>> ... except it doesn't make a lot of sense to have set_rel_tablespace \n>> in\n>> either indexcmds.c or index.c. I think tablecmds.c is a better place\n>> for it. (I would have thought catalog/storage.c, but that one's not \n>> the\n>> right abstraction level it seems.)\n>> \n> \n> I did a refactoring of ATExecSetTableSpaceNoStorage() in the 0001. New\n> function SetRelTablesapce() is placed into the tablecmds.c. Following\n> 0002 gets use of it. Is it close to what you and Michael suggested?\n> \n\nUgh, forgot to attach the patches. Here they are.\n\n-- \nAlexey",
"msg_date": "Wed, 20 Jan 2021 21:10:14 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-Jan-20, Alexey Kondratov wrote:\n\n> On 2021-01-20 21:08, Alexey Kondratov wrote:\n> > \n> > I did a refactoring of ATExecSetTableSpaceNoStorage() in the 0001. New\n> > function SetRelTablesapce() is placed into the tablecmds.c. Following\n> > 0002 gets use of it. Is it close to what you and Michael suggested?\n> \n> Ugh, forgot to attach the patches. Here they are.\n\nYeah, looks reasonable.\n\n> +\t/* No work if no change in tablespace. */\n> +\toldTablespaceOid = rd_rel->reltablespace;\n> +\tif (tablespaceOid != oldTablespaceOid ||\n> +\t\t(tablespaceOid == MyDatabaseTableSpace && OidIsValid(oldTablespaceOid)))\n> +\t{\n> +\t\t/* Update the pg_class row. */\n> +\t\trd_rel->reltablespace = (tablespaceOid == MyDatabaseTableSpace) ?\n> +\t\t\tInvalidOid : tablespaceOid;\n> +\t\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n> +\n> +\t\tchanged = true;\n> +\t}\n> +\n> +\tif (changed)\n> +\t\t/* Record dependency on tablespace */\n> +\t\tchangeDependencyOnTablespace(RelationRelationId,\n> +\t\t\t\t\t\t\t\t\t reloid, rd_rel->reltablespace);\n\nWhy have a separate \"if (changed)\" block here instead of merging with\nthe above?\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 20 Jan 2021 15:34:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 03:34:39PM -0300, Alvaro Herrera wrote:\n> On 2021-Jan-20, Alexey Kondratov wrote:\n>> Ugh, forgot to attach the patches. Here they are.\n> \n> Yeah, looks reasonable.\n\nPatch 0002 still has a whole set of issues as I pointed out a couple\nof hours ago, but if we agree on 0001 as being useful if done\nindependently, I'd rather get that done first. This way or just\nmerging both things in a single commit is not a big deal seeing the\namount of code, so I am fine with any approach. It may be possible\nthat 0001 requires more changes depending on the work to-be-done for\n0002 though?\n\n>> +\t/* No work if no change in tablespace. */\n>> +\toldTablespaceOid = rd_rel->reltablespace;\n>> +\tif (tablespaceOid != oldTablespaceOid ||\n>> +\t\t(tablespaceOid == MyDatabaseTableSpace && OidIsValid(oldTablespaceOid)))\n>> +\t{\n>> +\t\t/* Update the pg_class row. */\n>> +\t\trd_rel->reltablespace = (tablespaceOid == MyDatabaseTableSpace) ?\n>> +\t\t\tInvalidOid : tablespaceOid;\n>> +\t\tCatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n>> +\n>> +\t\tchanged = true;\n>> +\t}\n>> +\n>> +\tif (changed)\n>> +\t\t/* Record dependency on tablespace */\n>> +\t\tchangeDependencyOnTablespace(RelationRelationId,\n>> +\t\t\t\t\t\t\t\t\t reloid, rd_rel->reltablespace);\n> \n> Why have a separate \"if (changed)\" block here instead of merging with\n> the above?\n\nYep.\n\n+ if (SetRelTablespace(reloid, newTableSpace))\n+ /* Make sure the reltablespace change is visible */\n+ CommandCounterIncrement();\nAt quick glance, I am wondering why you just don't do a CCI within\nSetRelTablespace().\n--\nMichael",
"msg_date": "Thu, 21 Jan 2021 10:41:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-21 04:41, Michael Paquier wrote:\n> On Wed, Jan 20, 2021 at 03:34:39PM -0300, Alvaro Herrera wrote:\n>> On 2021-Jan-20, Alexey Kondratov wrote:\n>>> Ugh, forgot to attach the patches. Here they are.\n>> \n>> Yeah, looks reasonable.\n> \n>>> +\n>>> +\tif (changed)\n>>> +\t\t/* Record dependency on tablespace */\n>>> +\t\tchangeDependencyOnTablespace(RelationRelationId,\n>>> +\t\t\t\t\t\t\t\t\t reloid, rd_rel->reltablespace);\n>> \n>> Why have a separate \"if (changed)\" block here instead of merging with\n>> the above?\n> \n> Yep.\n> \n\nSure, this is a refactoring artifact.\n\n> + if (SetRelTablespace(reloid, newTableSpace))\n> + /* Make sure the reltablespace change is visible */\n> + CommandCounterIncrement();\n> At quick glance, I am wondering why you just don't do a CCI within\n> SetRelTablespace().\n> \n\nI did it that way for a better readability at first, since it looks more \nnatural when you do some change (SetRelTablespace) and then make them \nvisible with CCI. Second argument was that in the case of \nreindex_index() we have to also call RelationAssumeNewRelfilenode() and \nRelationDropStorage() before doing CCI and making the new tablespace \nvisible. And this part is critical, I guess.\n\n> \n> + This specifies that indexes will be rebuilt on a new tablespace.\n> + Cannot be used with \"mapped\" relations. If \n> <literal>SCHEMA</literal>,\n> + <literal>DATABASE</literal> or <literal>SYSTEM</literal> is\n> specified, then\n> + all unsuitable relations will be skipped and a single\n> <literal>WARNING</literal>\n> + will be generated.\n> What is an unsuitable relation? How can the end user know that?\n> \n\nThis was referring to mapped relations mentioned in the previous \nsentence. I have tried to rewrite this part and make it more specific in \nmy current version. 
Also added Justin's changes to the docs and comment.\n\n> This is missing ACL checks when moving the index into a new location,\n> so this requires some pg_tablespace_aclcheck() calls, and the other\n> patches share the same issue.\n> \n\nI added proper pg_tablespace_aclcheck()'s into the reindex_index() and \nReindexPartitions().\n\n> + else if (partkind == RELKIND_PARTITIONED_TABLE)\n> + {\n> + Relation rel = table_open(partoid, ShareLock);\n> + List *indexIds = RelationGetIndexList(rel);\n> + ListCell *lc;\n> +\n> + table_close(rel, NoLock);\n> + foreach (lc, indexIds)\n> + {\n> + Oid indexid = lfirst_oid(lc);\n> + (void) set_rel_tablespace(indexid, \n> params->tablespaceOid);\n> + }\n> + }\n> This is really a good question. ReindexPartitions() would trigger one\n> transaction per leaf to work on. Changing the tablespace of the\n> partitioned table(s) before doing any work has the advantage to tell\n> any new partition to use the new tablespace. Now, I see a struggling\n> point here: what should we do if the processing fails in the middle of\n> the move, leaving a portion of the leaves in the previous tablespace?\n> On a follow-up reindex with the same command, should the command force\n> a reindex even on the partitions that have been moved? Or could there\n> be a point in skipping the partitions that are already on the new\n> tablespace and only process the ones on the previous tablespace? It\n> seems to me that the first scenario makes the most sense as currently\n> a REINDEX works on all the relations defined, though there could be\n> use cases for the second case. This should be documented, I think.\n> \n\nI agree that follow-up REINDEX should also reindex moved partitions, \nsince REINDEX (TABLESPACE ...) is still reindex at first. I will try to \nput something about this part into the docs. 
Also I think that we cannot \nbe sure that nothing happened with already reindexed partitions between \ntwo consequent REINDEX calls.\n\n> There are no tests for partitioned tables, aka we'd want to make sure\n> that the new partitioned index is on the correct tablespace, as well\n> as all its leaves. It may be better to have at least two levels of\n> partitioned tables, as well as a partitioned table with no leaves in\n> the cases dealt with.\n> \n\nYes, sure, it makes sense.\n\n> + *\n> + * Even if a table's indexes were moved to a new tablespace, \n> the index\n> + * on its toast table is not normally moved.\n> */\n> Still, REINDEX (TABLESPACE) TABLE should move all of them to be\n> consistent with ALTER TABLE SET TABLESPACE, but that's not the case\n> with this code, no? This requires proper test coverage, but there is\n> nothing of the kind in this patch.\n\nYou are right, we do not move TOAST indexes now, since \nIsSystemRelation() is true for TOAST indexes, so I thought that we \nshould not allow moving them without allow_system_table_mods=true. Now I \nwonder why ALTER TABLE does that.\n\nI am going to attach the new version of patch set today or tomorrow.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 21 Jan 2021 17:06:06 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
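[Editor's note: a test layout of the shape requested above — two levels of partitioned tables plus a partitioned table with no leaves — might look like the sketch below. Names are hypothetical and the TABLESPACE clause is the one proposed by the patch.]

```sql
-- Hypothetical sketch of the partition-tree coverage discussed above.
CREATE TABLE prt (a int) PARTITION BY RANGE (a);
CREATE TABLE prt_1 PARTITION OF prt
    FOR VALUES FROM (0) TO (100) PARTITION BY RANGE (a);  -- second level
CREATE TABLE prt_1_1 PARTITION OF prt_1 FOR VALUES FROM (0) TO (50);
CREATE TABLE prt_empty (a int) PARTITION BY RANGE (a);    -- no leaves
CREATE INDEX prt_a_idx ON prt (a);
CREATE INDEX prt_empty_a_idx ON prt_empty (a);

-- Must run outside a transaction block for partitioned relations.
REINDEX (TABLESPACE regress_tblspace) TABLE prt;
REINDEX (TABLESPACE regress_tblspace) TABLE prt_empty;

-- The partitioned indexes and every leaf index should now report the
-- new tablespace.
SELECT relname FROM pg_class
WHERE relname LIKE 'prt%idx'
  AND reltablespace = (SELECT oid FROM pg_tablespace
                       WHERE spcname = 'regress_tblspace')
ORDER BY relname;
```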
{
"msg_contents": "On 2021-01-21 17:06, Alexey Kondratov wrote:\n> On 2021-01-21 04:41, Michael Paquier wrote:\n> \n>> There are no tests for partitioned tables, aka we'd want to make sure\n>> that the new partitioned index is on the correct tablespace, as well\n>> as all its leaves. It may be better to have at least two levels of\n>> partitioned tables, as well as a partitioned table with no leaves in\n>> the cases dealt with.\n>> \n> \n> Yes, sure, it makes sense.\n> \n>> + *\n>> + * Even if a table's indexes were moved to a new tablespace, \n>> the index\n>> + * on its toast table is not normally moved.\n>> */\n>> Still, REINDEX (TABLESPACE) TABLE should move all of them to be\n>> consistent with ALTER TABLE SET TABLESPACE, but that's not the case\n>> with this code, no? This requires proper test coverage, but there is\n>> nothing of the kind in this patch.\n> \n> You are right, we do not move TOAST indexes now, since\n> IsSystemRelation() is true for TOAST indexes, so I thought that we\n> should not allow moving them without allow_system_table_mods=true. Now\n> I wonder why ALTER TABLE does that.\n> \n> I am going to attach the new version of patch set today or tomorrow.\n> \n\nAttached is a new patch set of first two patches, that should resolve \nall the issues raised before (ACL, docs, tests) excepting TOAST. Double \nthanks for suggestion to add more tests with nested partitioning. I have \nfound and squashed a huge bug related to the returning back to the \ndefault tablespace using newly added tests.\n\nRegarding TOAST. Now we skip moving toast indexes or throw error if \nsomeone wants to move TOAST index directly. 
I had a look on ALTER TABLE \nSET TABLESPACE and it has a bit complicated logic:\n\n1) You cannot move TOAST table directly.\n2) But if you move basic relation that TOAST table belongs to, then they \nare moved altogether.\n3) Same logic as 2) happens if one does ALTER TABLE ALL IN TABLESPACE \n...\n\nThat way, ALTER TABLE allows moving TOAST tables (with indexes) \nimplicitly, but does not allow doing that explicitly. In the same time I \nfound docs to be vague about such behavior it only says:\n\n All tables in the current database in a tablespace can be moved\n by using the ALL IN TABLESPACE ... Note that system catalogs are\n not moved by this command\n\n Changing any part of a system catalog table is not permitted.\n\nSo actually ALTER TABLE treats TOAST relations as system sometimes, but \nsometimes not.\n\n From the end user perspective it makes sense to move TOAST with main \ntable when doing ALTER TABLE SET TABLESPACE. But should we touch indexes \non TOAST table with REINDEX? We cannot move TOAST relation itself, since \nwe are doing only a reindex, so we end up in the state when TOAST table \nand its index are placed in the different tablespaces. This state is not \nreachable with ALTER TABLE/INDEX, so it seem we should not allow it with \nREINDEX as well, should we?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 21 Jan 2021 23:48:08 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
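[Editor's note: the three-point ALTER TABLE behavior summarized above can be illustrated like this. Names are hypothetical, and the exact TOAST relation name varies per installation.]

```sql
-- 1) A TOAST table cannot be moved directly; it is treated as a
--    system catalog here and the command is rejected
--    (unless allow_system_table_mods is set).
ALTER TABLE pg_toast.pg_toast_16384 SET TABLESPACE regress_tblspace;

-- 2) Moving the owning table implicitly moves its TOAST table and the
--    TOAST index along with it.
ALTER TABLE test_tbl SET TABLESPACE regress_tblspace;

-- 3) The bulk form behaves like 2) for each table it moves; system
--    catalogs themselves are not moved.
ALTER TABLE ALL IN TABLESPACE pg_default SET TABLESPACE regress_tblspace;
```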
{
"msg_contents": "On Thu, Jan 21, 2021 at 11:48:08PM +0300, Alexey Kondratov wrote:\n> Attached is a new patch set of first two patches, that should resolve all\n> the issues raised before (ACL, docs, tests) excepting TOAST. Double thanks\n> for suggestion to add more tests with nested partitioning. I have found and\n> squashed a huge bug related to the returning back to the default tablespace\n> using newly added tests.\n> \n> Regarding TOAST. Now we skip moving toast indexes or throw error if someone\n> wants to move TOAST index directly. I had a look on ALTER TABLE SET\n> TABLESPACE and it has a bit complicated logic:\n> \n> 1) You cannot move TOAST table directly.\n> 2) But if you move basic relation that TOAST table belongs to, then they are\n> moved altogether.\n> 3) Same logic as 2) happens if one does ALTER TABLE ALL IN TABLESPACE ...\n> \n> That way, ALTER TABLE allows moving TOAST tables (with indexes) implicitly,\n> but does not allow doing that explicitly. In the same time I found docs to\n> be vague about such behavior it only says:\n> \n> All tables in the current database in a tablespace can be moved\n> by using the ALL IN TABLESPACE ... Note that system catalogs are\n> not moved by this command\n> \n> Changing any part of a system catalog table is not permitted.\n> \n> So actually ALTER TABLE treats TOAST relations as system sometimes, but\n> sometimes not.\n> \n> From the end user perspective it makes sense to move TOAST with main table\n> when doing ALTER TABLE SET TABLESPACE. But should we touch indexes on TOAST\n> table with REINDEX? We cannot move TOAST relation itself, since we are doing\n> only a reindex, so we end up in the state when TOAST table and its index are\n> placed in the different tablespaces. 
This state is not reachable with ALTER\n> TABLE/INDEX, so it seem we should not allow it with REINDEX as well, should\n> we?\n\n> +\t\t * Even if a table's indexes were moved to a new tablespace, the index\n> +\t\t * on its toast table is not normally moved.\n> \t\t */\n> \t\tReindexParams newparams = *params;\n> \n> \t\tnewparams.options &= ~(REINDEXOPT_MISSING_OK);\n> +\t\tif (!allowSystemTableMods)\n> +\t\t\tnewparams.tablespaceOid = InvalidOid;\n\nI think you're right. So actually TOAST should never move, even if\nallowSystemTableMods, right ?\n\n> @@ -292,7 +315,11 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n> with <command>REINDEX INDEX</command> or <command>REINDEX TABLE</command>,\n> respectively. Each partition of the specified partitioned relation is\n> reindexed in a separate transaction. Those commands cannot be used inside\n> - a transaction block when working on a partitioned table or index.\n> + a transaction block when working on a partitioned table or index. If\n> + <command>REINDEX</command> with <literal>TABLESPACE</literal> executed\n> + on partitioned relation fails it may have moved some partitions to the new\n> + tablespace. Repeated command will still reindex all partitions even if they\n> + are already in the new tablespace.\n\nMinor corrections here:\n\nIf a <command>REINDEX</command> command fails when run on a partitioned\nrelation, and <literal>TABLESPACE</literal> was specified, then it may have\nmoved indexes on some partitions to the new tablespace. Re-running the command\nwill reindex all partitions and move previously-unprocessed indexes to the new\ntablespace.\n\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jan 2021 15:26:51 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-22 00:26, Justin Pryzby wrote:\n> On Thu, Jan 21, 2021 at 11:48:08PM +0300, Alexey Kondratov wrote:\n>> Attached is a new patch set of first two patches, that should resolve \n>> all\n>> the issues raised before (ACL, docs, tests) excepting TOAST. Double \n>> thanks\n>> for suggestion to add more tests with nested partitioning. I have \n>> found and\n>> squashed a huge bug related to the returning back to the default \n>> tablespace\n>> using newly added tests.\n>> \n>> Regarding TOAST. Now we skip moving toast indexes or throw error if \n>> someone\n>> wants to move TOAST index directly. I had a look on ALTER TABLE SET\n>> TABLESPACE and it has a bit complicated logic:\n>> \n>> 1) You cannot move TOAST table directly.\n>> 2) But if you move basic relation that TOAST table belongs to, then \n>> they are\n>> moved altogether.\n>> 3) Same logic as 2) happens if one does ALTER TABLE ALL IN TABLESPACE \n>> ...\n>> \n>> That way, ALTER TABLE allows moving TOAST tables (with indexes) \n>> implicitly,\n>> but does not allow doing that explicitly. In the same time I found \n>> docs to\n>> be vague about such behavior it only says:\n>> \n>> All tables in the current database in a tablespace can be moved\n>> by using the ALL IN TABLESPACE ... Note that system catalogs are\n>> not moved by this command\n>> \n>> Changing any part of a system catalog table is not permitted.\n>> \n>> So actually ALTER TABLE treats TOAST relations as system sometimes, \n>> but\n>> sometimes not.\n>> \n>> From the end user perspective it makes sense to move TOAST with main \n>> table\n>> when doing ALTER TABLE SET TABLESPACE. But should we touch indexes on \n>> TOAST\n>> table with REINDEX? We cannot move TOAST relation itself, since we are \n>> doing\n>> only a reindex, so we end up in the state when TOAST table and its \n>> index are\n>> placed in the different tablespaces. 
This state is not reachable with \n>> ALTER\n>> TABLE/INDEX, so it seem we should not allow it with REINDEX as well, \n>> should\n>> we?\n> \n>> +\t\t * Even if a table's indexes were moved to a new tablespace, the \n>> index\n>> +\t\t * on its toast table is not normally moved.\n>> \t\t */\n>> \t\tReindexParams newparams = *params;\n>> \n>> \t\tnewparams.options &= ~(REINDEXOPT_MISSING_OK);\n>> +\t\tif (!allowSystemTableMods)\n>> +\t\t\tnewparams.tablespaceOid = InvalidOid;\n> \n> I think you're right. So actually TOAST should never move, even if\n> allowSystemTableMods, right ?\n> \n\nI think so. I would prefer to do not move TOAST indexes implicitly at \nall during reindex.\n\n> \n>> @@ -292,7 +315,11 @@ REINDEX [ ( <replaceable \n>> class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n>> with <command>REINDEX INDEX</command> or <command>REINDEX \n>> TABLE</command>,\n>> respectively. Each partition of the specified partitioned relation \n>> is\n>> reindexed in a separate transaction. Those commands cannot be used \n>> inside\n>> - a transaction block when working on a partitioned table or index.\n>> + a transaction block when working on a partitioned table or index. \n>> If\n>> + <command>REINDEX</command> with <literal>TABLESPACE</literal> \n>> executed\n>> + on partitioned relation fails it may have moved some partitions to \n>> the new\n>> + tablespace. Repeated command will still reindex all partitions \n>> even if they\n>> + are already in the new tablespace.\n> \n> Minor corrections here:\n> \n> If a <command>REINDEX</command> command fails when run on a partitioned\n> relation, and <literal>TABLESPACE</literal> was specified, then it may \n> have\n> moved indexes on some partitions to the new tablespace. 
Re-running the \n> command\n> will reindex all partitions and move previously-unprocessed indexes to \n> the new\n> tablespace.\n\nSounds good to me.\n\nI have updated patches accordingly and also simplified tablespaceOid \nchecks and assignment in the newly added SetRelTableSpace(). Result is \nattached as two separate patches for an ease of review, but no \nobjections to merge them and apply at once if everything is fine.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 22 Jan 2021 17:07:02 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 05:07:02PM +0300, Alexey Kondratov wrote:\n> I have updated patches accordingly and also simplified tablespaceOid checks\n> and assignment in the newly added SetRelTableSpace(). Result is attached as\n> two separate patches for an ease of review, but no objections to merge them\n> and apply at once if everything is fine.\n\n extern void SetRelationHasSubclass(Oid relationId, bool relhassubclass);\n+extern bool SetRelTableSpace(Oid reloid, Oid tablespaceOid);\nSeeing SetRelationHasSubclass(), wouldn't it be more consistent to use\nSetRelationTableSpace() as routine name?\n\nI think that we should document that the caller of this routine had\nbetter do a CCI once done to make the tablespace chage visible.\nExcept for those two nits, the patch needs an indentation run and some\nstyle tweaks but its logic looks fine. So I'll apply that first\npiece.\n \n+SELECT relname FROM pg_class\n+WHERE reltablespace=(SELECT oid FROM pg_tablespace WHERE spcname='regress_tblspace');\n[...]\n+-- first, check a no-op case\n+REINDEX (TABLESPACE pg_default) INDEX regress_tblspace_test_tbl_idx;\n+REINDEX (TABLESPACE pg_default) TABLE regress_tblspace_test_tbl;\nReindexing means that the relfilenodes are changed, so the tests\nshould track the original and new relfilenodes and compare them, no?\nIn short, this set of regression tests does not make sure that a\nREINDEX actually happens or not, and this may read as a reindex not\nhappening at all for those tests. For single units, these could be\nsaved in a variable and compared afterwards. create_index.sql does\nthat a bit with REINDEX SCHEMA for a set of relations.\n\n+INSERT INTO regress_tblspace_test_tbl (num1, num2, t)\n+ SELECT round(random()*100), random(), repeat('text', 1000000)\n+ FROM generate_series(1, 10) s(i);\nRepeating 1M times a text value is too costly for such a test. 
And as\neven for empty tables there is one page created for toast indexes,\nthere is no need for that?\n\nThis patch is introducing three new checks for system catalogs:\n- don't use tablespace for mapped relations.\n- don't use tablespace for system relations, except if\nallowSystemTableMods.\n- don't move non-shared relation to global tablespace.\nFor the non-concurrent case, all three checks are in reindex_index().\nFor the concurrent case, the two first checks are in\nReindexMultipleTables() and the third one is in\nReindexRelationConcurrently(). That's rather tricky to follow because\nCONCURRENTLY is not allowed on system relations. I am wondering if it\nwould be worth an extra comment effort, or if there is a way to\nconsolidate that better.\n\n typedef struct ReindexParams\n {\n bits32 options; /* bitmask of REINDEXOPT_* */\n+ Oid tablespaceOid; /* tablespace to rebuild index */\n } ReindexParams;\nFor DDL commands, InvalidOid on a tablespace means to usually use the\nsystem's default. However, for REINDEX, it means that the same\ntablespace as the origin would be used. I think that this had better\nbe properly documented in this structure.\n\n- indexRelation->rd_rel->reltablespace,\n+ OidIsValid(tablespaceOid) ?\n+ tablespaceOid : indexRelation->rd_rel->reltablespace,\nLet's remove this logic from index_concurrently_create_copy() and let\nthe caller directly decide the tablespace to use, without a dependency\non InvalidOid in the inner routine. A share update exclusive lock is\nalready hold on the old index when creating the concurrent copy, so\nthere won't be concurrent schema changes.\n\n+ * \"tablespaceOid\" is the new tablespace to use for this index.\n+ * If InvalidOid, use the current tablespace.\n[...]\n+ * See comments of reindex_relation() for details about \"tablespaceOid\".\nThose comments are wrong as the tablespace OID is not part of\nReindexParams.\n\nThere is no documentation about the behavior of toast tables with\nTABLESPACE. 
In this case, the patch mentions that the option will not\nwork directly on system catalogs unless allow_system_table_mods is\ntrue, but it forgets to tell that it does not move toast indexes,\nstill these are getting reindexed.\n\nThere are no regression tests stressing the tablespace ACL check for\nthe concurrent *and* the non-concurrent cases.\n\nThere is one ACL check in ReindexPartitions(), and a second one in\nreindex_index(), but it seems to me that you are missing the path for\nconcurrent indexes. It would be tempting to have the check directly\nin ExecReindex() to look after everything at the earliest stage\npossible, but we still need to worry about the multi-transaction case.\n--\nMichael",
"msg_date": "Mon, 25 Jan 2021 17:07:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
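The relfilenode-tracking check suggested in the review above can be sketched in psql roughly as follows. This is only an illustrative sketch, not the committed regression test: the table, index, and tablespace names are assumed, and it needs a live server with the `regress_tblspace` tablespace already created.

```sql
-- Capture the index's relfilenode before the move (names assumed).
SELECT relfilenode AS oldnode FROM pg_class
  WHERE relname = 'regress_tblspace_test_tbl_idx' \gset

REINDEX (TABLESPACE regress_tblspace) INDEX regress_tblspace_test_tbl_idx;

-- A real REINDEX must have assigned a new relfilenode, and the index
-- must now live in the target tablespace.
SELECT relfilenode <> :oldnode AS relfilenode_changed,
       reltablespace = (SELECT oid FROM pg_tablespace
                        WHERE spcname = 'regress_tblspace') AS moved
  FROM pg_class
  WHERE relname = 'regress_tblspace_test_tbl_idx';
```

With this pattern a no-op case (e.g. `TABLESPACE pg_default` on an index already in `pg_default`) can likewise be distinguished from a reindex that silently did nothing.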
{
"msg_contents": "On Mon, Jan 25, 2021 at 05:07:29PM +0900, Michael Paquier wrote:\n> On Fri, Jan 22, 2021 at 05:07:02PM +0300, Alexey Kondratov wrote:\n> > I have updated patches accordingly and also simplified tablespaceOid checks\n> > and assignment in the newly added SetRelTableSpace(). Result is attached as\n> > two separate patches for an ease of review, but no objections to merge them\n> > and apply at once if everything is fine.\n...\n> +SELECT relname FROM pg_class\n> +WHERE reltablespace=(SELECT oid FROM pg_tablespace WHERE spcname='regress_tblspace');\n> [...]\n> +-- first, check a no-op case\n> +REINDEX (TABLESPACE pg_default) INDEX regress_tblspace_test_tbl_idx;\n> +REINDEX (TABLESPACE pg_default) TABLE regress_tblspace_test_tbl;\n> Reindexing means that the relfilenodes are changed, so the tests\n> should track the original and new relfilenodes and compare them, no?\n> In short, this set of regression tests does not make sure that a\n> REINDEX actually happens or not, and this may read as a reindex not\n> happening at all for those tests. For single units, these could be\n> saved in a variable and compared afterwards. create_index.sql does\n> that a bit with REINDEX SCHEMA for a set of relations.\n\nYou might also check my \"CLUSTER partitioned\" patch for another way to do that.\n\nhttps://www.postgresql.org/message-id/20210118183459.GJ8560%40telsasoft.com\nhttps://www.postgresql.org/message-id/attachment/118126/v6-0002-Implement-CLUSTER-of-partitioned-table.patch\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 25 Jan 2021 06:58:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-25 11:07, Michael Paquier wrote:\n> On Fri, Jan 22, 2021 at 05:07:02PM +0300, Alexey Kondratov wrote:\n>> I have updated patches accordingly and also simplified tablespaceOid \n>> checks\n>> and assignment in the newly added SetRelTableSpace(). Result is \n>> attached as\n>> two separate patches for an ease of review, but no objections to merge \n>> them\n>> and apply at once if everything is fine.\n> \n> extern void SetRelationHasSubclass(Oid relationId, bool \n> relhassubclass);\n> +extern bool SetRelTableSpace(Oid reloid, Oid tablespaceOid);\n> Seeing SetRelationHasSubclass(), wouldn't it be more consistent to use\n> SetRelationTableSpace() as routine name?\n> \n> I think that we should document that the caller of this routine had\n> better do a CCI once done to make the tablespace chage visible.\n> Except for those two nits, the patch needs an indentation run and some\n> style tweaks but its logic looks fine. So I'll apply that first\n> piece.\n> \n\nI updated comment with CCI info, did pgindent run and renamed new \nfunction to SetRelationTableSpace(). New patch is attached.\n\n> +INSERT INTO regress_tblspace_test_tbl (num1, num2, t)\n> + SELECT round(random()*100), random(), repeat('text', 1000000)\n> + FROM generate_series(1, 10) s(i);\n> Repeating 1M times a text value is too costly for such a test. And as\n> even for empty tables there is one page created for toast indexes,\n> there is no need for that?\n> \n\nYes, TOAST relation is created anyway. I just wanted to put some data \ninto a TOAST index, so REINDEX did some meaningful work there, not only \na new relfilenode creation. However you are right and this query \nincreases tablespace tests execution by more than 2 times on \nmy machine. 
I think that it is not really required.\n\n> \n> This patch is introducing three new checks for system catalogs:\n> - don't use tablespace for mapped relations.\n> - don't use tablespace for system relations, except if\n> allowSystemTableMods.\n> - don't move non-shared relation to global tablespace.\n> For the non-concurrent case, all three checks are in reindex_index().\n> For the concurrent case, the two first checks are in\n> ReindexMultipleTables() and the third one is in\n> ReindexRelationConcurrently(). That's rather tricky to follow because\n> CONCURRENTLY is not allowed on system relations. I am wondering if it\n> would be worth an extra comment effort, or if there is a way to\n> consolidate that better.\n> \n\nYeah, all these checks we complicated from the beginning. I will try to \nfind a better place tomorrow or put more info into the comments at \nleast.\n\nI am also going to check/fix the remaining points regarding 002 \ntomorrow.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 25 Jan 2021 23:11:38 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 11:11:38PM +0300, Alexey Kondratov wrote:\n> I updated comment with CCI info, did pgindent run and renamed new function\n> to SetRelationTableSpace(). New patch is attached.\n>\n> [...]\n>\n> Yeah, all these checks we complicated from the beginning. I will try to find\n> a better place tomorrow or put more info into the comments at least.\n\nI was reviewing that, and I think that we can do a better\nconsolidation on several points that will also help the features\ndiscussed on this thread for VACUUM, CLUSTER and REINDEX.\n\nIf you look closely, ATExecSetTableSpace() uses the same logic as the\ncode modified here to check if a relation can be moved to a new\ntablespace, with extra checks for mapped relations,\nGLOBALTABLESPACE_OID or if attempting to manipulate a temp relation\nfrom another session. There are two differences though:\n- Custom actions are taken between the phase where we check if a\nrelation can be moved to a new tablespace, and the update of\npg_class.\n- ATExecSetTableSpace() needs to be able to set a given relation\nrelfilenode on top of reltablespace, the newly-created one.\n\nSo I think that the heart of the problem is made of two things here:\n- We should have one common routine for the existing code paths and\nthe new code paths able to check if a tablespace move can be done or\nnot. 
The case of a cluster, reindex or vacuum on a list of relations\nextracted from pg_class would still require a different handling\nas incorrect relations have to be skipped, but the case of individual\nrelations can reuse the refactoring pieces done here\n(see CheckRelationTableSpaceMove() in the attached).\n- We need to have a second routine able to update reltablespace and\noptionally relfilenode for a given relation's pg_class entry, once the\ncaller has made sure that CheckRelationTableSpaceMove() validates a\ntablespace move.\n\nPlease note that there was a bug in your previous patch 0002: shared\ndependencies need to be registered if reltablespace is updated of\ncourse, but also iff the relation has no physical storage. So\nchangeDependencyOnTablespace() requires a check based on\nRELKIND_HAS_STORAGE(), or REINDEX would have registered shared\ndependencies even for relations with storage, something we don't\nwant per the recent work done by Alvaro in ebfe2db.\n--\nMichael",
"msg_date": "Tue, 26 Jan 2021 15:58:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
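The two-routine split described above implies a caller pattern along these lines. This is a hedged outline against the signatures mentioned in the message, not the committed PostgreSQL code; it will not compile outside the backend, and locking and error handling are elided.

```c
/*
 * Sketch of a caller using the proposed pair of routines:
 * CheckRelationTableSpaceMove() validates the move, then the caller runs
 * its command-specific work, then SetRelationTableSpace() updates pg_class.
 */
static void
move_relation_tablespace(Oid reloid, Oid newTableSpaceId)
{
    Relation    rel = relation_open(reloid, AccessExclusiveLock);

    if (CheckRelationTableSpaceMove(rel, newTableSpaceId))
    {
        /* custom per-command actions (REINDEX, CLUSTER, VACUUM FULL) here */

        /* update pg_class.reltablespace; keep the current relfilenode */
        SetRelationTableSpace(rel, newTableSpaceId, InvalidOid);

        /* make the catalog change visible, per the CCI requirement above */
        CommandCounterIncrement();
    }

    relation_close(rel, NoLock);
}
```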
{
"msg_contents": "On 2021-01-26 09:58, Michael Paquier wrote:\n> On Mon, Jan 25, 2021 at 11:11:38PM +0300, Alexey Kondratov wrote:\n>> I updated comment with CCI info, did pgindent run and renamed new \n>> function\n>> to SetRelationTableSpace(). New patch is attached.\n>> \n>> [...]\n>> \n>> Yeah, all these checks we complicated from the beginning. I will try \n>> to find\n>> a better place tomorrow or put more info into the comments at least.\n> \n> I was reviewing that, and I think that we can do a better\n> consolidation on several points that will also help the features\n> discussed on this thread for VACUUM, CLUSTER and REINDEX.\n> \n> If you look closely, ATExecSetTableSpace() uses the same logic as the\n> code modified here to check if a relation can be moved to a new\n> tablespace, with extra checks for mapped relations,\n> GLOBALTABLESPACE_OID or if attempting to manipulate a temp relation\n> from another session. There are two differences though:\n> - Custom actions are taken between the phase where we check if a\n> relation can be moved to a new tablespace, and the update of\n> pg_class.\n> - ATExecSetTableSpace() needs to be able to set a given relation\n> relfilenode on top of reltablespace, the newly-created one.\n> \n> So I think that the heart of the problem is made of two things here:\n> - We should have one common routine for the existing code paths and\n> the new code paths able to check if a tablespace move can be done or\n> not. 
The case of a cluster, reindex or vacuum on a list of relations\n> extracted from pg_class would still require a different handling\n> as incorrect relations have to be skipped, but the case of individual\n> relations can reuse the refactoring pieces done here\n> (see CheckRelationTableSpaceMove() in the attached).\n> - We need to have a second routine able to update reltablespace and\n> optionally relfilenode for a given relation's pg_class entry, once the\n> caller has made sure that CheckRelationTableSpaceMove() validates a\n> tablespace move.\n> \n\nI think that I got your idea. One comment:\n\n+bool\n+CheckRelationTableSpaceMove(Relation rel, Oid newTableSpaceId)\n+{\n+\tOid\t\t\toldTableSpaceId;\n+\tOid\t\t\treloid = RelationGetRelid(rel);\n+\n+\t/*\n+\t * No work if no change in tablespace. Note that MyDatabaseTableSpace\n+\t * is stored as 0.\n+\t */\n+\toldTableSpaceId = rel->rd_rel->reltablespace;\n+\tif (newTableSpaceId == oldTableSpaceId ||\n+\t\t(newTableSpaceId == MyDatabaseTableSpace && oldTableSpaceId == 0))\n+\t{\n+\t\tInvokeObjectPostAlterHook(RelationRelationId, reloid, 0);\n+\t\treturn false;\n+\t}\n\nCheckRelationTableSpaceMove() does not feel like a right place for \ninvoking post alter hooks. It is intended only to check for tablespace \nchange possibility. Anyway, ATExecSetTableSpace() and \nATExecSetTableSpaceNoStorage() already do that in the no-op case.\n\n> Please note that was a bug in your previous patch 0002: shared\n> dependencies need to be registered if reltablespace is updated of\n> course, but also iff the relation has no physical storage. So\n> changeDependencyOnTablespace() requires a check based on\n> RELKIND_HAS_STORAGE(), or REINDEX would have registered shared\n> dependencies even for relations with storage, something we don't\n> want per the recent work done by Alvaro in ebfe2db.\n> \n\nYes, thanks.\n\nI have removed this InvokeObjectPostAlterHook() from your 0001 and made \n0002 to work on top of it. 
I think that now it should look closer to \nwhat you described above.\n\nIn the new 0002 I moved ACL check to the upper level, i.e. \nExecReindex(), and removed expensive text generation in test. Not \ntouched yet some of your previously raised concerns. Also, you made \nSetRelationTableSpace() to accept Relation instead of Oid, so now we \nhave to open/close indexes in the ReindexPartitions(), I am not sure \nthat I use proper locking there, but it works.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 27 Jan 2021 01:00:50 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 01:00:50AM +0300, Alexey Kondratov wrote:\n> CheckRelationTableSpaceMove() does not feel like a right place for invoking\n> post alter hooks. It is intended only to check for tablespace change\n> possibility. Anyway, ATExecSetTableSpace() and\n> ATExecSetTableSpaceNoStorage() already do that in the no-op case.\n>\n> I have removed this InvokeObjectPostAlterHook() from your 0001 and made 0002\n> to work on top of it. I think that now it should look closer to what you\n> described above.\n\nYeah, I was a bit hesitating about this part as those new routines\nwould not be used by ALTER-related commands in the next steps. Your\npatch got that midway in-between though, by adding the hook to\nSetRelationTableSpace but not to CheckRelationTableSpaceMove(). For\nnow, I have kept the hook out of those new routines because using an\nALTER hook for a utility command is inconsistent. Perhaps we'd want a\nseparate hook type dedicated to utility commands in objectaccess.c.\n\nI have double-checked the code, and applied it after a few tweaks.\n\n> In the new 0002 I moved ACL check to the upper level, i.e. ExecReindex(),\n> and removed expensive text generation in test. Not touched yet some of your\n> previously raised concerns. Also, you made SetRelationTableSpace() to accept\n> Relation instead of Oid, so now we have to open/close indexes in the\n> ReindexPartitions(), I am not sure that I use proper locking there, but it\n> works.\n\nPassing down Relation to the new routines makes the most sense to me\nbecause we force the callers to think about the level of locking\nthat's required when doing any tablespace moves.\n\n+ Relation iRel = index_open(partoid, ShareLock);\n+\n+ if (CheckRelationTableSpaceMove(iRel, params->tablespaceOid))\n+ SetRelationTableSpace(iRel,\n+ params->tablespaceOid,\n+ InvalidOid);\nSpeaking of which, this breaks the locking assumptions of\nSetRelationTableSpace(). 
I feel that we should think harder about\nthis part for partitioned indexes and tables because this looks rather\nunsafe in terms of locking assumptions with partition trees. If we\ncannot come up with a safe solution, I would be fine with disallowing\nTABLESPACE in this case, as a first step. Not all problems have to be\nsolved at once, and even without this part the feature is still\nuseful.\n\n+ /* It's not a shared catalog, so refuse to move it to shared tablespace */\n+ if (params->tablespaceOid == GLOBALTABLESPACE_OID)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot move non-shared relation to tablespace \\\"%s\\\"\",\n+ get_tablespace_name(params->tablespaceOid))));\nWhy is that needed if CheckRelationTableSpaceMove() is used?\n\n bits32 options; /* bitmask of REINDEXOPT_* */\n+ Oid tablespaceOid; /* tablespace to rebuild index */\n} ReindexParams;\nMentioned upthread, but here I think that we should tell that\nInvalidOid => keep the existing tablespace.\n--\nMichael",
"msg_date": "Wed, 27 Jan 2021 12:14:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
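For context, the partitioned case being debated above looks like this at the SQL level, as proposed by the patch under discussion (object and tablespace names are illustrative):

```sql
CREATE TABLE parted (id int) PARTITION BY RANGE (id);
CREATE TABLE parted_0 PARTITION OF parted FOR VALUES FROM (0) TO (100);
CREATE INDEX parted_idx ON parted (id);

-- Leaf partition indexes are rebuilt in separate transactions; the
-- partitioned index itself has no storage, so only its pg_class entry
-- (and hence the tablespace of future partitions) needs updating --
-- which is exactly where the locking question arises.
REINDEX (TABLESPACE regress_tblspace) TABLE parted;
```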
{
"msg_contents": "On 2021-01-27 06:14, Michael Paquier wrote:\n> On Wed, Jan 27, 2021 at 01:00:50AM +0300, Alexey Kondratov wrote:\n>> In the new 0002 I moved ACL check to the upper level, i.e. \n>> ExecReindex(),\n>> and removed expensive text generation in test. Not touched yet some of \n>> your\n>> previously raised concerns. Also, you made SetRelationTableSpace() to \n>> accept\n>> Relation instead of Oid, so now we have to open/close indexes in the\n>> ReindexPartitions(), I am not sure that I use proper locking there, \n>> but it\n>> works.\n> \n> Passing down Relation to the new routines makes the most sense to me\n> because we force the callers to think about the level of locking\n> that's required when doing any tablespace moves.\n> \n> + Relation iRel = index_open(partoid, ShareLock);\n> +\n> + if (CheckRelationTableSpaceMove(iRel, \n> params->tablespaceOid))\n> + SetRelationTableSpace(iRel,\n> + params->tablespaceOid,\n> + InvalidOid);\n> Speaking of which, this breaks the locking assumptions of\n> SetRelationTableSpace(). I feel that we should think harder about\n> this part for partitioned indexes and tables because this looks rather\n> unsafe in terms of locking assumptions with partition trees. If we\n> cannot come up with a safe solution, I would be fine with disallowing\n> TABLESPACE in this case, as a first step. Not all problems have to be\n> solved at once, and even without this part the feature is still\n> useful.\n> \n\nI have read more about lock levels and ShareLock should prevent any kind \nof physical modification of indexes. 
We already hold ShareLock doing \nfind_all_inheritors(), which is higher than ShareUpdateExclusiveLock, so \nusing ShareLock seems to be safe here, but I will look on it closer.\n\n> \n> + /* It's not a shared catalog, so refuse to move it to shared \n> tablespace */\n> + if (params->tablespaceOid == GLOBALTABLESPACE_OID)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot move non-shared relation to tablespace \n> \\\"%s\\\"\",\n> + get_tablespace_name(params->tablespaceOid))));\n> Why is that needed if CheckRelationTableSpaceMove() is used?\n> \n\nThis is from ReindexRelationConcurrently() where we do not use \nCheckRelationTableSpaceMove(). For me it makes sense to add only this \nGLOBALTABLESPACE_OID check there, since before we already check for \nsystem catalogs and after for temp relations, so adding \nCheckRelationTableSpaceMove() will be a double-check.\n\n> \n> - indexRelation->rd_rel->reltablespace,\n> + OidIsValid(tablespaceOid) ?\n> + tablespaceOid :\n> indexRelation->rd_rel->reltablespace,\n> Let's remove this logic from index_concurrently_create_copy() and let\n> the caller directly decide the tablespace to use, without a dependency\n> on InvalidOid in the inner routine. A share update exclusive lock is\n> already hold on the old index when creating the concurrent copy, so\n> there won't be concurrent schema changes.\n> \n\nChanged.\n\nAlso added tests for ACL checks, relfilenode changes. Added ACL recheck \nfor multi-transactional case. Added info about TOAST index reindexing. \nChanged some comments.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 28 Jan 2021 00:19:06 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Thanks for updating the patch. I have just a couple comments on the new (and\nold) language.\n\nOn Thu, Jan 28, 2021 at 12:19:06AM +0300, Alexey Kondratov wrote:\n> Also added tests for ACL checks, relfilenode changes. Added ACL recheck for\n> multi-transactional case. Added info about TOAST index reindexing. Changed\n> some comments.\n\n> + Specifies that indexes will be rebuilt on a new tablespace.\n> + Cannot be used with \"mapped\" and system (unless <varname>allow_system_table_mods</varname>\n\nsay mapped *or* system relations\nOr maybe:\nmapped or (unless >allow_system_table_mods<) system relations.\n\n> + is set to <literal>TRUE</literal>) relations. If <literal>SCHEMA</literal>,\n> + <literal>DATABASE</literal> or <literal>SYSTEM</literal> are specified,\n> + then all \"mapped\" and system relations will be skipped and a single\n> + <literal>WARNING</literal> will be generated. Indexes on TOAST tables\n> + are reindexed, but not moved the new tablespace.\n\nmoved *to* the new tablespace.\nI don't know if that needs to be said at all. We talked about it a lot to\narrive at the current behavior, but I think that's only due to the difficulty\nof correcting the initial mistake.\n\n> +\t/*\n> +\t * Set the new tablespace for the relation. Do that only in the\n> +\t * case where the reindex caller wishes to enforce a new tablespace.\n\nI'd say just \"/* Set new tablespace, if requested */\nYou wrote something similar in an earlier revision of your refactoring patch.\n\n> +\t\t * Mark the relation as ready to be dropped at transaction commit,\n> +\t\t * before making visible the new tablespace change so as this won't\n> +\t\t * miss things.\n\nThis comment is vague. I think Michael first wrote this comment about a year\nago. Does it mean \"so the new tablespace won't be missed\" ? Missed by what ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 27 Jan 2021 15:35:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-Jan-28, Alexey Kondratov wrote:\n\n> I have read more about lock levels and ShareLock should prevent any kind of\n> physical modification of indexes. We already hold ShareLock doing\n> find_all_inheritors(), which is higher than ShareUpdateExclusiveLock, so\n> using ShareLock seems to be safe here, but I will look on it closer.\n\nYou can look at lock.c where LockConflicts[] is; that would tell you\nthat ShareLock indeed conflicts with ShareUpdateExclusiveLock ... but it\ndoes not conflict with itself! So it would be possible to have more\nthan one process doing this thing at the same time, which surely makes\nno sense.\n\nI didn't look at the patch closely enough to understand why you're\ntrying to do something like CLUSTER, VACUUM FULL or REINDEX without\nholding full AccessExclusiveLock on the relation. But do keep in mind\nthat once you hold a lock on a relation, trying to grab a weaker lock\nafterwards is pretty pointless.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"E pur si muove\" (Galileo Galilei)\n\n\n",
"msg_date": "Wed, 27 Jan 2021 18:36:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
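The conflict-table behavior Alvaro points out can be observed directly with three concurrent psql sessions (a sketch; `t` is any ordinary table):

```sql
-- Session 1:
BEGIN;
LOCK TABLE t IN SHARE MODE;

-- Session 2: returns immediately, because SHARE does not conflict
-- with itself -- two such lockers can proceed at the same time.
BEGIN;
LOCK TABLE t IN SHARE MODE;

-- Session 3: blocks until sessions 1 and 2 finish, because
-- SHARE UPDATE EXCLUSIVE conflicts with SHARE.
BEGIN;
LOCK TABLE t IN SHARE UPDATE EXCLUSIVE MODE;
```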
{
"msg_contents": "On 2021-01-28 00:36, Alvaro Herrera wrote:\n> On 2021-Jan-28, Alexey Kondratov wrote:\n> \n>> I have read more about lock levels and ShareLock should prevent any \n>> kind of\n>> physical modification of indexes. We already hold ShareLock doing\n>> find_all_inheritors(), which is higher than ShareUpdateExclusiveLock, \n>> so\n>> using ShareLock seems to be safe here, but I will look on it closer.\n> \n> You can look at lock.c where LockConflicts[] is; that would tell you\n> that ShareLock indeed conflicts with ShareUpdateExclusiveLock ... but \n> it\n> does not conflict with itself! So it would be possible to have more\n> than one process doing this thing at the same time, which surely makes\n> no sense.\n> \n\nThanks for the explanation and pointing me to the LockConflicts[]. This \nis a good reference.\n\n> \n> I didn't look at the patch closely enough to understand why you're\n> trying to do something like CLUSTER, VACUUM FULL or REINDEX without\n> holding full AccessExclusiveLock on the relation. But do keep in mind\n> that once you hold a lock on a relation, trying to grab a weaker lock\n> afterwards is pretty pointless.\n> \n\nNo, you are right, we are doing REINDEX with AccessExclusiveLock as it \nwas before. This part is a more specific one. It only applies to \npartitioned indexes, which do not hold any data, so we do not reindex \nthem directly, only their leafs. However, if we are doing a TABLESPACE \nchange, we have to record it in their pg_class entry, so all future leaf \npartitions were created in the proper tablespace.\n\nThat way, we open partitioned index relation only for a reference, i.e. \nread-only, but modify pg_class entry under a proper lock \n(RowExclusiveLock). That's why I thought that ShareLock will be enough.\n\nIIUC, 'ALTER TABLE ... SET TABLESPACE' uses AccessExclusiveLock even for \nrelations with no storage, since AlterTableGetLockLevel() chooses it if \nAT_SetTableSpace is met. 
This is very similar to our case, so probably \nwe should do the same?\n\nActually it is not completely clear to me why ShareUpdateExclusiveLock \nis sufficient for the newly added SetRelationTableSpace() as Michael wrote \nin the comment.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 28 Jan 2021 14:42:40 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-28 14:42, Alexey Kondratov wrote:\n> On 2021-01-28 00:36, Alvaro Herrera wrote:\n\n>> I didn't look at the patch closely enough to understand why you're\n>> trying to do something like CLUSTER, VACUUM FULL or REINDEX without\n>> holding full AccessExclusiveLock on the relation. But do keep in mind\n>> that once you hold a lock on a relation, trying to grab a weaker lock\n>> afterwards is pretty pointless.\n>> \n> \n> No, you are right, we are doing REINDEX with AccessExclusiveLock as it\n> was before. This part is a more specific one. It only applies to\n> partitioned indexes, which do not hold any data, so we do not reindex\n> them directly, only their leafs. However, if we are doing a TABLESPACE\n> change, we have to record it in their pg_class entry, so all future\n> leaf partitions were created in the proper tablespace.\n> \n> That way, we open partitioned index relation only for a reference,\n> i.e. read-only, but modify pg_class entry under a proper lock\n> (RowExclusiveLock). That's why I thought that ShareLock will be\n> enough.\n> \n> IIUC, 'ALTER TABLE ... SET TABLESPACE' uses AccessExclusiveLock even\n> for relations with no storage, since AlterTableGetLockLevel() chooses\n> it if AT_SetTableSpace is met. This is very similar to our case, so\n> probably we should do the same?\n> \n> Actually it is not completely clear for me why\n> ShareUpdateExclusiveLock is sufficient for newly added\n> SetRelationTableSpace() as Michael wrote in the comment.\n> \n\nChanged patch to use AccessExclusiveLock in this part for now. This is \nwhat 'ALTER TABLE/INDEX ... SET TABLESPACE' and 'REINDEX' usually do. \nAnyway, all real leaf partitions are processed in the independent \ntransactions later.\n\nAlso changed some doc/comment parts Justin pointed me to.\n\n>> + then all \"mapped\" and system relations will be skipped and a \n>> single\n>> + <literal>WARNING</literal> will be generated. 
Indexes on TOAST \n>> tables\n>> + are reindexed, but not moved the new tablespace.\n> \n> moved *to* the new tablespace.\n> \n\nFixed.\n\n> \n> I don't know if that needs to be said at all. We talked about it a lot \n> to\n> arrive at the current behavior, but I think that's only due to the \n> difficulty\n> of correcting the initial mistake.\n> \n\nI do not think that it will be a big deal to move indexes on TOAST \ntables as well. I just thought that since 'ALTER TABLE/INDEX ... SET \nTABLESPACE' only moves them together with host table, we also should not \ndo that. Yet, I am ready to change this logic if requested.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 29 Jan 2021 20:56:47 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
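The feature being discussed above can be sketched in SQL terms. This is a minimal, hypothetical illustration of the patch under review (the `REINDEX (TABLESPACE ...)` clause only exists with the patch applied; the tablespace path and all relation names are made up for the example):

```sql
-- Hypothetical setup; the LOCATION path is illustrative only.
CREATE TABLESPACE regress_tblspace LOCATION '/path/to/tblspace';

CREATE TABLE parted (id int) PARTITION BY RANGE (id);
CREATE TABLE parted_1 PARTITION OF parted FOR VALUES FROM (0) TO (100);
CREATE INDEX parted_idx ON parted (id);

-- Rebuild the indexes, creating the new storage directly in the target
-- tablespace; each leaf partition is processed in its own transaction.
REINDEX (TABLESPACE regress_tblspace) TABLE parted;

-- Check where the rebuilt indexes ended up.
SELECT c.relname, ts.spcname
FROM pg_class c
LEFT JOIN pg_tablespace ts ON ts.oid = c.reltablespace
WHERE c.relname LIKE 'parted%';
```

The point of contention in the messages that follow is what happens to the partitioned index itself (here `parted_idx`), which has no storage of its own but still carries a pg_class.reltablespace entry.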
{
"msg_contents": "On Fri, Jan 29, 2021 at 08:56:47PM +0300, Alexey Kondratov wrote:\n> On 2021-01-28 14:42, Alexey Kondratov wrote:\n>> No, you are right, we are doing REINDEX with AccessExclusiveLock as it\n>> was before. This part is a more specific one. It only applies to\n>> partitioned indexes, which do not hold any data, so we do not reindex\n>> them directly, only their leafs. However, if we are doing a TABLESPACE\n>> change, we have to record it in their pg_class entry, so all future\n>> leaf partitions were created in the proper tablespace.\n>> \n>> That way, we open partitioned index relation only for a reference,\n>> i.e. read-only, but modify pg_class entry under a proper lock\n>> (RowExclusiveLock). That's why I thought that ShareLock will be\n>> enough.\n>> \n>> IIUC, 'ALTER TABLE ... SET TABLESPACE' uses AccessExclusiveLock even\n>> for relations with no storage, since AlterTableGetLockLevel() chooses\n>> it if AT_SetTableSpace is met. This is very similar to our case, so\n>> probably we should do the same?\n>> \n>> Actually it is not completely clear for me why\n>> ShareUpdateExclusiveLock is sufficient for newly added\n>> SetRelationTableSpace() as Michael wrote in the comment.\n\nNay, it was not fine. That's something Alvaro has mentioned, leading\nto 2484329. This also means that the main patch of this thread should\nrefresh the comments at the top of CheckRelationTableSpaceMove() and\nSetRelationTableSpace() to mention that this is used by REINDEX\nCONCURRENTLY with a lower lock.\n\n> Changed patch to use AccessExclusiveLock in this part for now. This is what\n> 'ALTER TABLE/INDEX ... SET TABLESPACE' and 'REINDEX' usually do. Anyway, all\n> real leaf partitions are processed in the independent transactions later.\n\n+ if (partkind == RELKIND_PARTITIONED_INDEX)\n+ {\n+ Relation iRel = index_open(partoid, AccessExclusiveLock);\n+\n+ if (CheckRelationTableSpaceMove(iRel, params->tablespaceOid))\n+ SetRelationTableSpace(iRel,\n+ params->tablespaceOid,\n+ InvalidOid);\n+ index_close(iRel, NoLock);\nAre you sure that this does not represent a risk of deadlocks as EAL\nis not taken consistently across all the partitions? A second issue\nhere is that this breaks the assumption of REINDEX CONCURRENTLY kicked\non partitioned relations that should use ShareUpdateExclusiveLock for\nall its steps. This would make the first transaction invasive for the\nuser, but we don't want that.\n\nThis makes me really wonder if we would not be better to restrict this\noperation for partitioned relation as part of REINDEX as a first step.\nAnother thing, mentioned upthread, is that we could do this part of\nthe switch at the last transaction, or we could silently *not* do the\nswitch for partitioned indexes in the flow of REINDEX, letting users\nhandle that with an extra ALTER TABLE SET TABLESPACE once REINDEX has\nfinished on all the partitions, cascading the command only on the\npartitioned relation of a tree. It may be interesting to look as well\nat if we could lower the lock used for partitioned relations with\nALTER TABLE SET TABLESPACE from AEL to SUEL, choosing AEL only if at\nleast one partition with storage is involved in the command,\nCheckRelationTableSpaceMove() discarding anything that has no need to\nchange.\n--\nMichael",
"msg_date": "Sat, 30 Jan 2021 11:23:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-01-30 05:23, Michael Paquier wrote:\n> On Fri, Jan 29, 2021 at 08:56:47PM +0300, Alexey Kondratov wrote:\n>> On 2021-01-28 14:42, Alexey Kondratov wrote:\n>>> No, you are right, we are doing REINDEX with AccessExclusiveLock as \n>>> it\n>>> was before. This part is a more specific one. It only applies to\n>>> partitioned indexes, which do not hold any data, so we do not reindex\n>>> them directly, only their leafs. However, if we are doing a \n>>> TABLESPACE\n>>> change, we have to record it in their pg_class entry, so all future\n>>> leaf partitions were created in the proper tablespace.\n>>> \n>>> That way, we open partitioned index relation only for a reference,\n>>> i.e. read-only, but modify pg_class entry under a proper lock\n>>> (RowExclusiveLock). That's why I thought that ShareLock will be\n>>> enough.\n>>> \n>>> IIUC, 'ALTER TABLE ... SET TABLESPACE' uses AccessExclusiveLock even\n>>> for relations with no storage, since AlterTableGetLockLevel() chooses\n>>> it if AT_SetTableSpace is met. This is very similar to our case, so\n>>> probably we should do the same?\n>>> \n>>> Actually it is not completely clear for me why\n>>> ShareUpdateExclusiveLock is sufficient for newly added\n>>> SetRelationTableSpace() as Michael wrote in the comment.\n> \n> Nay, it was not fine. That's something Alvaro has mentioned, leading\n> to 2484329. This also means that the main patch of this thread should\n> refresh the comments at the top of CheckRelationTableSpaceMove() and\n> SetRelationTableSpace() to mention that this is used by REINDEX\n> CONCURRENTLY with a lower lock.\n> \n\nHm, IIUC, REINDEX CONCURRENTLY doesn't use either of them. It directly \nuses index_create() with a proper tablespaceOid instead of \nSetRelationTableSpace(). And its checks structure is more restrictive \neven without tablespace change, so it doesn't use \nCheckRelationTableSpaceMove().\n\n>> Changed patch to use AccessExclusiveLock in this part for now. This is \n>> what\n>> 'ALTER TABLE/INDEX ... SET TABLESPACE' and 'REINDEX' usually do. \n>> Anyway, all\n>> real leaf partitions are processed in the independent transactions \n>> later.\n> \n> + if (partkind == RELKIND_PARTITIONED_INDEX)\n> + {\n> + Relation iRel = index_open(partoid, AccessExclusiveLock);\n> +\n> + if (CheckRelationTableSpaceMove(iRel, \n> params->tablespaceOid))\n> + SetRelationTableSpace(iRel,\n> + params->tablespaceOid,\n> + InvalidOid);\n> + index_close(iRel, NoLock);\n> Are you sure that this does not represent a risk of deadlocks as EAL\n> is not taken consistently across all the partitions? A second issue\n> here is that this breaks the assumption of REINDEX CONCURRENTLY kicked\n> on partitioned relations that should use ShareUpdateExclusiveLock for\n> all its steps. This would make the first transaction invasive for the\n> user, but we don't want that.\n> \n> This makes me really wonder if we would not be better to restrict this\n> operation for partitioned relation as part of REINDEX as a first step.\n> Another thing, mentioned upthread, is that we could do this part of\n> the switch at the last transaction, or we could silently *not* do the\n> switch for partitioned indexes in the flow of REINDEX, letting users\n> handle that with an extra ALTER TABLE SET TABLESPACE once REINDEX has\n> finished on all the partitions, cascading the command only on the\n> partitioned relation of a tree. It may be interesting to look as well\n> at if we could lower the lock used for partitioned relations with\n> ALTER TABLE SET TABLESPACE from AEL to SUEL, choosing AEL only if at\n> least one partition with storage is involved in the command,\n> CheckRelationTableSpaceMove() discarding anything that has no need to\n> change.\n> \n\nI am not sure right now, so I split previous patch into two parts:\n\n0001: Adds TABLESPACE into REINDEX with tests, doc and all the stuff we \ndid before with the only exception that it doesn't move partitioned \nindexes into the new tablespace.\n\nBasically, it implements this option \"we could silently *not* do the \nswitch for partitioned indexes in the flow of REINDEX, letting users \nhandle that with an extra ALTER TABLE SET TABLESPACE once REINDEX has \nfinished\". It probably makes sense, since we are doing tablespace change \naltogether with index relation rewrite and don't touch relations without \nstorage. Doing ALTER INDEX ... SET TABLESPACE will be almost cost-less \non them, since they do not hold any data.\n\n0002: Implements the remaining part where pg_class entry is also changed \nfor partitioned indexes. I think that we should think more about it, \nmaybe it is not so dangerous and proper locking strategy could be \nachieved.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 01 Feb 2021 18:28:57 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
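The manual follow-up step that the 0001 approach above delegates to the user might look like the following. This is a hedged sketch (relation and tablespace names are hypothetical); per the discussion, the partitioned index has no storage, so the command only updates its pg_class reference:

```sql
-- After REINDEX (TABLESPACE ...) has moved the leaf indexes, update the
-- catalog entry of the partitioned index itself so that any partitions
-- created or attached later inherit the new tablespace. ONLY keeps the
-- command from cascading to the leaves, which were already moved.
-- Names are illustrative.
ALTER TABLE ONLY parted_idx SET TABLESPACE regress_tblspace;
```

Since a partitioned index has no relfilenode, this is nearly free at the storage level; the cost being debated in the thread is the AccessExclusiveLock the command takes.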
{
"msg_contents": "On Mon, Feb 01, 2021 at 06:28:57PM +0300, Alexey Kondratov wrote:\n> On 2021-01-30 05:23, Michael Paquier wrote:\n> > This makes me really wonder if we would not be better to restrict this\n> > operation for partitioned relation as part of REINDEX as a first step.\n> > Another thing, mentioned upthread, is that we could do this part of\n> > the switch at the last transaction, or we could silently *not* do the\n> > switch for partitioned indexes in the flow of REINDEX, letting users\n> > handle that with an extra ALTER TABLE SET TABLESPACE once REINDEX has\n> > finished on all the partitions, cascading the command only on the\n> > partitioned relation of a tree. \n\nI suggest that it'd be unintuitive to skip partitioned rels, silently\nrequiring a user to also run \"ALTER .. SET TABLESPACE\". \n\nBut I think it'd be okay if REINDEX(TABLESPACE) didn't support partitioned\ntables/indexes at first. I think it'd be better as an ERROR.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 1 Feb 2021 09:47:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Mon, Feb 01, 2021 at 06:28:57PM +0300, Alexey Kondratov wrote:\n> Hm, IIUC, REINDEX CONCURRENTLY doesn't use either of them. It directly uses\n> index_create() with a proper tablespaceOid instead of\n> SetRelationTableSpace(). And its checks structure is more restrictive even\n> without tablespace change, so it doesn't use CheckRelationTableSpaceMove().\n\nSure. I have not checked the patch in details, but even with that it\nwould be much safer to me if we apply the same sanity checks\neverywhere. That's less potential holes to worry about.\n\n> Basically, it implements this option \"we could silently *not* do the switch\n> for partitioned indexes in the flow of REINDEX, letting users handle that\n> with an extra ALTER TABLE SET TABLESPACE once REINDEX has finished\". It\n> probably makes sense, since we are doing tablespace change altogether with\n> index relation rewrite and don't touch relations without storage. Doing\n> ALTER INDEX ... SET TABLESPACE will be almost cost-less on them, since they\n> do not hold any data.\n\nYeah, they'd still need an AEL for a short time on the partitioned\nbits with what's on HEAD. I'll keep in mind to look at the\npossibility to downgrade this lock if cascading only on partitioned\ntables. The main take is that AlterTableGetLockLevel() cannot select\na lock type based on the table meta-data. Tricky problem it is if\ntaken as a whole, but I guess that we should be able to tweak ALTER\nTABLE ONLY on a partitioned table/index pretty easily (inh = false).\n--\nMichael",
"msg_date": "Tue, 2 Feb 2021 10:32:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Tue, Feb 02, 2021 at 10:32:19AM +0900, Michael Paquier wrote:\n> On Mon, Feb 01, 2021 at 06:28:57PM +0300, Alexey Kondratov wrote:\n> > Hm, IIUC, REINDEX CONCURRENTLY doesn't use either of them. It directly uses\n> > index_create() with a proper tablespaceOid instead of\n> > SetRelationTableSpace(). And its checks structure is more restrictive even\n> > without tablespace change, so it doesn't use CheckRelationTableSpaceMove().\n> \n> Sure. I have not checked the patch in details, but even with that it\n> would be much safer to me if we apply the same sanity checks\n> everywhere. That's less potential holes to worry about.\n\nThanks Alexey for the new patch. I have been looking at the main\npatch in details.\n\n /*\n- * Don't allow reindex on temp tables of other backends ... their local\n- * buffer manager is not going to cope.\n+ * We don't support moving system relations into different tablespaces\n+ * unless allow_system_table_mods=1.\n */\nIf you remove the check on RELATION_IS_OTHER_TEMP() in\nreindex_index(), you would allow the reindex of a temp relation owned\nby a different session if its tablespace is not changed, so this\ncannot be removed.\n\n+ !allowSystemTableMods && IsSystemRelation(iRel))\n ereport(ERROR,\n- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n- errmsg(\"cannot reindex temporary tables of other sessions\")));\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"permission denied: \\\"%s\\\" is a system catalog\",\n+ RelationGetRelationName(iRel))));\nIndeed, a system relation with a relfilenode should be allowed to move\nunder allow_system_table_mods. I think that we had better move this\ncheck into CheckRelationTableSpaceMove() instead of reindex_index() to \ncentralize the logic. ALTER TABLE does this business in\nRangeVarCallbackForAlterRelation(), but our code path opening the\nrelation is different for the non-concurrent case.\n\n+ if (OidIsValid(params->tablespaceOid) &&\n+ IsSystemClass(relid, classtuple))\n+ {\n+ if (!allowSystemTableMods)\n+ {\n+ /* Skip all system relations, if not allowSystemTableMods *\nI don't see the need for having two warnings here to say the same\nthing if a relation is mapped or not mapped, so let's keep it simple.\n\n+REINDEX (TABLESPACE regress_tblspace) SYSTEM CONCURRENTLY postgres; -- fail\n+ERROR: cannot reindex system catalogs concurrently\n[...]\n+REINDEX (TABLESPACE regress_tblspace) DATABASE regression; -- ok with warning\n+WARNING: cannot change tablespace of indexes on system relations, skipping all\n+REINDEX (TABLESPACE pg_default) DATABASE regression; -- ok with warning\n+WARNING: cannot change tablespace of indexes on system relations, skipping all\nThose tests are costly by design, so let's drop them. They have been\nuseful to check the patch, but if tests are changed with objects\nremaining around this would cost a lot of resources.\n\n+ /* It's not a shared catalog, so refuse to move it to shared tablespace */\n+ if (params->tablespaceOid == GLOBALTABLESPACE_OID)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot move non-shared relation to tablespace \\\"%s\\\"\",\n+ get_tablespace_name(params->tablespaceOid))));\nThere is no test coverage for this case with REINDEX CONCURRENTLY, and\nthat's easy enough to stress. So I have added one.\n\nI have found that the test suite was rather messy in its\norganization. Table creations were done first with a set of tests not\nreally ordered, so that was really hard to follow. This has also led\nto a set of tests that were duplicated, while other tests have been\nmissed, mainly some cross checks for the concurrent and non-concurrent\nbehaviors. I have reordered the whole so as tests on catalogs, normal\ntables and partitions are done separately with relations created and\ndropped for each set. Partitions use a global check for tablespaces\nand relfilenodes after one concurrent reindex (didn't see the point in\ndoubling with the non-concurrent case as the same code path to select\nthe relations from the partition tree is taken). An ACL test has been\nadded at the end.\n\nThe case of partitioned indexes was kind of interesting and I thought\nabout that a couple of days, and I took the decision to ignore\nrelations that have no storage as you did, documenting that ALTER\nTABLE can be used to update the references of the partitioned\nrelations. The command is still useful with this behavior, and the\ntests I have added track that.\n\nFinally, I have reworked the docs, separating the limitations related\nto system catalogs and partitioned relations, to be more consistent\nwith the notes at the end of the page.\n--\nMichael",
"msg_date": "Wed, 3 Feb 2021 15:37:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Feb 03, 2021 at 03:37:39PM +0900, Michael Paquier wrote:\n> index 627b36300c..4ee3951ca0 100644\n> --- a/doc/src/sgml/ref/reindex.sgml\n> +++ b/doc/src/sgml/ref/reindex.sgml\n> @@ -293,8 +311,30 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n> respectively. Each partition of the specified partitioned relation is\n> reindexed in a separate transaction. Those commands cannot be used inside\n> a transaction block when working on a partitioned table or index.\n> + If a <command>REINDEX</command> command fails when run on a partitioned\n> + relation, and <literal>TABLESPACE</literal> was specified, then it may not\n> + have moved all indexes to the new tablespace. Re-running the command\n> + will rebuild again all the partitions and move previously-unprocessed\n\nremove \"again\"\n\n> + indexes to the new tablespace.\n> + </para>\n> + \n> + <para>\n> + When using the <literal>TABLESPACE</literal> clause with\n> + <command>REINDEX</command> on a partitioned index or table, only the\n> + tablespace references of the partitions are updated. As partitioned indexes\n\nI think you should say \"of the LEAF partitions ..\". The intermediate,\npartitioned tables are also \"partitions\" (partitioned partitions if you like).\n\n> + are not updated, it is recommended to separately use\n> + <command>ALTER TABLE ONLY</command> on them to achieve that.\n\nMaybe say: \"..to set the default tablespace of any new partitions created in\nthe future\".\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 3 Feb 2021 00:53:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On 2021-02-03 09:37, Michael Paquier wrote:\n> On Tue, Feb 02, 2021 at 10:32:19AM +0900, Michael Paquier wrote:\n>> On Mon, Feb 01, 2021 at 06:28:57PM +0300, Alexey Kondratov wrote:\n>> > Hm, IIUC, REINDEX CONCURRENTLY doesn't use either of them. It directly uses\n>> > index_create() with a proper tablespaceOid instead of\n>> > SetRelationTableSpace(). And its checks structure is more restrictive even\n>> > without tablespace change, so it doesn't use CheckRelationTableSpaceMove().\n>> \n>> Sure. I have not checked the patch in details, but even with that it\n>> would be much safer to me if we apply the same sanity checks\n>> everywhere. That's less potential holes to worry about.\n> \n> Thanks Alexey for the new patch. I have been looking at the main\n> patch in details.\n> \n> /*\n> - * Don't allow reindex on temp tables of other backends ... their \n> local\n> - * buffer manager is not going to cope.\n> + * We don't support moving system relations into different \n> tablespaces\n> + * unless allow_system_table_mods=1.\n> */\n> If you remove the check on RELATION_IS_OTHER_TEMP() in\n> reindex_index(), you would allow the reindex of a temp relation owned\n> by a different session if its tablespace is not changed, so this\n> cannot be removed.\n> \n> + !allowSystemTableMods && IsSystemRelation(iRel))\n> ereport(ERROR,\n> - (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> - errmsg(\"cannot reindex temporary tables of other \n> sessions\")));\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"permission denied: \\\"%s\\\" is a system \n> catalog\",\n> + RelationGetRelationName(iRel))));\n> Indeed, a system relation with a relfilenode should be allowed to move\n> under allow_system_table_mods. I think that we had better move this\n> check into CheckRelationTableSpaceMove() instead of reindex_index() to\n> centralize the logic. ALTER TABLE does this business in\n> RangeVarCallbackForAlterRelation(), but our code path opening the\n> relation is different for the non-concurrent case.\n> \n> + if (OidIsValid(params->tablespaceOid) &&\n> + IsSystemClass(relid, classtuple))\n> + {\n> + if (!allowSystemTableMods)\n> + {\n> + /* Skip all system relations, if not \n> allowSystemTableMods *\n> I don't see the need for having two warnings here to say the same\n> thing if a relation is mapped or not mapped, so let's keep it simple.\n> \n\nYeah, I just wanted to separate mapped and system relations, but \nprobably it is too complicated.\n\n> \n> I have found that the test suite was rather messy in its\n> organization. Table creations were done first with a set of tests not\n> really ordered, so that was really hard to follow. This has also led\n> to a set of tests that were duplicated, while other tests have been\n> missed, mainly some cross checks for the concurrent and non-concurrent\n> behaviors. I have reordered the whole so as tests on catalogs, normal\n> tables and partitions are done separately with relations created and\n> dropped for each set. Partitions use a global check for tablespaces\n> and relfilenodes after one concurrent reindex (didn't see the point in\n> doubling with the non-concurrent case as the same code path to select\n> the relations from the partition tree is taken). An ACL test has been\n> added at the end.\n> \n> The case of partitioned indexes was kind of interesting and I thought\n> about that a couple of days, and I took the decision to ignore\n> relations that have no storage as you did, documenting that ALTER\n> TABLE can be used to update the references of the partitioned\n> relations. The command is still useful with this behavior, and the\n> tests I have added track that.\n> \n> Finally, I have reworked the docs, separating the limitations related\n> to system catalogs and partitioned relations, to be more consistent\n> with the notes at the end of the page.\n> \n\nThanks for working on this.\n\n+\tif (tablespacename != NULL)\n+\t{\n+\t\tparams.tablespaceOid = get_tablespace_oid(tablespacename, false);\n+\n+\t\t/* Check permissions except when moving to database's default */\n+\t\tif (OidIsValid(params.tablespaceOid) &&\n\nThis check for OidIsValid() seems to be excessive, since you moved the \nwhole ACL check under 'if (tablespacename != NULL)' here.\n\n+\t\t\tparams.tablespaceOid != MyDatabaseTableSpace)\n+\t\t{\n+\t\t\tAclResult\taclresult;\n\n\n+CREATE INDEX regress_tblspace_test_tbl_idx ON regress_tblspace_test_tbl \n(num1);\n+-- move to global tablespace move fails\n\nMaybe 'move to global tablespace, fail', just to match a style of the \nprevious comments.\n\n+REINDEX (TABLESPACE pg_global) INDEX regress_tblspace_test_tbl_idx;\n\n\n+SELECT relid, parentrelid, level FROM \npg_partition_tree('tbspace_reindex_part_index')\n+ ORDER BY relid, level;\n+SELECT relid, parentrelid, level FROM \npg_partition_tree('tbspace_reindex_part_index')\n+ ORDER BY relid, level;\n\nWhy do you do the same twice in a row? It looks like a typo. Maybe it \nwas intended to be called for partitioned table AND index.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 03 Feb 2021 13:35:26 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Feb 03, 2021 at 12:53:42AM -0600, Justin Pryzby wrote:\n> On Wed, Feb 03, 2021 at 03:37:39PM +0900, Michael Paquier wrote:\n>> index 627b36300c..4ee3951ca0 100644\n>> --- a/doc/src/sgml/ref/reindex.sgml\n>> +++ b/doc/src/sgml/ref/reindex.sgml\n>> @@ -293,8 +311,30 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n>> respectively. Each partition of the specified partitioned relation is\n>> reindexed in a separate transaction. Those commands cannot be used inside\n>> a transaction block when working on a partitioned table or index.\n>> + If a <command>REINDEX</command> command fails when run on a partitioned\n>> + relation, and <literal>TABLESPACE</literal> was specified, then it may not\n>> + have moved all indexes to the new tablespace. Re-running the command\n>> + will rebuild again all the partitions and move previously-unprocessed\n> \n> remove \"again\"\n\nOkay.\n\n>> + indexes to the new tablespace.\n>> + </para>\n>> + \n>> + <para>\n>> + When using the <literal>TABLESPACE</literal> clause with\n>> + <command>REINDEX</command> on a partitioned index or table, only the\n>> + tablespace references of the partitions are updated. As partitioned indexes\n> \n> I think you should say \"of the LEAF partitions ..\". The intermediate,\n> partitioned tables are also \"partitions\" (partitioned partitions if you like).\n\nIndeed, I can see how that's confusing.\n\n>> + are not updated, it is recommended to separately use\n>> + <command>ALTER TABLE ONLY</command> on them to achieve that.\n> \n> Maybe say: \"..to set the default tablespace of any new partitions created in\n> the future\".\n\nNot sure I like that. Here is a proposal:\n\"it is recommended to separately use ALTER TABLE ONLY on them so as\nany new partitions attached inherit the new tablespace value.\"\n--\nMichael",
"msg_date": "Wed, 3 Feb 2021 19:54:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Feb 03, 2021 at 01:35:26PM +0300, Alexey Kondratov wrote:\n> This check for OidIsValid() seems to be excessive, since you moved the whole\n> ACL check under 'if (tablespacename != NULL)' here.\n\nThat's more consistent with ATPrepSetTableSpace().\n\n> +SELECT relid, parentrelid, level FROM\n> pg_partition_tree('tbspace_reindex_part_index')\n> + ORDER BY relid, level;\n> +SELECT relid, parentrelid, level FROM\n> pg_partition_tree('tbspace_reindex_part_index')\n> + ORDER BY relid, level;\n> \n> Why do you do the same twice in a row? It looks like a typo. Maybe it was\n> intended to be called for partitioned table AND index.\n\nYes, my intention was to show the tree of the set of tables. It is\nnot really interesting for this test anyway, so let's just remove this\nextra query.\n--\nMichael",
"msg_date": "Wed, 3 Feb 2021 20:01:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Wed, Feb 03, 2021 at 07:54:42PM +0900, Michael Paquier wrote:\n> Not sure I like that. Here is a proposal:\n> \"it is recommended to separately use ALTER TABLE ONLY on them so as\n> any new partitions attached inherit the new tablespace value.\"\n\nSo, I have done more work on this stuff today, and applied that as of\nc5b2860. While reviewing my changes, I have noticed that I have\nmanaged to break ALTER TABLE SET TABLESPACE which would have failed\nwhen cascading to a toast relation, the extra check placed previously\nin CheckRelationTableSpaceMove() being incorrect. The most surprising\npart was that we had zero in-core tests to catch this mistake, so I\nhave added an extra test to cover this scenario while on it.\n\nA second thing I have come back to is allow_system_table_mods for\ntoast relations, and decided to just forbid TABLESPACE if attempting\nto use it directly on a system table even if allow_system_table_mods\nis true. This was leading to inconsistent behaviors and weirdness in\nthe concurrent case because all the indexes are processed in series\nafter building a list. As we want to ignore the move of toast indexes\nwhen moving the indexes of the parent table, this was leading to extra\nconditions that are not really worth supporting after thinking about\nit. One other issue was the lack of consistency when using pg_global\nthat was a no-op for the concurrent case but failed in the \nnon-concurrent case. I have put in place more regression tests for\nall that.\n\nRegarding the VACUUM and CLUSTER cases, I am not completely sure if\ngoing through these for a tablespace case is the best move we can do,\nas ALTER TABLE is able to mix multiple operations and all of them\nrequire already an AEL to work. REINDEX was different thanks to the\ncase of CONCURRENTLY. Anyway, as a lot of work has been done here\nalready, I would recommend to create new threads about those two\ntopics. I am also closing this patch in the CF app.\n--\nMichael",
"msg_date": "Thu, 4 Feb 2021 15:38:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
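The restrictions settled on in the commit described above can be summed up with a couple of SQL sketches, following the error cases quoted elsewhere in the thread (relation and tablespace names are illustrative):

```sql
-- A non-shared relation cannot be moved into the shared tablespace;
-- this now fails consistently in both the concurrent and
-- non-concurrent paths:
REINDEX (TABLESPACE pg_global) TABLE some_table;
-- ERROR:  cannot move non-shared relation to tablespace "pg_global"

-- Using TABLESPACE directly on a system table is forbidden, even when
-- allow_system_table_mods is enabled:
REINDEX (TABLESPACE regress_tblspace) TABLE pg_class;
```

Plain `REINDEX TABLE pg_class;` (without the TABLESPACE clause) remains valid, since no storage is being relocated.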
{
"msg_contents": "On Thu, Feb 04, 2021 at 03:38:39PM +0900, Michael Paquier wrote:\n> On Wed, Feb 03, 2021 at 07:54:42PM +0900, Michael Paquier wrote:\n> > Not sure I like that. Here is a proposal:\n> > \"it is recommended to separately use ALTER TABLE ONLY on them so as\n> > any new partitions attached inherit the new tablespace value.\"\n> \n> So, I have done more work on this stuff today, and applied that as of\n> c5b2860.\n\n> A second thing I have come back to is allow_system_table_mods for\n> toast relations, and decided to just forbid TABLESPACE if attempting\n> to use it directly on a system table even if allow_system_table_mods\n> is true. This was leading to inconsistent behaviors and weirdness in\n> the concurrent case because all the indexes are processed in series\n> after building a list. As we want to ignore the move of toast indexes\n> when moving the indexes of the parent table, this was leading to extra\n> conditions that are not really worth supporting after thinking about\n> it. One other issue was the lack of consistency when using pg_global\n> that was a no-op for the concurrent case but failed in the \n> non-concurrent case. I have put in place more regression tests for\n> all that.\n\nIsn't this dead code?\n\npostgres=# REINDEX (CONCURRENTLY, TABLESPACE pg_global) TABLE pg_class;\nERROR: 0A000: cannot reindex system catalogs concurrently\nLOCATION: ReindexRelationConcurrently, indexcmds.c:3276\n\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex 127ba7835d..c77a9b2563 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -3260,73 +3260,66 @@ ReindexRelationConcurrently(Oid relationOid, ReindexParams *params)\n \t\t\t{\n \t\t\t\tif (IsCatalogRelationOid(relationOid))\n \t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t\t\t\t errmsg(\"cannot reindex system catalogs concurrently\")));\n \n...\n \n-\t\t\t\tif (OidIsValid(params->tablespaceOid) &&\n-\t\t\t\t\tIsSystemRelation(heapRelation))\n-\t\t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n-\t\t\t\t\t\t\t errmsg(\"cannot move system relation \\\"%s\\\"\",\n-\t\t\t\t\t\t\t\t\tRelationGetRelationName(heapRelation))));\n-\n@@ -3404,73 +3397,66 @@ ReindexRelationConcurrently(Oid relationOid, ReindexParams *params)\n if (IsCatalogRelationOid(heapId))\n \t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t\t\t\t errmsg(\"cannot reindex system catalogs concurrently\")));\n... \n \n-\t\t\t\tif (OidIsValid(params->tablespaceOid) &&\n-\t\t\t\t\tIsSystemRelation(heapRelation))\n-\t\t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n-\t\t\t\t\t\t\t errmsg(\"cannot move system relation \\\"%s\\\"\",\n-\t\t\t\t\t\t\t\t\tget_rel_name(relationOid))));\n-",
"msg_date": "Sun, 14 Feb 2021 20:10:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "On Sun, Feb 14, 2021 at 08:10:50PM -0600, Justin Pryzby wrote:\n> Isn't this dead code ?\n\nNope, it's not dead. Those two code paths can be hit when attempting\na reidex with a tablespace move directly on toast tables and indexes,\nsee:\n=# create table aa (a text);\nCREATE TABLE\n=# select relname from pg_class where oid > 16000;\n relname\n----------------------\n aa\n pg_toast_16385\n pg_toast_16385_index\n(3 rows)\n=# reindex (concurrently, tablespace pg_default) table\n pg_toast.pg_toast_16385;\nERROR: 0A000: cannot move system relation \"pg_toast_16385\"\nLOCATION: ReindexRelationConcurrently, indexcmds.c:3295\n=# reindex (concurrently, tablespace pg_default) index\n pg_toast.pg_toast_16385_index;\nERROR: 0A000: cannot move system relation \"pg_toast_16385_index\"\nLOCATION: ReindexRelationConcurrently, indexcmds.c:3439\n\nIt is easy to save the relation name using \\gset in a regression test,\nbut we had better keep a reference to the relation name in the error\nmessage so this would not be really portable. Using a PL function to\ndo that with a CATCH block would not work either as CONCURRENTLY\ncannot be run in a transaction block. This leaves 090_reindexdb.pl,\nbut I was not really convinced that this was worth the extra test\ncycles (I am aware of the --tablespace option missing in reindexdb,\nsomeone I know was trying to get that done for the next CF).\n--\nMichael",
"msg_date": "Mon, 15 Feb 2021 11:58:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on\n the fly"
},
{
"msg_contents": "Hi Hackers,\n\nInside PostgresNode.pm there is a free port choosing routine --- \nget_free_port(). The comment section there says:\n\n\t# On non-Linux, non-Windows kernels, binding to 127.0.0/24 addresses\n\t# other than 127.0.0.1 might fail with EADDRNOTAVAIL.\n\nAnd this is an absolute true, on BSD-like systems (macOS and FreeBSD \ntested) it hangs on looping through the entire ports range over and over \nwhen $PostgresNode::use_tcp = 1 is set, since bind fails with:\n\n# Checking port 52208\n# bind: 127.0.0.1 52208\n# bind: 127.0.0.2 52208\nbind: Can't assign requested address\n\nTo reproduce just apply reproduce.diff and try to run 'make -C \nsrc/bin/pg_ctl check'.\n\nThis is not a case with standard Postgres tests, since TestLib.pm \nchooses unix sockets automatically everywhere outside Windows. However, \nwe got into this problem when tried to run a custom tap test that \nrequired TCP for stable running.\n\nThat way, if it really could happen why not to just skip binding to \n127.0.0/24 addresses other than 127.0.0.1 outside of Linux/Windows as \nper attached patch_PostgresNode.diff?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 20 Apr 2021 01:22:41 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Free port choosing freezes when PostgresNode::use_tcp is used on BSD\n systems"
},
{
"msg_contents": "Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n> And this is an absolute true, on BSD-like systems (macOS and FreeBSD \n> tested) it hangs on looping through the entire ports range over and over \n> when $PostgresNode::use_tcp = 1 is set, since bind fails with:\n\nHm.\n\n> That way, if it really could happen why not to just skip binding to \n> 127.0.0/24 addresses other than 127.0.0.1 outside of Linux/Windows as \n> per attached patch_PostgresNode.diff?\n\nThat patch seems wrong, or at least it's ignoring the advice immediately\nabove about binding to 0.0.0.0 only on Windows.\n\nI wonder whether we could get away with just replacing the $use_tcp\ntest with $TestLib::windows_os. It's not really apparent to me\nwhy we should care about 127.0.0.not-1 on Unix-oid systems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 19:22:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Free port choosing freezes when PostgresNode::use_tcp is used on\n BSD systems"
},
{
"msg_contents": "\nOn 4/19/21 7:22 PM, Tom Lane wrote:\n> Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n>> And this is an absolute true, on BSD-like systems (macOS and FreeBSD \n>> tested) it hangs on looping through the entire ports range over and over \n>> when $PostgresNode::use_tcp = 1 is set, since bind fails with:\n> Hm.\n>\n>> That way, if it really could happen why not to just skip binding to \n>> 127.0.0/24 addresses other than 127.0.0.1 outside of Linux/Windows as \n>> per attached patch_PostgresNode.diff?\n> That patch seems wrong, or at least it's ignoring the advice immediately\n> above about binding to 0.0.0.0 only on Windows.\n>\n> I wonder whether we could get away with just replacing the $use_tcp\n> test with $TestLib::windows_os. It's not really apparent to me\n> why we should care about 127.0.0.not-1 on Unix-oid systems.\n>\n> \t\t\t\n\n\nYeah\n\n\nThe comment is a bit strange anyway - Cygwin is actually going to use\nUnix sockets, not TCP.\n\n\nI think I would just change the test to this: $use_tcp &&\n$TestLib::windows_os.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Apr 2021 10:59:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Free port choosing freezes when PostgresNode::use_tcp is used on\n BSD systems"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/19/21 7:22 PM, Tom Lane wrote:\n>> I wonder whether we could get away with just replacing the $use_tcp\n>> test with $TestLib::windows_os. It's not really apparent to me\n>> why we should care about 127.0.0.not-1 on Unix-oid systems.\n\n> Yeah\n> The comment is a bit strange anyway - Cygwin is actually going to use\n> Unix sockets, not TCP.\n> I think I would just change the test to this: $use_tcp &&\n> $TestLib::windows_os.\n\nWorks for me, but we need to revise the comment to match.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Apr 2021 11:03:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Free port choosing freezes when PostgresNode::use_tcp is used on\n BSD systems"
},
{
"msg_contents": "On 2021-04-20 18:03, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 4/19/21 7:22 PM, Tom Lane wrote:\n>>> I wonder whether we could get away with just replacing the $use_tcp\n>>> test with $TestLib::windows_os. It's not really apparent to me\n>>> why we should care about 127.0.0.not-1 on Unix-oid systems.\n> \n>> Yeah\n>> The comment is a bit strange anyway - Cygwin is actually going to use\n>> Unix sockets, not TCP.\n>> I think I would just change the test to this: $use_tcp &&\n>> $TestLib::windows_os.\n> \n> Works for me, but we need to revise the comment to match.\n> \n\nThen it could be somewhat like that, I guess.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 21 Apr 2021 01:49:59 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Free port choosing freezes when PostgresNode::use_tcp is used on\n BSD systems"
},
{
"msg_contents": "\nOn 4/20/21 6:49 PM, Alexey Kondratov wrote:\n> On 2021-04-20 18:03, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> On 4/19/21 7:22 PM, Tom Lane wrote:\n>>>> I wonder whether we could get away with just replacing the $use_tcp\n>>>> test with $TestLib::windows_os.� It's not really apparent to me\n>>>> why we should care about 127.0.0.not-1 on Unix-oid systems.\n>>\n>>> Yeah\n>>> The comment is a bit strange anyway - Cygwin is actually going to use\n>>> Unix sockets, not TCP.\n>>> I think I would just change the test to this: $use_tcp &&\n>>> $TestLib::windows_os.\n>>\n>> Works for me, but we need to revise the comment to match.\n>>\n>\n> Then it could be somewhat like that, I guess.\n>\n>\n>\n\n\npushed with slight edits.\n\n\nThanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 21 Apr 2021 10:32:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Free port choosing freezes when PostgresNode::use_tcp is used on\n BSD systems"
}
] |
[
{
"msg_contents": "Hi!\n\nBased on discussion about observing changes on an open query in a\nreactive manner (to support reactive web applications) [1], I\nidentified that one critical feature is missing to fully implement\ndiscussed design of having reactive queries be represented as\nmaterialized views, and changes to these materialized views would then\nbe observed and pushed to the client through LISTEN/NOTIFY.\n\nThis is my first time contributing to PostgreSQL, so I hope I am\nstarting this process well.\n\nI would like to propose that support for AFTER triggers are added to\nmaterialized views. I experimented a bit and it seems this is mostly\njust a question of enabling/exposing them. See attached patch. This\nenabled me to add trigger to a material view which mostly worked. Here\nare my findings.\n\nRunning REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\nper statement and per row. There are few improvements which could be\ndone:\n\n- Currently only insert and remove operations are done on the\nmaterialized view. This is because the current logic just removes\nchanged rows and inserts new rows.\n- In current concurrently refresh logic those insert and remove\noperations are made even if there are no changes to be done. Which\ntriggers a statement trigger unnecessary. A small improvement could be\nto skip the statement in that case, but looking at the code this seems\nmaybe tricky because both each of inserts and deletions are done\ninside one query each.\n- Current concurrently refresh logic does never do updates on existing\nrows. It would be nicer to have that so that triggers are more aligned\nwith real changes to the data. So current two queries could be changed\nto three, each doing one of the insert, update, and delete.\n\nNon-concurrent refresh does not trigger any trigger. But it seems all\ndata to do so is there (previous table, new table), at least for the\nstatement-level trigger. 
Row-level triggers could also be simulated\nprobably (with TRUNCATE and INSERT triggers).\n\n[1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Mon, 24 Dec 2018 12:59:43 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature: triggers on materialized views"
},
{
"msg_contents": "On Mon, Dec 24, 2018 at 12:59:43PM -0800, Mitar wrote:\n> Hi!\n> \n> Based on discussion about observing changes on an open query in a\n> reactive manner (to support reactive web applications) [1], I\n> identified that one critical feature is missing to fully implement\n> discussed design of having reactive queries be represented as\n> materialized views, and changes to these materialized views would then\n> be observed and pushed to the client through LISTEN/NOTIFY.\n> \n> This is my first time contributing to PostgreSQL, so I hope I am\n> starting this process well.\n\nYou've got the right mailing list, a description of what you want, and\na PoC patch. You also got the patch in during the time between\nCommitfests. You're doing great!\n\n> I would like to propose that support for AFTER triggers are added to\n> materialized views. I experimented a bit and it seems this is mostly\n> just a question of enabling/exposing them. See attached patch.\n\nAbout that. When there's a change (or possible change) in user-visible\nbehavior, it should come with regression tests, which it would make\nsense to add to src/tests/regress/matview.sql along with the\ncorresponding changes to src/tests/regress/expected/matview.out\n\n> This enabled me to add trigger to a material view which mostly\n> worked. Here are my findings.\n> \n> Running REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\n> per statement and per row.\n\nYou'd want at least one test for each of those new features.\n\n> There are few improvements which could be\n> done:\n> \n> - Currently only insert and remove operations are done on the\n> materialized view. This is because the current logic just removes\n> changed rows and inserts new rows.\n\nWhat other operations might you want to support?\n\n> - In current concurrently refresh logic those insert and remove\n> operations are made even if there are no changes to be done. Which\n> triggers a statement trigger unnecessary. 
A small improvement could be\n> to skip the statement in that case, but looking at the code this seems\n> maybe tricky because both each of inserts and deletions are done\n> inside one query each.\n\nAs far as you can tell, is this just an efficiency optimization, or\nmight it go to correctness of the behavior?\n\n> - Current concurrently refresh logic does never do updates on existing\n> rows. It would be nicer to have that so that triggers are more aligned\n> with real changes to the data. So current two queries could be changed\n> to three, each doing one of the insert, update, and delete.\n\nI'm not sure I understand the problem being described here. Do you see\nthese as useful to separate for some reason?\n\n> Non-concurrent refresh does not trigger any trigger. But it seems\n> all data to do so is there (previous table, new table), at least for\n> the statement-level trigger. Row-level triggers could also be\n> simulated probably (with TRUNCATE and INSERT triggers).\n\nWould it make more sense to fill in the missing implementations of NEW\nand OLD for per-row triggers instead of adding another hack?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 24 Dec 2018 23:20:19 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nThanks for reply!\n\nOn Mon, Dec 24, 2018 at 2:20 PM David Fetter <david@fetter.org> wrote:\n> You've got the right mailing list, a description of what you want, and\n> a PoC patch. You also got the patch in during the time between\n> Commitfests. You're doing great!\n\nGreat!\n\nOne thing I am unclear about is how it is determined if this is a\nviable feature to be eventually included. You gave me some suggestions\nto improve in my patch (adding tests and so on). Does this mean that\nthe patch should be fully done before a decision is made?\n\nAlso, the workflow is that I improve things, and resubmit a patch to\nthe mailing list, for now?\n\n> > - Currently only insert and remove operations are done on the\n> > materialized view. This is because the current logic just removes\n> > changed rows and inserts new rows.\n>\n> What other operations might you want to support?\n\nUpdate. So if a row is changing, instead of doing a remove and insert,\nwhat currently is being done, I would prefer an update. Then UPDATE\ntrigger operation would happen as well. Maybe the INSERT query could\nbe changed to INSERT ... ON CONFLICT UPDATE query (not sure if this\none does UPDATE trigger operation on conflict), and REMOVE changed to\nremove just rows which were really removed, but not only updated.\n\n> As far as you can tell, is this just an efficiency optimization, or\n> might it go to correctness of the behavior?\n\nIt is just an optimization. Or maybe even just a surprise. Maybe a\ndocumentation addition could help here. In my use case I would loop\nover OLD and NEW REFERENCING TABLE so if they are empty, nothing would\nbe done. But it is just surprising that DELETE trigger is called even\nwhen no rows are being deleted in the materialized view.\n\n> I'm not sure I understand the problem being described here. 
Do you see\n> these as useful to separate for some reason?\n\nSo rows which are just updated currently get first DELETE trigger\ncalled and then INSERT. The issue is that if I am observing this\nbehavior from outside, it makes it unclear when I see DELETE if this\nmeans really that a row has been deleted or it just means that later\non an INSERT would happen. Now I have to wait for an eventual INSERT\nto determine that. But how long should I wait? It makes consuming\nthese notifications tricky.\n\nIf I just blindly respond to those notifications, this could introduce\nother problems. For example, if I have a reactive web application it\ncould mean a visible flicker to the user. Instead of updating rendered\nrow, I would first delete it and then later on re-insert it.\n\n> > Non-concurrent refresh does not trigger any trigger. But it seems\n> > all data to do so is there (previous table, new table), at least for\n> > the statement-level trigger. Row-level triggers could also be\n> > simulated probably (with TRUNCATE and INSERT triggers).\n>\n> Would it make more sense to fill in the missing implementations of NEW\n> and OLD for per-row triggers instead of adding another hack?\n\nYou lost me here. But I agree, we should implement this fully, without\nhacks. I just do not know how exactly.\n\nAre you saying that we should support only row-level triggers, or that\nwe should support both statement-level and row-level triggers, but\njust make sure we implement this properly? I think that my suggestion\nof using TRUNCATE and INSERT triggers is reasonable in the case of\nfull refresh. This is what happens. If we would want to have\nDELETE/UPDATE/INSERT triggers, we would have to compute the diff like\nconcurrent version has to do, which would defeat the difference\nbetween the two. But yes, all INSERT trigger calls should have NEW\nprovided.\n\nSo per-statement trigger would have TRUNCATE and INSERT called. 
And\nper-row trigger would have TRUNCATE and per-row INSERTs called.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
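[Editor's note] To make the proposed full-refresh semantics concrete, here is a small Python sketch (an illustration of the proposal only, not executor code; the function name is invented) of the event sequences the two trigger levels would see after a non-concurrent REFRESH:

```python
def full_refresh_events(new_rows, per_row=False):
    """Trigger events a non-concurrent REFRESH would fire under the
    proposal: the old contents disappear as one TRUNCATE event, then
    the rebuilt contents arrive as INSERT events."""
    events = [("TRUNCATE", None)]
    if per_row:
        # Row-level: one INSERT event per new row, carrying NEW.
        events.extend(("INSERT", row) for row in new_rows)
    else:
        # Statement-level: a single INSERT event covering all rows.
        events.append(("INSERT", None))
    return events
```

This avoids computing a diff (which is what distinguishes the non-concurrent path from CONCURRENTLY) while still giving triggers an accurate picture of what happened.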
"msg_date": "Mon, 24 Dec 2018 16:13:44 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nI made another version of the patch. This one does UPDATEs for changed\nrow instead of DELETE/INSERT.\n\nAll existing regression tests are still passing (make check).\n\n\nMitar\n\nOn Mon, Dec 24, 2018 at 4:13 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> Thanks for reply!\n>\n> On Mon, Dec 24, 2018 at 2:20 PM David Fetter <david@fetter.org> wrote:\n> > You've got the right mailing list, a description of what you want, and\n> > a PoC patch. You also got the patch in during the time between\n> > Commitfests. You're doing great!\n>\n> Great!\n>\n> One thing I am unclear about is how it is determined if this is a\n> viable feature to be eventually included. You gave me some suggestions\n> to improve in my patch (adding tests and so on). Does this mean that\n> the patch should be fully done before a decision is made?\n>\n> Also, the workflow is that I improve things, and resubmit a patch to\n> the mailing list, for now?\n>\n> > > - Currently only insert and remove operations are done on the\n> > > materialized view. This is because the current logic just removes\n> > > changed rows and inserts new rows.\n> >\n> > What other operations might you want to support?\n>\n> Update. So if a row is changing, instead of doing a remove and insert,\n> what currently is being done, I would prefer an update. Then UPDATE\n> trigger operation would happen as well. Maybe the INSERT query could\n> be changed to INSERT ... ON CONFLICT UPDATE query (not sure if this\n> one does UPDATE trigger operation on conflict), and REMOVE changed to\n> remove just rows which were really removed, but not only updated.\n>\n> > As far as you can tell, is this just an efficiency optimization, or\n> > might it go to correctness of the behavior?\n>\n> It is just an optimization. Or maybe even just a surprise. Maybe a\n> documentation addition could help here. In my use case I would loop\n> over OLD and NEW REFERENCING TABLE so if they are empty, nothing would\n> be done. 
But it is just surprising that DELETE trigger is called even\n> when no rows are being deleted in the materialized view.\n>\n> > I'm not sure I understand the problem being described here. Do you see\n> > these as useful to separate for some reason?\n>\n> So rows which are just updated currently get first DELETE trigger\n> called and then INSERT. The issue is that if I am observing this\n> behavior from outside, it makes it unclear when I see DELETE if this\n> means really that a row has been deleted or it just means that later\n> on an INSERT would happen. Now I have to wait for an eventual INSERT\n> to determine that. But how long should I wait? It makes consuming\n> these notifications tricky.\n>\n> If I just blindly respond to those notifications, this could introduce\n> other problems. For example, if I have a reactive web application it\n> could mean a visible flicker to the user. Instead of updating rendered\n> row, I would first delete it and then later on re-insert it.\n>\n> > > Non-concurrent refresh does not trigger any trigger. But it seems\n> > > all data to do so is there (previous table, new table), at least for\n> > > the statement-level trigger. Row-level triggers could also be\n> > > simulated probably (with TRUNCATE and INSERT triggers).\n> >\n> > Would it make more sense to fill in the missing implementations of NEW\n> > and OLD for per-row triggers instead of adding another hack?\n>\n> You lost me here. But I agree, we should implement this fully, without\n> hacks. I just do not know how exactly.\n>\n> Are you saying that we should support only row-level triggers, or that\n> we should support both statement-level and row-level triggers, but\n> just make sure we implement this properly? I think that my suggestion\n> of using TRUNCATE and INSERT triggers is reasonable in the case of\n> full refresh. This is what happens. 
If we would want to have\n> DELETE/UPDATE/INSERT triggers, we would have to compute the diff like\n> concurrent version has to do, which would defeat the difference\n> between the two. But yes, all INSERT trigger calls should have NEW\n> provided.\n>\n> So per-statement trigger would have TRUNCATE and INSERT called. And\n> per-row trigger would have TRUNCATE and per-row INSERTs called.\n>\n>\n> Mitar\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
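[Editor's note] As a rough model of what this version of the patch computes, the following pure-Python sketch (illustrative only; the real patch does this with SQL against the transient table built by REFRESH ... CONCURRENTLY) splits a refresh into the three trigger-visible operation sets, keyed by the materialized view's unique index:

```python
def diff_refresh(old_rows, new_rows):
    """Classify rows by unique key into INSERT/UPDATE/DELETE sets,
    instead of emitting delete-then-insert pairs for changed rows.
    old_rows/new_rows map the unique-index key to the row contents."""
    inserted = {k: new_rows[k] for k in new_rows.keys() - old_rows.keys()}
    deleted = {k: old_rows[k] for k in old_rows.keys() - new_rows.keys()}
    # Rows present on both sides with different contents become
    # updates, carrying both the OLD and NEW images.
    updated = {k: (old_rows[k], new_rows[k])
               for k in old_rows.keys() & new_rows.keys()
               if old_rows[k] != new_rows[k]}
    return inserted, updated, deleted
```

An unchanged row lands in none of the three sets, which is also the case where the per-statement triggers could in principle be skipped entirely.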
"msg_date": "Mon, 24 Dec 2018 18:17:16 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nSo I think this makes it work great for REFRESH MATERIALIZED VIEW\nCONCURRENTLY. I think we can leave empty statement triggers as they\nare. I have not found a nice way to not do them.\n\nFor adding triggers to REFRESH MATERIALIZED VIEW I would need some\nhelp and pointers. I am not sure how to write calling triggers there.\nAny reference to an existing code which does something similar would\nbe great. So I think after swapping heaps we should call TRUNCATE\ntrigger and then INSERT for all new rows.\n\n\nMitar\n\nOn Mon, Dec 24, 2018 at 6:17 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> I made another version of the patch. This one does UPDATEs for changed\n> row instead of DELETE/INSERT.\n>\n> All existing regression tests are still passing (make check).\n>\n>\n> Mitar\n>\n> On Mon, Dec 24, 2018 at 4:13 PM Mitar <mmitar@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > Thanks for reply!\n> >\n> > On Mon, Dec 24, 2018 at 2:20 PM David Fetter <david@fetter.org> wrote:\n> > > You've got the right mailing list, a description of what you want, and\n> > > a PoC patch. You also got the patch in during the time between\n> > > Commitfests. You're doing great!\n> >\n> > Great!\n> >\n> > One thing I am unclear about is how it is determined if this is a\n> > viable feature to be eventually included. You gave me some suggestions\n> > to improve in my patch (adding tests and so on). Does this mean that\n> > the patch should be fully done before a decision is made?\n> >\n> > Also, the workflow is that I improve things, and resubmit a patch to\n> > the mailing list, for now?\n> >\n> > > > - Currently only insert and remove operations are done on the\n> > > > materialized view. This is because the current logic just removes\n> > > > changed rows and inserts new rows.\n> > >\n> > > What other operations might you want to support?\n> >\n> > Update. 
So if a row is changing, instead of doing a remove and insert,\n> > what currently is being done, I would prefer an update. Then UPDATE\n> > trigger operation would happen as well. Maybe the INSERT query could\n> > be changed to INSERT ... ON CONFLICT UPDATE query (not sure if this\n> > one does UPDATE trigger operation on conflict), and REMOVE changed to\n> > remove just rows which were really removed, but not only updated.\n> >\n> > > As far as you can tell, is this just an efficiency optimization, or\n> > > might it go to correctness of the behavior?\n> >\n> > It is just an optimization. Or maybe even just a surprise. Maybe a\n> > documentation addition could help here. In my use case I would loop\n> > over OLD and NEW REFERENCING TABLE so if they are empty, nothing would\n> > be done. But it is just surprising that DELETE trigger is called even\n> > when no rows are being deleted in the materialized view.\n> >\n> > > I'm not sure I understand the problem being described here. Do you see\n> > > these as useful to separate for some reason?\n> >\n> > So rows which are just updated currently get first DELETE trigger\n> > called and then INSERT. The issue is that if I am observing this\n> > behavior from outside, it makes it unclear when I see DELETE if this\n> > means really that a row has been deleted or it just means that later\n> > on an INSERT would happen. Now I have to wait for an eventual INSERT\n> > to determine that. But how long should I wait? It makes consuming\n> > these notifications tricky.\n> >\n> > If I just blindly respond to those notifications, this could introduce\n> > other problems. For example, if I have a reactive web application it\n> > could mean a visible flicker to the user. Instead of updating rendered\n> > row, I would first delete it and then later on re-insert it.\n> >\n> > > > Non-concurrent refresh does not trigger any trigger. 
But it seems\n> > > > all data to do so is there (previous table, new table), at least for\n> > > > the statement-level trigger. Row-level triggers could also be\n> > > > simulated probably (with TRUNCATE and INSERT triggers).\n> > >\n> > > Would it make more sense to fill in the missing implementations of NEW\n> > > and OLD for per-row triggers instead of adding another hack?\n> >\n> > You lost me here. But I agree, we should implement this fully, without\n> > hacks. I just do not know how exactly.\n> >\n> > Are you saying that we should support only row-level triggers, or that\n> > we should support both statement-level and row-level triggers, but\n> > just make sure we implement this properly? I think that my suggestion\n> > of using TRUNCATE and INSERT triggers is reasonable in the case of\n> > full refresh. This is what happens. If we would want to have\n> > DELETE/UPDATE/INSERT triggers, we would have to compute the diff like\n> > concurrent version has to do, which would defeat the difference\n> > between the two. But yes, all INSERT trigger calls should have NEW\n> > provided.\n> >\n> > So per-statement trigger would have TRUNCATE and INSERT called. And\n> > per-row trigger would have TRUNCATE and per-row INSERTs called.\n> >\n> >\n> > Mitar\n> >\n> > --\n> > http://mitar.tnode.com/\n> > https://twitter.com/mitar_m\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Mon, 24 Dec 2018 18:20:01 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On Mon, Dec 24, 2018 at 04:13:44PM -0800, Mitar wrote:\n> Hi!\n> \n> Thanks for reply!\n> \n> On Mon, Dec 24, 2018 at 2:20 PM David Fetter <david@fetter.org> wrote:\n> > You've got the right mailing list, a description of what you want, and\n> > a PoC patch. You also got the patch in during the time between\n> > Commitfests. You're doing great!\n> \n> Great!\n> \n> One thing I am unclear about is how it is determined if this is a\n> viable feature to be eventually included. You gave me some suggestions\n> to improve in my patch (adding tests and so on). Does this mean that\n> the patch should be fully done before a decision is made?\n> \n> Also, the workflow is that I improve things, and resubmit a patch to\n> the mailing list, for now?\n> \n> > > - Currently only insert and remove operations are done on the\n> > > materialized view. This is because the current logic just removes\n> > > changed rows and inserts new rows.\n> >\n> > What other operations might you want to support?\n> \n> Update. So if a row is changing, instead of doing a remove and insert,\n> what currently is being done, I would prefer an update. Then UPDATE\n> trigger operation would happen as well. Maybe the INSERT query could\n> be changed to INSERT ... ON CONFLICT UPDATE query (not sure if this\n> one does UPDATE trigger operation on conflict), and REMOVE changed to\n> remove just rows which were really removed, but not only updated.\n\nThere might be a reason it's the way it is. Looking at the commits\nthat introduced this might shed some light.\n\n> > I'm not sure I understand the problem being described here. Do you see\n> > these as useful to separate for some reason?\n> \n> So rows which are just updated currently get first DELETE trigger\n> called and then INSERT. 
The issue is that if I am observing this\n> behavior from outside, it makes it unclear when I see DELETE if this\n> means really that a row has been deleted or it just means that later\n> on an INSERT would happen. Now I have to wait for an eventual INSERT\n> to determine that. But how long should I wait? It makes consuming\n> these notifications tricky.\n\nIf it helps you think about it better, all NOTIFICATIONs are sent on\nCOMMIT, i.e. you don't need to worry as much about what things should\nor shouldn't have arrived. The down side, such as it is, is that they\ndon't convey premature knowledge about a state that may never arrive.\n\n> If I just blindly respond to those notifications, this could introduce\n> other problems. For example, if I have a reactive web application it\n> could mean a visible flicker to the user. Instead of updating rendered\n> row, I would first delete it and then later on re-insert it.\n\nThis is at what I hope is a level quite distinct from database\noperations. Separation of concerns via the model-view-controller (or\nsimilar) architecture and all that.\n\n> > > Non-concurrent refresh does not trigger any trigger. But it seems\n> > > all data to do so is there (previous table, new table), at least for\n> > > the statement-level trigger. Row-level triggers could also be\n> > > simulated probably (with TRUNCATE and INSERT triggers).\n> >\n> > Would it make more sense to fill in the missing implementations of NEW\n> > and OLD for per-row triggers instead of adding another hack?\n> \n> You lost me here. But I agree, we should implement this fully, without\n> hacks. I just do not know how exactly.\n\nSorry I was unclear. The SQL standard defines both transition tables,\nwhich we have, for per-statement triggers, and transition variables,\nwhich we don't, for per-row triggers. 
Here's the relevant part of the\nsyntax:\n\n<trigger definition> ::=\n CREATE TRIGGER <trigger name> <trigger action time> <trigger event>\n ON <table name> [ REFERENCING <transition table or variable list> ]\n <triggered action>\n\n<transition table or variable list> ::=\n <transition table or variable>...\n\n<transition table or variable> ::=\n OLD [ ROW ] [ AS ] <old transition variable name>\n | NEW [ ROW ] [ AS ] <new transition variable name>\n | OLD TABLE [ AS ] <old transition table name>\n | NEW TABLE [ AS ] <new transition table name>\n\n> Are you saying that we should support only row-level triggers, or that\n> we should support both statement-level and row-level triggers, but\n> just make sure we implement this properly?\n\nThe latter, although we might need to defer the row-level triggers\nuntil we support transition variables.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Tue, 25 Dec 2018 19:03:12 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Tue, Dec 25, 2018 at 10:03 AM David Fetter <david@fetter.org> wrote:\n> If it helps you think about it better, all NOTIFICATIONs are sent on\n> COMMIT, i.e. you don't need to worry as much about what things should\n> or shouldn't have arrived. The down side, such as it is, is that they\n> don't convey premature knowledge about a state that may never arrive.\n\nThis fact does not really help me. My client code listening to\nNOTIFICATIONs does not know when some other client made a COMMIT. There\nis no NOTIFICATION saying \"this is the end of the commit for which I\njust sent you notifications\".\n\n> This is at what I hope is a level quite distinct from database\n> operations. Separation of concerns via the model-view-controller (or\n> similar) architecture and all that.\n\nOf course, but garbage in, garbage out. If notifications are\nsuperfluous then another abstraction layer has to fix them. I would\nprefer if this did not have to be the case.\n\nBut it seems it is relatively easy to fix this logic and have\nINSERTs, DELETEs and UPDATEs. The patch I updated and attached\npreviously does that.\n\n> Sorry I was unclear. The SQL standard defines both transition tables,\n> which we have, for per-statement triggers, and transition variables,\n> which we don't, for per-row triggers.\n\nI thought that PostgreSQL has transition variables for per-row triggers,\nonly that it is not possible to (re)name them (they depend on the\ntrigger function language)? But there are OLD and NEW variables\navailable in per-row triggers, or equivalent?\n\n> The latter, although we might need to defer the row-level triggers\n> until we support transition variables.\n\nNot sure how transition variables are implemented currently for\nregular tables, but we could probably do the same?\n\nAnyway, I do not know how to proceed here to implement either\nstatement-level or row-level triggers. It could be just a matter\nof calling some function to fire them, but I am not familiar enough\nwith the codebase to know which. So any pointers to existing code which\ndoes something similar would be great. So, what should be called once the\nmaterialized view's heaps are swapped, to fire the triggers?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Tue, 25 Dec 2018 10:26:56 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 2018-Dec-24, Mitar wrote:\n\n\n> I made another version of the patch. This one does UPDATEs for changed\n> row instead of DELETE/INSERT.\n> \n> All existing regression tests are still passing (make check).\n\nOkay, but you don't add any for your new feature, which is not good.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 25 Dec 2018 23:47:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Tue, Dec 25, 2018 at 6:47 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I made another version of the patch. This one does UPDATEs for changed\n> > row instead of DELETE/INSERT.\n> >\n> > All existing regression tests are still passing (make check).\n>\n> Okay, but you don't add any for your new feature, which is not good.\n\nYes, I have not yet done that. I want first to also add calling\ntriggers for non-concurrent refresh, but I would need a bit of help there\n(what to call, and maybe an example of code which does something similar\nalready).\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Tue, 25 Dec 2018 18:56:48 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 2018-Dec-25, Mitar wrote:\n\n> On Tue, Dec 25, 2018 at 6:47 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > I made another version of the patch. This one does UPDATEs for changed\n> > > row instead of DELETE/INSERT.\n> > >\n> > > All existing regression tests are still passing (make check).\n> >\n> > Okay, but you don't add any for your new feature, which is not good.\n> \n> Yes, I have not yet done that. I want first to also add calling\n> triggers for non-concurrent refresh, but I would need a bit help there\n> (what to call, example of maybe code which does something similar\n> already).\n\nWell, REFRESH CONCURRENTLY runs completely different than non-concurrent\nREFRESH. The former updates the existing data by observing the\ndifferences with the previous data; the latter simply re-runs the query\nand overwrites everything. So if you simply enabled triggers on\nnon-concurrent refresh, you'd just see a bunch of inserts into a\nthrowaway data area (a transient relfilenode, we call it), then a swap\nof the MV's relfilenode with the throwaway one. I doubt it'd be useful.\nBut then I'm not clear *why* you would like to do a non-concurrent\nrefresh. Maybe your situation would be best served by forbidding non-\nconcurrent refresh when the MV contains any triggers.\n\nAlternatively, maybe reimplement non-concurrent refresh so that it works\nidentically to concurrent refresh (except with a stronger lock). Not\nsure if this implies any performance penalties.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 26 Dec 2018 00:05:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Tue, Dec 25, 2018 at 7:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> But then I'm not clear *why* you would like to do a non-concurrent\n> refresh.\n\nI mostly wanted to support it for two reasons:\n\n- completeness: maybe we cannot imagine the use case yet, but somebody\nmight in the future\n- getting trigger calls for initial inserts: you can then create a\nmaterialized view without data, attach triggers, and then run a\nregular refresh; this allows you to have only one code path to process\nany (including initial) changes to the view through notifications,\ninstead of handling initial data differently\n\n> Maybe your situation would be best served by forbidding non-\n> concurrent refresh when the MV contains any triggers.\n\nIf this would be acceptable to the community, I could do it. I worry\nthough that one could probably get themselves into a situation where a\nmaterialized view loses all data through some WITH NO DATA operation\nand concurrent refresh is not possible. Currently concurrent refresh\nworks only with data. We could make concurrent refresh also work when\na materialized view has no data easily (it would just insert data and\nnot compute a diff).\n\n> Alternatively, maybe reimplement non-concurrent refresh so that it works\n> identically to concurrent refresh (except with a stronger lock). Not\n> sure if this implies any performance penalties.\n\nAh, yes. I could just do TRUNCATE and INSERT, instead of a heap swap.\nThat would then generate reasonable trigger calls.\n\nAre there any existing benchmarks for such operations I could use to\nsee if there are any performance changes if I change the implementation\nhere? Any guidelines on how to evaluate this?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Tue, 25 Dec 2018 19:16:46 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nI did a bit of benchmarking. It seems my version with UPDATE takes\neven slightly less time (~5%).\n\n\nMitar\n\nOn Mon, Dec 24, 2018 at 6:17 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> I made another version of the patch. This one does UPDATEs for changed\n> row instead of DELETE/INSERT.\n>\n> All existing regression tests are still passing (make check).\n>\n>\n> Mitar\n>\n> On Mon, Dec 24, 2018 at 4:13 PM Mitar <mmitar@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > Thanks for reply!\n> >\n> > On Mon, Dec 24, 2018 at 2:20 PM David Fetter <david@fetter.org> wrote:\n> > > You've got the right mailing list, a description of what you want, and\n> > > a PoC patch. You also got the patch in during the time between\n> > > Commitfests. You're doing great!\n> >\n> > Great!\n> >\n> > One thing I am unclear about is how it is determined if this is a\n> > viable feature to be eventually included. You gave me some suggestions\n> > to improve in my patch (adding tests and so on). Does this mean that\n> > the patch should be fully done before a decision is made?\n> >\n> > Also, the workflow is that I improve things, and resubmit a patch to\n> > the mailing list, for now?\n> >\n> > > > - Currently only insert and remove operations are done on the\n> > > > materialized view. This is because the current logic just removes\n> > > > changed rows and inserts new rows.\n> > >\n> > > What other operations might you want to support?\n> >\n> > Update. So if a row is changing, instead of doing a remove and insert,\n> > what currently is being done, I would prefer an update. Then UPDATE\n> > trigger operation would happen as well. Maybe the INSERT query could\n> > be changed to INSERT ... 
ON CONFLICT UPDATE query (not sure if this\n> > one does UPDATE trigger operation on conflict), and REMOVE changed to\n> > remove just rows which were really removed, but not only updated.\n> >\n> > > As far as you can tell, is this just an efficiency optimization, or\n> > > might it go to correctness of the behavior?\n> >\n> > It is just an optimization. Or maybe even just a surprise. Maybe a\n> > documentation addition could help here. In my use case I would loop\n> > over OLD and NEW REFERENCING TABLE so if they are empty, nothing would\n> > be done. But it is just surprising that DELETE trigger is called even\n> > when no rows are being deleted in the materialized view.\n> >\n> > > I'm not sure I understand the problem being described here. Do you see\n> > > these as useful to separate for some reason?\n> >\n> > So rows which are just updated currently get first DELETE trigger\n> > called and then INSERT. The issue is that if I am observing this\n> > behavior from outside, it makes it unclear when I see DELETE if this\n> > means really that a row has been deleted or it just means that later\n> > on an INSERT would happen. Now I have to wait for an eventual INSERT\n> > to determine that. But how long should I wait? It makes consuming\n> > these notifications tricky.\n> >\n> > If I just blindly respond to those notifications, this could introduce\n> > other problems. For example, if I have a reactive web application it\n> > could mean a visible flicker to the user. Instead of updating rendered\n> > row, I would first delete it and then later on re-insert it.\n> >\n> > > > Non-concurrent refresh does not trigger any trigger. But it seems\n> > > > all data to do so is there (previous table, new table), at least for\n> > > > the statement-level trigger. 
Row-level triggers could also be\n> > > > simulated probably (with TRUNCATE and INSERT triggers).\n> > >\n> > > Would it make more sense to fill in the missing implementations of NEW\n> > > and OLD for per-row triggers instead of adding another hack?\n> >\n> > You lost me here. But I agree, we should implement this fully, without\n> > hacks. I just do not know how exactly.\n> >\n> > Are you saying that we should support only row-level triggers, or that\n> > we should support both statement-level and row-level triggers, but\n> > just make sure we implement this properly? I think that my suggestion\n> > of using TRUNCATE and INSERT triggers is reasonable in the case of\n> > full refresh. This is what happens. If we would want to have\n> > DELETE/UPDATE/INSERT triggers, we would have to compute the diff like\n> > concurrent version has to do, which would defeat the difference\n> > between the two. But yes, all INSERT trigger calls should have NEW\n> > provided.\n> >\n> > So per-statement trigger would have TRUNCATE and INSERT called. And\n> > per-row trigger would have TRUNCATE and per-row INSERTs called.\n> >\n> >\n> > Mitar\n> >\n> > --\n> > http://mitar.tnode.com/\n> > https://twitter.com/mitar_m\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Wed, 26 Dec 2018 00:26:32 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 2018-Dec-25, Mitar wrote:\n\n> On Tue, Dec 25, 2018 at 7:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > But then I'm not clear *why* you would like to do a non-concurrent\n> > refresh.\n> \n> I mostly wanted to support if for two reasons:\n> \n> - completeness: maybe we cannot imagine the use case yet, but somebody\n> might in the future\n\nUnderstood. We don't like features that fail to work in conjunction\nwith other features, so this is a good goal to keep in mind.\n\n> - getting trigger calls for initial inserts: you can then create\n> materialized view without data, attach triggers, and then run a\n> regular refresh; this allows you to have only one code path to process\n> any (including initial) changes to the view through notifications,\n> instead of handling initial data differently\n\nSounds like you could do this by fixing concurrent refresh to also work\nwhen the MV is WITH NO DATA.\n\n> > Maybe your situation would be best served by forbidding non-\n> > concurrent refresh when the MV contains any triggers.\n> \n> If this would be acceptable by the community, I could do it.\n\nI think your chances are 50%/50% that this would appear acceptable.\n\n> > Alternatively, maybe reimplement non-concurrent refresh so that it works\n> > identically to concurrent refresh (except with a stronger lock). Not\n> > sure if this implies any performance penalties.\n> \n> Ah, yes. I could just do TRUNCATE and INSERT, instead of heap swap.\n> That would then generate reasonable trigger calls.\n\nRight.\n\n> Are there any existing benchmarks for such operations I could use to\n> see if there are any performance changes if I change implementation\n> here? Any guidelines how to evaluate this?\n\nNot that I know of. 
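(For illustration, a minimal hand-rolled timing sketch in psql — all object names here are invented, and it assumes a populated base table and a unique index on the view, as REFRESH ... CONCURRENTLY requires:)

```sql
-- Rough sketch, not an established benchmark: churn a small fraction
-- of rows, then time just the refresh step. "base" and "mv" are
-- hypothetical objects.
\timing on
UPDATE base SET val = val + 1 WHERE id % 100 = 0;  -- touch ~1% of rows
REFRESH MATERIALIZED VIEW CONCURRENTLY mv;         -- the timed operation
```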
Typically the developer of a feature comes up with\nappropriate performance tests also, targeting average and worst cases.\n\nIf the performance worsens with the different implementation, one idea\nis to keep both and only use the slow one when triggers are present.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 26 Dec 2018 09:38:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Wed, Dec 26, 2018 at 4:38 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Sounds like you could do this by fixing concurrent refresh to also work\n> when the MV is WITH NO DATA.\n\nYes, I do not think this would be too hard to fix. I could do this nevertheless.\n\n> > Ah, yes. I could just do TRUNCATE and INSERT, instead of heap swap.\n> > That would then generate reasonable trigger calls.\n>\n> Right.\n\nI tested this yesterday and performance is 2x worse than heap\nswap on the benchmark I made. So I do not think this is a viable\napproach.\n\nI am now looking into simply firing TRUNCATE and INSERT triggers\nafter the heap swap, simulating the above. I made AFTER STATEMENT triggers\nand it looks like they are working, only the NEW table is not populated for\nsome reason. Any suggestions? See attached patch.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Wed, 26 Dec 2018 09:07:22 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nI have made an updated version of the patch, added tests and\ndocumentation changes. This is, in my view, now a complete patch. Please\nprovide any feedback or comments you might have for me to improve the\npatch. I will also add it to commitfest.\n\nA summary of the patch: This patch enables adding AFTER triggers (both\nROW and STATEMENT) on materialized views. They are fired when doing\nREFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\nTriggers are not fired if you call REFRESH without CONCURRENTLY. This\nis based on some discussion on the mailing list, because implementing\nit for REFRESH without CONCURRENTLY would require us to add logic for\nfiring triggers where there was none before (a non-concurrent refresh\nis just an efficient heap swap).\n\nTo be able to create a materialized view without data, specify\ntriggers, and REFRESH CONCURRENTLY so that those triggers would be\ncalled also for initial data, I have tested and determined that there\nis no reason why REFRESH CONCURRENTLY could not be run on an\nuninitialized materialized view. So I removed that check and things\nseem to just work, including triggers being called for initial data. I\nthink this makes REFRESH CONCURRENTLY have one less special case, which\nis in general nicer.\n\nI have run tests and all old tests still succeed. I have added more\ntests for the new feature.\n\nI have run a benchmark to evaluate the impact of changing\nrefresh_by_match_merge to do UPDATE instead of DELETE and INSERT for\nrows which were just updated. In fact it seems this improves\nperformance slightly (4% in my benchmark, mean over 10 runs). I guess\nthat this is because it is cheaper to just change one column's values\n(what the benchmark is doing when changing rows) instead of removing and\ninserting the whole row. Because REFRESH MATERIALIZED VIEW\nCONCURRENTLY is meant to be used when not a lot of data has been\nchanged anyway, I find this a pleasantly surprising additional\nimprovement in this patch. 
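To make the intended usage concrete, here is a hedged sketch of what the patch is meant to allow (all object names are invented, and the exact behavior depends on the patch; the trigger syntax is the same as for plain tables, and REFRESH ... CONCURRENTLY requires a unique index on the view):

```sql
-- Sketch of usage under the proposed patch; "base", "mv", "mv_notify"
-- are hypothetical names.
CREATE MATERIALIZED VIEW mv AS SELECT id, val FROM base WITH NO DATA;
CREATE UNIQUE INDEX ON mv (id);  -- required by REFRESH ... CONCURRENTLY

CREATE FUNCTION mv_notify() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('mv_changed', TG_OP);  -- push change events to listeners
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mv_changed
  AFTER INSERT OR UPDATE OR DELETE ON mv
  FOR EACH STATEMENT EXECUTE PROCEDURE mv_notify();

-- With the patch, this fires the trigger, including for the initial data:
REFRESH MATERIALIZED VIEW CONCURRENTLY mv;
```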
I am attaching the benchmark script I have\nused. I compared the time of the final refresh query in the script. (I\nwould love if pgbench could take a custom init script to run before\nthe timed part of the script.)\n\n\nMitar\n\nOn Mon, Dec 24, 2018 at 12:59 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> Based on discussion about observing changes on an open query in a\n> reactive manner (to support reactive web applications) [1], I\n> identified that one critical feature is missing to fully implement\n> discussed design of having reactive queries be represented as\n> materialized views, and changes to these materialized views would then\n> be observed and pushed to the client through LISTEN/NOTIFY.\n>\n> This is my first time contributing to PostgreSQL, so I hope I am\n> starting this process well.\n>\n> I would like to propose that support for AFTER triggers are added to\n> materialized views. I experimented a bit and it seems this is mostly\n> just a question of enabling/exposing them. See attached patch. This\n> enabled me to add trigger to a material view which mostly worked. Here\n> are my findings.\n>\n> Running REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\n> per statement and per row. There are few improvements which could be\n> done:\n>\n> - Currently only insert and remove operations are done on the\n> materialized view. This is because the current logic just removes\n> changed rows and inserts new rows.\n> - In current concurrently refresh logic those insert and remove\n> operations are made even if there are no changes to be done. Which\n> triggers a statement trigger unnecessary. A small improvement could be\n> to skip the statement in that case, but looking at the code this seems\n> maybe tricky because both each of inserts and deletions are done\n> inside one query each.\n> - Current concurrently refresh logic does never do updates on existing\n> rows. 
It would be nicer to have that so that triggers are more aligned\n> with real changes to the data. So current two queries could be changed\n> to three, each doing one of the insert, update, and delete.\n>\n> Non-concurrent refresh does not trigger any trigger. But it seems all\n> data to do so is there (previous table, new table), at least for the\n> statement-level trigger. Row-level triggers could also be simulated\n> probably (with TRUNCATE and INSERT triggers).\n>\n> [1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n>\n>\n> Mitar\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Thu, 27 Dec 2018 23:43:57 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOne more version of the patch with slightly more deterministic tests.\n\n\nMitar\n\nOn Thu, Dec 27, 2018 at 11:43 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> I have made an updated version of the patch, added tests and\n> documentation changes. This is my view now a complete patch. Please\n> provide any feedback or comments you might have for me to improve the\n> patch. I will also add it to commitfest.\n>\n> A summary of the patch: This patch enables adding AFTER triggers (both\n> ROW and STATEMENT) on materialized views. They are fired when doing\n> REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n> Triggers are not fired if you call REFRESH without CONCURRENTLY. This\n> is based on some discussion on the mailing list because implementing\n> it for without CONCURRENTLY would require us to add logic for firing\n> triggers where there was none before (and is just an efficient heap\n> swap).\n>\n> To be able to create a materialized view without data, specify\n> triggers, and REFRESH CONCURRENTLY so that those triggers would be\n> called also for initial data, I have tested and determined that there\n> is no reason why REFRESH CONCURRENTLY could not be run on\n> uninitialized materialized view. So I removed that check and things\n> seem to just work. Including triggers being called for initial data. I\n> think this makes REFRESH CONCURRENTLY have one less special case which\n> is in general nicer.\n>\n> I have run tests and all old tests still succeed. I have added more\n> tests for the new feature.\n>\n> I have run benchmark to evaluate the impact of me changing\n> refresh_by_match_merge to do UPDATE instead of DELETE and INSERT for\n> rows which were just updated. In fact it seems this improves\n> performance slightly (4% in my benchmark, mean over 10 runs). 
I guess\n> that this is because it is cheaper to just change one column's values\n> (what benchmark is doing when changing rows) instead of removing and\n> inserting the whole row. Because REFRESH MATERIALIZED VIEW\n> CONCURRENTLY is meant to be used when not a lot of data has been\n> changed anyway, I find this a pleasantly surprising additional\n> improvement in this patch. I am attaching the benchmark script I have\n> used. I compared the time of the final refresh query in the script. (I\n> would love if pgbench could take a custom init script to run before\n> the timed part of the script.)\n>\n>\n> Mitar\n>\n> On Mon, Dec 24, 2018 at 12:59 PM Mitar <mmitar@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > Based on discussion about observing changes on an open query in a\n> > reactive manner (to support reactive web applications) [1], I\n> > identified that one critical feature is missing to fully implement\n> > discussed design of having reactive queries be represented as\n> > materialized views, and changes to these materialized views would then\n> > be observed and pushed to the client through LISTEN/NOTIFY.\n> >\n> > This is my first time contributing to PostgreSQL, so I hope I am\n> > starting this process well.\n> >\n> > I would like to propose that support for AFTER triggers are added to\n> > materialized views. I experimented a bit and it seems this is mostly\n> > just a question of enabling/exposing them. See attached patch. This\n> > enabled me to add trigger to a material view which mostly worked. Here\n> > are my findings.\n> >\n> > Running REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\n> > per statement and per row. There are few improvements which could be\n> > done:\n> >\n> > - Currently only insert and remove operations are done on the\n> > materialized view. 
This is because the current logic just removes\n> > changed rows and inserts new rows.\n> > - In current concurrently refresh logic those insert and remove\n> > operations are made even if there are no changes to be done. Which\n> > triggers a statement trigger unnecessary. A small improvement could be\n> > to skip the statement in that case, but looking at the code this seems\n> > maybe tricky because both each of inserts and deletions are done\n> > inside one query each.\n> > - Current concurrently refresh logic does never do updates on existing\n> > rows. It would be nicer to have that so that triggers are more aligned\n> > with real changes to the data. So current two queries could be changed\n> > to three, each doing one of the insert, update, and delete.\n> >\n> > Non-concurrent refresh does not trigger any trigger. But it seems all\n> > data to do so is there (previous table, new table), at least for the\n> > statement-level trigger. Row-level triggers could also be simulated\n> > probably (with TRUNCATE and INSERT triggers).\n> >\n> > [1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n> >\n> >\n> > Mitar\n> >\n> > --\n> > http://mitar.tnode.com/\n> > https://twitter.com/mitar_m\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Thu, 27 Dec 2018 23:51:31 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nHm, why in commitfest it does not display the latest patch?\n\nhttps://commitfest.postgresql.org/21/1953/\n\nIt does display correctly the latest e-mail, but not the link to the patch. :-(\n\n\nMitar\n\nOn Thu, Dec 27, 2018 at 11:51 PM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> One more version of the patch with slightly more deterministic tests.\n>\n>\n> Mitar\n>\n> On Thu, Dec 27, 2018 at 11:43 PM Mitar <mmitar@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > I have made an updated version of the patch, added tests and\n> > documentation changes. This is my view now a complete patch. Please\n> > provide any feedback or comments you might have for me to improve the\n> > patch. I will also add it to commitfest.\n> >\n> > A summary of the patch: This patch enables adding AFTER triggers (both\n> > ROW and STATEMENT) on materialized views. They are fired when doing\n> > REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n> > Triggers are not fired if you call REFRESH without CONCURRENTLY. This\n> > is based on some discussion on the mailing list because implementing\n> > it for without CONCURRENTLY would require us to add logic for firing\n> > triggers where there was none before (and is just an efficient heap\n> > swap).\n> >\n> > To be able to create a materialized view without data, specify\n> > triggers, and REFRESH CONCURRENTLY so that those triggers would be\n> > called also for initial data, I have tested and determined that there\n> > is no reason why REFRESH CONCURRENTLY could not be run on\n> > uninitialized materialized view. So I removed that check and things\n> > seem to just work. Including triggers being called for initial data. I\n> > think this makes REFRESH CONCURRENTLY have one less special case which\n> > is in general nicer.\n> >\n> > I have run tests and all old tests still succeed. 
I have added more\n> > tests for the new feature.\n> >\n> > I have run benchmark to evaluate the impact of me changing\n> > refresh_by_match_merge to do UPDATE instead of DELETE and INSERT for\n> > rows which were just updated. In fact it seems this improves\n> > performance slightly (4% in my benchmark, mean over 10 runs). I guess\n> > that this is because it is cheaper to just change one column's values\n> > (what benchmark is doing when changing rows) instead of removing and\n> > inserting the whole row. Because REFRESH MATERIALIZED VIEW\n> > CONCURRENTLY is meant to be used when not a lot of data has been\n> > changed anyway, I find this a pleasantly surprising additional\n> > improvement in this patch. I am attaching the benchmark script I have\n> > used. I compared the time of the final refresh query in the script. (I\n> > would love if pgbench could take a custom init script to run before\n> > the timed part of the script.)\n> >\n> >\n> > Mitar\n> >\n> > On Mon, Dec 24, 2018 at 12:59 PM Mitar <mmitar@gmail.com> wrote:\n> > >\n> > > Hi!\n> > >\n> > > Based on discussion about observing changes on an open query in a\n> > > reactive manner (to support reactive web applications) [1], I\n> > > identified that one critical feature is missing to fully implement\n> > > discussed design of having reactive queries be represented as\n> > > materialized views, and changes to these materialized views would then\n> > > be observed and pushed to the client through LISTEN/NOTIFY.\n> > >\n> > > This is my first time contributing to PostgreSQL, so I hope I am\n> > > starting this process well.\n> > >\n> > > I would like to propose that support for AFTER triggers are added to\n> > > materialized views. I experimented a bit and it seems this is mostly\n> > > just a question of enabling/exposing them. See attached patch. This\n> > > enabled me to add trigger to a material view which mostly worked. 
Here\n> > > are my findings.\n> > >\n> > > Running REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\n> > > per statement and per row. There are few improvements which could be\n> > > done:\n> > >\n> > > - Currently only insert and remove operations are done on the\n> > > materialized view. This is because the current logic just removes\n> > > changed rows and inserts new rows.\n> > > - In current concurrently refresh logic those insert and remove\n> > > operations are made even if there are no changes to be done. Which\n> > > triggers a statement trigger unnecessary. A small improvement could be\n> > > to skip the statement in that case, but looking at the code this seems\n> > > maybe tricky because both each of inserts and deletions are done\n> > > inside one query each.\n> > > - Current concurrently refresh logic does never do updates on existing\n> > > rows. It would be nicer to have that so that triggers are more aligned\n> > > with real changes to the data. So current two queries could be changed\n> > > to three, each doing one of the insert, update, and delete.\n> > >\n> > > Non-concurrent refresh does not trigger any trigger. But it seems all\n> > > data to do so is there (previous table, new table), at least for the\n> > > statement-level trigger. Row-level triggers could also be simulated\n> > > probably (with TRUNCATE and INSERT triggers).\n> > >\n> > > [1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n> > >\n> > >\n> > > Mitar\n> > >\n> > > --\n> > > http://mitar.tnode.com/\n> > > https://twitter.com/mitar_m\n> >\n> >\n> >\n> > --\n> > http://mitar.tnode.com/\n> > https://twitter.com/mitar_m\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 28 Dec 2018 00:11:31 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
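The change benchmarked in the message above — refresh_by_match_merge applying UPDATEs to changed rows instead of DELETE + INSERT — boils down to computing three change sets from the old and new matview contents, keyed by a unique column. A minimal Python sketch of that diff (function and variable names are illustrative, not PostgreSQL internals):

```python
# Toy model of the three-way diff proposed in the thread: compare the old
# matview contents against the freshly computed query result and split the
# difference into insert, update, and delete sets keyed by a unique column.

def diff_matview(old_rows, new_rows):
    """Return (inserts, updates, deletes) as dicts keyed by the unique column."""
    inserts = {k: v for k, v in new_rows.items() if k not in old_rows}
    deletes = {k: v for k, v in old_rows.items() if k not in new_rows}
    updates = {k: new_rows[k]
               for k in old_rows.keys() & new_rows.keys()
               if old_rows[k] != new_rows[k]}
    return inserts, updates, deletes

old = {1: ("a", 10), 2: ("b", 20), 3: ("c", 30)}
new = {2: ("b", 25), 3: ("c", 30), 4: ("d", 40)}
ins, upd, dele = diff_matview(old, new)
```

Rows identical in both states (key 3 above) produce no change at all, which is why the patched refresh fires row-level triggers only for rows that actually changed, and why an UPDATE touching one column can beat deleting and re-inserting the whole row.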
{
"msg_contents": "Hi!\n\nFalse alarm. It just looks like updating patches takes longer than\nupdating e-mails.\n\n\nMitar\n\nOn Fri, Dec 28, 2018 at 12:11 AM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> Hm, why in commitfest it does not display the latest patch?\n>\n> https://commitfest.postgresql.org/21/1953/\n>\n> It does display correctly the latest e-mail, but not the link to the patch. :-(\n>\n>\n> Mitar\n>\n> On Thu, Dec 27, 2018 at 11:51 PM Mitar <mmitar@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > One more version of the patch with slightly more deterministic tests.\n> >\n> >\n> > Mitar\n> >\n> > On Thu, Dec 27, 2018 at 11:43 PM Mitar <mmitar@gmail.com> wrote:\n> > >\n> > > Hi!\n> > >\n> > > I have made an updated version of the patch, added tests and\n> > > documentation changes. This is my view now a complete patch. Please\n> > > provide any feedback or comments you might have for me to improve the\n> > > patch. I will also add it to commitfest.\n> > >\n> > > A summary of the patch: This patch enables adding AFTER triggers (both\n> > > ROW and STATEMENT) on materialized views. They are fired when doing\n> > > REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n> > > Triggers are not fired if you call REFRESH without CONCURRENTLY. This\n> > > is based on some discussion on the mailing list because implementing\n> > > it for without CONCURRENTLY would require us to add logic for firing\n> > > triggers where there was none before (and is just an efficient heap\n> > > swap).\n> > >\n> > > To be able to create a materialized view without data, specify\n> > > triggers, and REFRESH CONCURRENTLY so that those triggers would be\n> > > called also for initial data, I have tested and determined that there\n> > > is no reason why REFRESH CONCURRENTLY could not be run on\n> > > uninitialized materialized view. So I removed that check and things\n> > > seem to just work. Including triggers being called for initial data. 
I\n> > > think this makes REFRESH CONCURRENTLY have one less special case which\n> > > is in general nicer.\n> > >\n> > > I have run tests and all old tests still succeed. I have added more\n> > > tests for the new feature.\n> > >\n> > > I have run benchmark to evaluate the impact of me changing\n> > > refresh_by_match_merge to do UPDATE instead of DELETE and INSERT for\n> > > rows which were just updated. In fact it seems this improves\n> > > performance slightly (4% in my benchmark, mean over 10 runs). I guess\n> > > that this is because it is cheaper to just change one column's values\n> > > (what benchmark is doing when changing rows) instead of removing and\n> > > inserting the whole row. Because REFRESH MATERIALIZED VIEW\n> > > CONCURRENTLY is meant to be used when not a lot of data has been\n> > > changed anyway, I find this a pleasantly surprising additional\n> > > improvement in this patch. I am attaching the benchmark script I have\n> > > used. I compared the time of the final refresh query in the script. (I\n> > > would love if pgbench could take a custom init script to run before\n> > > the timed part of the script.)\n> > >\n> > >\n> > > Mitar\n> > >\n> > > On Mon, Dec 24, 2018 at 12:59 PM Mitar <mmitar@gmail.com> wrote:\n> > > >\n> > > > Hi!\n> > > >\n> > > > Based on discussion about observing changes on an open query in a\n> > > > reactive manner (to support reactive web applications) [1], I\n> > > > identified that one critical feature is missing to fully implement\n> > > > discussed design of having reactive queries be represented as\n> > > > materialized views, and changes to these materialized views would then\n> > > > be observed and pushed to the client through LISTEN/NOTIFY.\n> > > >\n> > > > This is my first time contributing to PostgreSQL, so I hope I am\n> > > > starting this process well.\n> > > >\n> > > > I would like to propose that support for AFTER triggers are added to\n> > > > materialized views. 
I experimented a bit and it seems this is mostly\n> > > > just a question of enabling/exposing them. See attached patch. This\n> > > > enabled me to add trigger to a material view which mostly worked. Here\n> > > > are my findings.\n> > > >\n> > > > Running REFRESH MATERIALIZED VIEW CONCURRENTLY calls triggers. Both\n> > > > per statement and per row. There are few improvements which could be\n> > > > done:\n> > > >\n> > > > - Currently only insert and remove operations are done on the\n> > > > materialized view. This is because the current logic just removes\n> > > > changed rows and inserts new rows.\n> > > > - In current concurrently refresh logic those insert and remove\n> > > > operations are made even if there are no changes to be done. Which\n> > > > triggers a statement trigger unnecessary. A small improvement could be\n> > > > to skip the statement in that case, but looking at the code this seems\n> > > > maybe tricky because both each of inserts and deletions are done\n> > > > inside one query each.\n> > > > - Current concurrently refresh logic does never do updates on existing\n> > > > rows. It would be nicer to have that so that triggers are more aligned\n> > > > with real changes to the data. So current two queries could be changed\n> > > > to three, each doing one of the insert, update, and delete.\n> > > >\n> > > > Non-concurrent refresh does not trigger any trigger. But it seems all\n> > > > data to do so is there (previous table, new table), at least for the\n> > > > statement-level trigger. 
Row-level triggers could also be simulated\n> > > > probably (with TRUNCATE and INSERT triggers).\n> > > >\n> > > > [1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n> > > >\n> > > >\n> > > > Mitar\n> > > >\n> > > > --\n> > > > http://mitar.tnode.com/\n> > > > https://twitter.com/mitar_m\n> > >\n> > >\n> > >\n> > > --\n> > > http://mitar.tnode.com/\n> > > https://twitter.com/mitar_m\n> >\n> >\n> >\n> > --\n> > http://mitar.tnode.com/\n> > https://twitter.com/mitar_m\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 28 Dec 2018 00:48:46 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 28/12/2018 08:43, Mitar wrote:\n> A summary of the patch: This patch enables adding AFTER triggers (both\n> ROW and STATEMENT) on materialized views. They are fired when doing\n> REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n\nWhat bothers me about this patch is that it subtly changes what a\ntrigger means. It currently means, say, INSERT was executed on this\ntable. You are expanding that to mean, a row was inserted into this\ntable -- somehow.\n\nTriggers should generally refer to user-facing commands. Could you not\nmake a trigger on REFRESH itself?\n\n> Triggers are not fired if you call REFRESH without CONCURRENTLY. This\n> is based on some discussion on the mailing list because implementing\n> it for without CONCURRENTLY would require us to add logic for firing\n> triggers where there was none before (and is just an efficient heap\n> swap).\n\nThis is also a problem, because it would allow bypassing the trigger\naccidentally.\n\nMoreover, consider that there could be updatable materialized views,\njust like there are updatable normal views. And there could be triggers\non those updatable materialized views. Those would look similar but\nwork quite differently from what you are proposing here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 12:23:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nI am new to contributing to PostgreSQL and this is my first time\nhaving patches in a commit fest, so I am not sure about the details of\nthe process here, but I assume that replying to and discussing the\npatch during this period is one of the activities, so I am replying to\nthe comment. If I should wait or something like that, please advise.\n\nOn Fri, Jan 4, 2019 at 3:23 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> > A summary of the patch: This patch enables adding AFTER triggers (both\n> > ROW and STATEMENT) on materialized views. They are fired when doing\n> > REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n>\n> What bothers me about this patch is that it subtly changes what a\n> trigger means. It currently means, say, INSERT was executed on this\n> table. You are expanding that to mean, a row was inserted into this\n> table -- somehow.\n\nAren't almost all statements these days generated by some sort of\nautomatic logic, which generates those INSERTs so that you then get\ntriggers on them? I am not sure where this big difference in your\nview is coming from. Triggers are not defined as \"a user-made INSERT\nwas executed on this table\". I think they have always been defined as\n\"an INSERT modified this table\", no matter where that insert came from\n(from a user, from some other trigger, from a backup process). I mean,\nthis is the beauty of declarative programming. You define it once and\nyou do not care who triggers it.\n\nMaterialized views are anyway just a built-in implementation of tables\n+ triggers to rerun the query. You could reconstruct them manually.\nAnd why would triggers not be called if the table is being modified\nthrough INSERTs? So if PostgreSQL has such a feature, why make it\nlimited and artificially less powerful? It is literally not possible\nto have triggers only because there is an \"if trigger on a\nmaterialized view, throw an exception\" check.\n\n> Triggers should generally refer to user-facing commands\n\nSo triggers on table A are not run when some other trigger from table\nB decides to insert data into table A? Not true. I think triggers\nhave never cared where an INSERT came from or who issued it. They\njust fire: from a user, from another trigger, or from some built-in\nPostgreSQL procedure called REFRESH.\n\n> Could you not make a trigger on REFRESH itself?\n\nIf you mean whether I could simulate this somehow before or after I\ncall REFRESH, then not really. I would not have access to the previous\nstate of the table to compute the diff anymore. Moreover, I would have\nto recompute the diff again, when REFRESH already did it once.\n\nI could implement materialized views myself using regular tables and\ntriggers, and then have triggers after changes on that table. But this\nsounds very sad.\n\nOr are you saying that we should introduce a whole new type of\ntrigger, a REFRESH trigger, which would be valid only on materialized\nviews and would get OLD and NEW relations for the previous and new\nstate? I think this could be an option, but it would require much more\nwork and more changes to the API. Is this what the community would\nprefer?\n\n> This is also a problem, because it would allow bypassing the trigger\n> accidentally.\n\nSure, this is why it is useful to explain that CONCURRENT REFRESH uses\nINSERT/UPDATE/DELETE and this is why you get triggers, while REFRESH\ndoes not (but it is faster). I explained this in the documentation.\n\nBut yes, this is a downside. I checked the idea of calling row-level\ntriggers after a regular REFRESH, but it seems it would introduce a\nlot of overhead and special handling. I tried implementing it as\nTRUNCATE + INSERTs instead of a heap swap and it is 2x slower.\n\n> Moreover, consider that there could be updatable materialized views,\n> just like there are updatable normal views. And there could be triggers\n> on those updatable materialized views. Those would look similar but\n> work quite differently from what you are proposing here.\n\nHm, not really. I would claim they would behave exactly the same. An\nAFTER trigger on INSERT on a materialized view would fire for rows\nwhich have changed, whether through the user updating the materialized\nview directly or through CONCURRENT REFRESH inserting a row. In both\ncases the same trigger would run because the materialized view had a\nrow inserted. Pretty nice.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 4 Jan 2019 14:10:16 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
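The alternative floated in the message above — a separate REFRESH trigger type, valid only on materialized views, which receives the previous and new state as OLD and NEW relations — can be sketched as a callback that gets both table states at once. This is purely a hypothetical interface to illustrate the idea; no such trigger type exists in PostgreSQL:

```python
# Hypothetical sketch of a statement-level "AFTER REFRESH" trigger that
# receives the previous and the new contents of the materialized view as
# transition relations. Illustrative only; not an existing PostgreSQL
# trigger type, and the class/attribute names are invented for this sketch.

class MatView:
    def __init__(self, rows):
        self.rows = rows            # current contents, keyed by unique column
        self.refresh_triggers = []  # callbacks taking (old_table, new_table)

    def refresh(self, new_rows):
        old_rows = self.rows
        self.rows = dict(new_rows)  # stand-in for the heap swap
        for fn in self.refresh_triggers:
            fn(old_rows, self.rows)  # trigger sees both states, no re-diffing

mv = MatView({1: "a", 2: "b"})
seen = []
mv.refresh_triggers.append(lambda old, new: seen.append((old, new)))
mv.refresh({2: "b", 3: "c"})
```

The point of the interface is that the trigger body gets the previous state for free and can compute whatever diff it wants, which is exactly what is lost once a non-concurrent REFRESH has already swapped the heap.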
{
"msg_contents": "Dear,\n\nYou can try https://github.com/ntqvinh/PgMvIncrementalUpdate to generate\ntriggers in C for incremental updates of matviews.\n\nFor asynchronous updates, the tool does generate the triggers for\ncollecting updated/inserted/deleted rows and then the codes for doing\nincremental updating as well.\n\nTks and best regards,\n\nVinh\n\n\n\nOn Sat, Jan 5, 2019 at 5:10 AM Mitar <mmitar@gmail.com> wrote:\n\n> Hi!\n>\n> I am new to contributing to PostgreSQL and this is my first time\n> having patches in commit fest, so I am not sure about details of the\n> process here, but I assume that replying and discuss the patch during\n> this period is one of the actives, so I am replying to the comment. If\n> I should wait or something like that, please advise.\n>\n> On Fri, Jan 4, 2019 at 3:23 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > > A summary of the patch: This patch enables adding AFTER triggers (both\n> > > ROW and STATEMENT) on materialized views. They are fired when doing\n> > > REFRESH MATERIALIZED VIEW CONCURRENTLY for rows which have changed.\n> >\n> > What bothers me about this patch is that it subtly changes what a\n> > trigger means. It currently means, say, INSERT was executed on this\n> > table. You are expanding that to mean, a row was inserted into this\n> > table -- somehow.\n>\n> Aren't almost all statements these days generated by some sort of\n> automatic logic? Which generates those INSERTs and then you get\n> triggers on them? I am not sure where is this big difference in your\n> view coming from? Triggers are not defined as \"user-made INSERT was\n> executed on this table\". I think it has always been defined as \"INSERT\n> modified this table\", no matter where this insert came from (from\n> user, from some other trigger, by backup process). I mean, this is the\n> beauty of declarative programming. 
You define it once and you do not\n> care who triggers it.\n>\n> Materialized views are anyway just built-in implementation of tables +\n> triggers to rerun the query. You could reconstruct them manually. And\n> why would not triggers be called if tables is being modified through\n> INSERTs? So if PostgreSQL has such a feature, why make it limited and\n> artificially make it less powerful? It is literally not possible to\n> have triggers only because there is \"if trigger on a materialized\n> view, throw an exception\".\n>\n> > Triggers should generally refer to user-facing commands\n>\n> So triggers on table A are not run when some other trigger from table\n> B decides to insert data into table A? Not true. I think triggers have\n> never cared who and where an INSERT came from. They just trigger. From\n> user, from another trigger, or from some built-in PostgreSQL procedure\n> called REFRESH.\n>\n> > Could you not make a trigger on REFRESH itself?\n>\n> If you mean if I could simulate this somehow before or after I call\n> REFRESH, then not really. I would not have access to previous state of\n> the table to compute the diff anymore. Moreover, I would have to\n> recompute the diff again, when REFRESH already did it once.\n>\n> I could implement materialized views myself using regular tables and\n> triggers. And then have triggers after change on that table. But this\n> sounds very sad.\n>\n> Or, are you saying that we should introduce a whole new type of of\n> trigger, REFRESH trigger, which would be valid only on materialized\n> views, and get OLD and NEW relations for previous and old state? I\n> think this could be an option, but it would require much more work,\n> and more changes to API. 
Is this what community would prefer?\n>\n> > This is also a problem, because it would allow bypassing the trigger\n> > accidentally.\n>\n> Sure, this is why it is useful to explain that CONCURRENT REFRESH uses\n> INSERT/UPDATE/DELETE and this is why you get triggers, and REFRESH\n> does not (but it is faster). I explained this in documentation.\n>\n> But yes, this is downside. I checked the idea of calling row-level\n> triggers after regular REFRESH, but it seems it will introduce a lot\n> of overhead and special handling. I tried implementing it as TRUNCATE\n> + INSERTS instead of heap swap and it is 2x slower.\n>\n> > Moreover, consider that there could be updatable materialized views,\n> > just like there are updatable normal views. And there could be triggers\n> > on those updatable materialized views. Those would look similar but\n> > work quite differently from what you are proposing here.\n>\n> Hm, not really. I would claim they would behave exactly the same.\n> AFTER trigger on INSERT on a materialized view would trigger for rows\n> which have changed through user updating materialized view directly,\n> or by calling CONCURRENT REFRESH which inserted a row. In both cases\n> the same trigger would run because materialized view had a row\n> inserted. 
Pretty nice.\n>\n>\n> Mitar\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n>\n>\n",
"msg_date": "Sat, 5 Jan 2019 17:53:07 +0700",
"msg_from": "=?UTF-8?B?Tmd1eeG7hW4gVHLhuqduIFF14buRYyBWaW5o?= <ntquocvinh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
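The asynchronous mode described above — triggers first collect the updated/inserted/deleted base rows, and a separate pass later applies them to the matview incrementally — can be modeled in a few lines. This is a toy sketch of the general technique, not code taken from PgMvIncrementalUpdate:

```python
# Toy model of asynchronous incremental matview maintenance: base-table
# triggers append change records to a log, and a later pass replays the
# log against the materialized rows instead of recomputing the whole
# query. All names here are illustrative.

change_log = []

def on_base_change(op, key, row=None):
    """Stand-in for a trigger collecting inserted/updated/deleted rows."""
    change_log.append((op, key, row))

def apply_incremental(matview):
    """Replay collected changes in order, draining the log."""
    while change_log:
        op, key, row = change_log.pop(0)
        if op == "delete":
            matview.pop(key, None)
        else:                       # insert or update
            matview[key] = row
    return matview

mv = {1: "a", 2: "b"}
on_base_change("update", 2, "b2")
on_base_change("insert", 3, "c")
on_base_change("delete", 1)
apply_incremental(mv)
```

Decoupling collection from application is what makes the refresh incremental: only the logged rows are touched, which is also why such a scheme would pair well with row-level triggers that report exactly those changes to a client.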
{
"msg_contents": "Hi!\n\nOn Sat, Jan 5, 2019 at 2:53 AM Nguyễn Trần Quốc Vinh\n<ntquocvinh@gmail.com> wrote:\n> You can try https://github.com/ntqvinh/PgMvIncrementalUpdate to generate triggers in C for incremental updates of matviews.\n>\n> For asynchronous updates, the tool does generate the triggers for collecting updated/inserted/deleted rows and then the codes for doing incremental updating as well.\n\nThank you for sharing this. This looks interesting, but I could not\ntest it myself (I am not using Windows), so I just read through the\ncode.\n\nHaving better updating of materialized views using an incremental\napproach would really benefit my use case as well. Triggers added\nthrough my patch here on the materialized view itself could then\ncommunicate those changes to the client. If I understand things\ncorrectly, this IVM would improve how quickly we can do refreshes, and\nit would also allow us to call refresh on a materialized view for\nevery change on the source tables, knowing exactly what we have to\nupdate in the materialized view. Really cool. I also see that there\nwas recently more discussion about IVM on the mailing list. [1]\n\n[1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n[2] https://www.postgresql.org/message-id/flat/FC784A9F-F599-4DCC-A45D-DBF6FA582D30@QQdd.eu\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Sat, 5 Jan 2019 13:57:39 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 1/5/19 11:57 PM, Mitar wrote:\n> \n> Having better updating of materialized views using incremental\n> approach would really benefit my use case as well. Then triggers being\n> added through my patch here on materialized view itself could\n> communicate those changes which were done to the client. If I\n> understand things correctly, this IVM would benefit the speed of how\n> quickly we can do refreshes, and also if would allow that we call\n> refresh on a materialized view for every change on the source tables,\n> knowing exactly what we have to update in the materialized view.\n> Really cool. I also see that there was recently more discussion about\n> IVM on the mailing list. [1]\n\nThere doesn't seem to be consensus on whether or not we want this patch. \n Peter has issues with the way it works and Andres [1] thinks it should \nbe pushed to PG13 or possibly rejected.\n\nI'll push this to PG13 for now.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/20190216054526.zss2cufdxfeudr4i%40alap3.anarazel.de\n\n",
"msg_date": "Thu, 7 Mar 2019 10:13:01 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Thu, Mar 7, 2019 at 12:13 AM David Steele <david@pgmasters.net> wrote:\n> There doesn't seem to be consensus on whether or not we want this patch.\n> Peter has issues with the way it works and Andres [1] thinks it should\n> be pushed to PG13 or possibly rejected.\n>\n> I'll push this to PG13 for now.\n\nSorry, I am new to the PostgreSQL development process. So this has\nbeen pushed to the release planned for 2020, maybe (if at all), and is\nno longer in consideration for PG12 to be released this year? From my\nvery inexperienced eye this looks like a very distant push. What is\nexpected to happen during the year that would make it clearer whether\nthis is something which has a chance of going in, and/or what should\nbe improved, if improving is even an option? I worry that nothing will\nhappen for a year, we will all forget about this, and then we will all\nbe back at square one.\n\nI must say that I do not really see a reason why this would not be\nincluded. I mean, materialized views are really just sugar on top of\nhaving a table you refresh with a stored query, and if that table can\nhave triggers, why not also a materialized view.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 14 Mar 2019 01:05:59 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 3/14/19 12:05 PM, Mitar wrote:\n> Hi!\n> \n> On Thu, Mar 7, 2019 at 12:13 AM David Steele <david@pgmasters.net> wrote:\n>> There doesn't seem to be consensus on whether or not we want this patch.\n>> Peter has issues with the way it works and Andres [1] thinks it should\n>> be pushed to PG13 or possibly rejected.\n>>\n>> I'll push this to PG13 for now.\n> \n> Sorry, I am new to PostgreSQL development process. So this has been\n> pushed for maybe (if at all) release planned for 2020 and is not\n> anymore in consideration for PG12 to be released this year? From my\n> very inexperienced eye this looks like a very far push. What is\n> expected to happen in the year which would make it clearer if this is\n> something which has a chance of going and/or what should be improved,\n> if improving is even an option? I worry that nothing will happen for a\n> year and we will all forget about this and then we will be all to\n> square zero.\n> \n> I must say that i do not really see a reason why this would not be\n> included. I mean, materialized views are really just a sugar on top of\n> having a table you refresh with a stored query, and if that table can\n> have triggers, why not also a materialized view.\n\nThe reason is that you have not gotten any committer support for this \npatch or attracted significant review, that I can see. 
On the contrary, \nthree committers have expressed doubts about all or some of the patch \nand it doesn't seem to me that their issues have been addressed.\n\nThis is also a relatively new patch which makes large changes -- we \ngenerally like to get those in earlier than the second-to-last CF.\n\nI can only spend so much time looking at each patch, so Peter, Álvaro, \nor Andres are welcome to jump in and let me know if I have it wrong.\n\nWhat you need to be doing for PG13 is very specifically addressing \ncommitter concerns and gathering a consensus that the behavior of this \npatch is the way to go.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Fri, 15 Mar 2019 13:46:39 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "Hi!\n\nOn Fri, Mar 15, 2019 at 2:46 AM David Steele <david@pgmasters.net> wrote:\n> The reason is that you have not gotten any committer support for this\n> patch or attracted significant review, that I can see. On the contrary,\n> three committers have expressed doubts about all or some of the patch\n> and it doesn't seem to me that their issues have been addressed.\n\nTo my understanding, many comments were about early versions of the\npatch and were either addressed or answered with an explanation of why\nthe proposed changes do not work (using TRUNCATE/INSERT instead of\nswapping heaps is slower). If I missed anything and have not addressed\nit, please point this out.\n\nThe only pending/unaddressed comment is about the philosophical\nquestion of what it means to be a trigger. There it seems we simply\ndisagree with the reviewer and I do not know how to address that. I\njust see this as a very pragmatic feature which provides capabilities\nyou would have if you were not using the PostgreSQL abstraction. If\nyou can use those features without the abstraction, why not also with\nit?\n\nSo in my view this looks more like a lack of review feedback on the\nlatest version of the patch. I really do not know how to ask for more\nfeedback or to move the philosophical discussion further. I thought\nthat the commit fest is in fact exactly the place to motivate and\ncollect such feedback instead of waiting for it in limbo.\n\n> What you need to be doing for PG13 is very specifically addressing\n> committer concerns and gathering a consensus that the behavior of this\n> patch is the way to go.\n\nTo my understanding, the current patch addresses all concerns raised\nby reviewers on older versions of the patch, or explains why the\nproposals cannot work out, modulo the question of \"does this change\nwhat a trigger is\".\n\nMoreover, it improves performance of CONCURRENT REFRESH by about 5%\nbased on my tests, because of the split into INSERT/UPDATE/DELETE\ninstead of TRUNCATE/INSERT, when measuring across a mixed set of\nqueries which include just UPDATEs to source tables.\n\nThank you to everyone who is involved in this process for your time; I\ndo appreciate it. I am just trying to explain that I am a bit at a\nloss on the concrete next steps I could take here.\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 15 Mar 2019 09:15:16 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On 3/15/19 8:15 PM, Mitar wrote:\n> \n> The only pending/unaddressed comment is about the philosophical\n> question of what it means to be a trigger. There it seems we simply\n> disagree with the reviewer and I do not know how to address that. I\n> just see this as a very pragmatical feature which provides features\n> you would have if you would not use PostgreSQL abstraction. If you can\n> use features then, why not also with the abstraction?\n\nThis seems to be a pretty big deal to me. When the reviewer is also a \ncommitter I think you need to give serious thought to their objections.\n\n> So in my view this looks more like a lack of review feedback on the\n> latest version of the patch. I really do not know how to ask for more\n> feedback or to move the philosophical discussion further. I thought\n> that commit fest is in fact exactly a place to motivate and collect\n> such feedback instead of waiting for it in the limbo.\n\nYes, but sometimes these things take time.\n\n>> What you need to be doing for PG13 is very specifically addressing\n>> committer concerns and gathering a consensus that the behavior of this\n>> patch is the way to go.\n> \n> To my understanding the current patch addresses all concerns made by\n> reviewers on older versions of the patch, or explains why proposals\n> cannot work out, modulo the question of \"does this change what trigger\n> is\".\n\nStill a pretty important question...\n\n> Thank you everyone who is involved in this process for your time, I do\n> appreciate. I am just trying to explain that I am a bit at loss on\n> concrete next steps I could take here.\n\nThis is the last commitfest, so committers and reviewers are focused on \nwhat is most likely to make it into PG12. Your patch does not seem to \nbe attracting the attention it needs to make it into this release.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 22:48:06 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On Tue, Dec 25, 2018 at 10:05 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Well, REFRESH CONCURRENTLY runs completely different than non-concurrent\n> REFRESH. The former updates the existing data by observing the\n> differences with the previous data; the latter simply re-runs the query\n> and overwrites everything. So if you simply enabled triggers on\n> non-concurrent refresh, you'd just see a bunch of inserts into a\n> throwaway data area (a transient relfilenode, we call it), then a swap\n> of the MV's relfilenode with the throwaway one. I doubt it'd be useful.\n> But then I'm not clear *why* you would like to do a non-concurrent\n> refresh. Maybe your situation would be best served by forbidding non-\n> concurrent refresh when the MV contains any triggers.\n>\n> Alternatively, maybe reimplement non-concurrent refresh so that it works\n> identically to concurrent refresh (except with a stronger lock). Not\n> sure if this implies any performance penalties.\n\nSorry to jump in late, but all of this sounds very strange to me.\nIt's possible for either concurrent or non-concurrent refresh to be\nfaster, depending on the circumstances; for example, if a concurrent\nrefresh would end up deleting all the rows and inserting them again, I\nthink that could be slower than just blowing all the data away and\nstarting over. So disabling non-concurrent refresh sounds like a bad\nidea. For the same reason, reimplementing it to work like a\nconcurrent refresh also sounds like a bad idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Mar 2019 15:36:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On Fri, Jan 4, 2019 at 6:23 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> What bothers me about this patch is that it subtly changes what a\n> trigger means. It currently means, say, INSERT was executed on this\n> table. You are expanding that to mean, a row was inserted into this\n> table -- somehow.\n\nYeah. The fact that a concurrent refresh currently does DELETE+INSERT\nrather than UPDATE is currently an implementation detail. If you\nallow users to hook up triggers to the inserts, then suddenly it's no\nlonger an implementation detail: it is a user-visible behavior that\ncan't be changed in the future without breaking backward\ncompatibility.\n\n> Triggers should generally refer to user-facing commands. Could you not\n> make a trigger on REFRESH itself?\n\nI'm not sure that would help with the use case... but that seems like\nsomething to think about, especially if it could use the transition\ntable machinery somehow.\n\n> > Triggers are not fired if you call REFRESH without CONCURRENTLY. This\n> > is based on some discussion on the mailing list because implementing\n> > it for without CONCURRENTLY would require us to add logic for firing\n> > triggers where there was none before (and is just an efficient heap\n> > swap).\n>\n> This is also a problem, because it would allow bypassing the trigger\n> accidentally.\n>\n> Moreover, consider that there could be updatable materialized views,\n> just like there are updatable normal views. And there could be triggers\n> on those updatable materialized views. Those would look similar but\n> work quite differently from what you are proposing here.\n\nYeah.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Mar 2019 15:41:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 03:41:08PM -0400, Robert Haas wrote:\n> Yeah. The fact that a concurrent refresh currently does DELETE+INSERT\n> rather than UPDATE is currently an implementation detail. If you\n> allow users to hook up triggers to the inserts, then suddenly it's no\n> longer an implementation detail: it is a user-visible behavior that\n> can't be changed in the future without breaking backward\n> compatibility.\n\nWe are visibly going nowhere for this thread for v12, so I have marked\nthe proposal as returned with feedback.\n--\nMichael",
"msg_date": "Mon, 1 Apr 2019 16:06:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: triggers on materialized views"
}
] |
[
{
"msg_contents": "Hi!\n\nSometimes materialized views are used to cache a complex query on\nwhich a client works. But after client disconnects, the materialized\nview could be deleted. Regular VIEWs and TABLEs both have support for\ntemporary versions which get automatically dropped at the end of the\nsession. It seems it is easy to add the same thing for materialized\nviews as well. See attached PoC patch.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Tue, 25 Dec 2018 01:51:32 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature: temporary materialized views"
},
{
"msg_contents": "On 2018-Dec-25, Mitar wrote:\n\n> Sometimes materialized views are used to cache a complex query on\n> which a client works. But after client disconnects, the materialized\n> view could be deleted. Regular VIEWs and TABLEs both have support for\n> temporary versions which get automatically dropped at the end of the\n> session. It seems it is easy to add the same thing for materialized\n> views as well. See attached PoC patch.\n\nI think MVs that are dropped at session end are a sensible feature. I\nprobably wouldn't go as far as allowing ON COMMIT actions, though, so\nthis much effort is the right amount.\n\nI think if you really want to do this you should just use OptTemp, and\ndelete OptNoLog. Of course, you need to add tests and patch the docs.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 26 Dec 2018 14:00:22 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Wed, Dec 26, 2018 at 9:00 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think MVs that are dropped at session end are a sensible feature.\n\nThanks.\n\n> I probably wouldn't go as far as allowing ON COMMIT actions, though\n\nI agree. I do not see much usefulness for it. The only use case I can\nthink of would be to support REFRESH as an ON COMMIT action. That\nwould be maybe useful in the MV setting. After every transaction in my\nsession, REFRESH this materialized view.\n\nBut personally I do not have an use case for that, so I will leave it\nto somebody else. :-)\n\n> I think if you really want to do this you should just use OptTemp, and\n> delete OptNoLog.\n\nSounds good.\n\nOptTemp seems to have a misleading warning in some cases when it is\nnot used on tables though:\n\n\"GLOBAL is deprecated in temporary table creation\"\n\nShould we change this language to something else? \"GLOBAL is\ndeprecated in temporary object creation\"? Based on grammar it seems to\nbe used for tables, views, sequences, and soon materialized views.\n\n> Of course, you need to add tests and patch the docs.\n\nSure.\n\n[1] https://www.postgresql.org/message-id/29165.1545842105%40sss.pgh.pa.us\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Wed, 26 Dec 2018 09:19:54 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Wed, 26 Dec 2018 at 18:20, Mitar <mmitar@gmail.com> wrote:\n\n> Hi!\n>\n> On Wed, Dec 26, 2018 at 9:00 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > I think MVs that are dropped at session end are a sensible feature.\n>\n> Thanks.\n>\n> > I probably wouldn't go as far as allowing ON COMMIT actions, though\n>\n> I agree. I do not see much usefulness for it. The only use case I can\n> think of would be to support REFRESH as an ON COMMIT action. That\n> would be maybe useful in the MV setting. After every transaction in my\n> session, REFRESH this materialized view.\n>\n> But personally I do not have an use case for that, so I will leave it\n> to somebody else. :-)\n>\n> > I think if you really want to do this you should just use OptTemp, and\n> > delete OptNoLog.\n>\n> Sounds good.\n>\n> OptTemp seems to have a misleading warning in some cases when it is\n> not used on tables though:\n>\n> \"GLOBAL is deprecated in temporary table creation\"\n>\n> Should we change this language to something else? \"GLOBAL is\n> deprecated in temporary object creation\"? Based on grammar it seems to\n> be used for tables, views, sequences, and soon materialized views.\n>\n\nThis message is wrong - probably better \"GLOBAL temporary tables are not\nsupported\"\n\nRegards\n\nPavel\n\n>\n> > Of course, you need to add tests and patch the docs.\n>\n> Sure.\n>\n> [1] https://www.postgresql.org/message-id/29165.1545842105%40sss.pgh.pa.us\n>\n>\n> Mitar\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m",
"msg_date": "Wed, 26 Dec 2018 18:23:26 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2018-Dec-26, Mitar wrote:\n\n> OptTemp seems to have a misleading warning in some cases when it is\n> not used on tables though:\n> \n> \"GLOBAL is deprecated in temporary table creation\"\n> \n> Should we change this language to something else? \"GLOBAL is\n> deprecated in temporary object creation\"? Based on grammar it seems to\n> be used for tables, views, sequences, and soon materialized views.\n\nI'd just leave those messages alone.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 26 Dec 2018 18:10:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nI made a new version of the patch. I added tests and changes to the\ndocs and made sure various other aspects of this change for as well. I\nthink this now makes temporary materialized views fully implemented\nand that in my view patch is complete. If there is anything else to\nadd, please let me know, I do not yet have much experience\ncontributing here. What are next steps? Do I just wait for it to be\nincluded into Commitfest? Do I add it there myself?\n\n\nMitar\n\nOn Wed, Dec 26, 2018 at 9:00 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2018-Dec-25, Mitar wrote:\n>\n> > Sometimes materialized views are used to cache a complex query on\n> > which a client works. But after client disconnects, the materialized\n> > view could be deleted. Regular VIEWs and TABLEs both have support for\n> > temporary versions which get automatically dropped at the end of the\n> > session. It seems it is easy to add the same thing for materialized\n> > views as well. See attached PoC patch.\n>\n> I think MVs that are dropped at session end are a sensible feature. I\n> probably wouldn't go as far as allowing ON COMMIT actions, though, so\n> this much effort is the right amount.\n>\n> I think if you really want to do this you should just use OptTemp, and\n> delete OptNoLog. Of course, you need to add tests and patch the docs.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Thu, 27 Dec 2018 01:01:48 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2018-Dec-27, Mitar wrote:\n\n> Hi!\n> \n> I made a new version of the patch. I added tests and changes to the\n> docs and made sure various other aspects of this change for as well. I\n> think this now makes temporary materialized views fully implemented\n> and that in my view patch is complete. If there is anything else to\n> add, please let me know, I do not yet have much experience\n> contributing here. What are next steps? Do I just wait for it to be\n> included into Commitfest? Do I add it there myself?\n\nYes, please add it yourself to the commitfest.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 27 Dec 2018 10:15:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nThanks, I did it.\n\nI am attaching a new version of the patch with few more lines added to tests.\n\nI noticed that there is no good summary of the latest patch, so let me\nmake it here:\n\nSo the latest version of the patch adds an option for \"temporary\"\nmaterialized views. Such materialized views are automatically deleted\nat the end of the session. Moreover, it also modifies the materialized\nview creation logic so that now if any of the source relations are\ntemporary, the final materialized view is temporary as well. This now\nmakes materialized views more aligned with regular views.\n\nTests test that this really works, that refreshing of such views work,\nand that refreshing can also work from a trigger.\n\n\nMitar\n\nOn Thu, Dec 27, 2018 at 5:15 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2018-Dec-27, Mitar wrote:\n>\n> > Hi!\n> >\n> > I made a new version of the patch. I added tests and changes to the\n> > docs and made sure various other aspects of this change for as well. I\n> > think this now makes temporary materialized views fully implemented\n> > and that in my view patch is complete. If there is anything else to\n> > add, please let me know, I do not yet have much experience\n> > contributing here. What are next steps? Do I just wait for it to be\n> > included into Commitfest? Do I add it there myself?\n>\n> Yes, please add it yourself to the commitfest.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Thu, 27 Dec 2018 10:35:44 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOne more version of the patch with more deterministic tests.\n\n\nMitar\n\nOn Thu, Dec 27, 2018 at 10:35 AM Mitar <mmitar@gmail.com> wrote:\n>\n> Hi!\n>\n> Thanks, I did it.\n>\n> I am attaching a new version of the patch with few more lines added to tests.\n>\n> I noticed that there is no good summary of the latest patch, so let me\n> make it here:\n>\n> So the latest version of the patch adds an option for \"temporary\"\n> materialized views. Such materialized views are automatically deleted\n> at the end of the session. Moreover, it also modifies the materialized\n> view creation logic so that now if any of the source relations are\n> temporary, the final materialized view is temporary as well. This now\n> makes materialized views more aligned with regular views.\n>\n> Tests test that this really works, that refreshing of such views work,\n> and that refreshing can also work from a trigger.\n>\n>\n> Mitar\n>\n> On Thu, Dec 27, 2018 at 5:15 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2018-Dec-27, Mitar wrote:\n> >\n> > > Hi!\n> > >\n> > > I made a new version of the patch. I added tests and changes to the\n> > > docs and made sure various other aspects of this change for as well. I\n> > > think this now makes temporary materialized views fully implemented\n> > > and that in my view patch is complete. If there is anything else to\n> > > add, please let me know, I do not yet have much experience\n> > > contributing here. What are next steps? Do I just wait for it to be\n> > > included into Commitfest? Do I add it there myself?\n> >\n> > Yes, please add it yourself to the commitfest.\n> >\n> > --\n> > Álvaro Herrera https://www.2ndQuadrant.com/\n> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Thu, 27 Dec 2018 23:48:10 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 12/28/18 8:48 AM, Mitar wrote:\n> One more version of the patch with\n> more deterministic tests.\n\nHere is a quick initial review. I will do more testing later.\n\nIt applies, builds and passes the tests.\n\nThe feature seems useful and also improves consistency: if we have \ntemporary tables and temporary views, there should logically also be \ntemporary materialized views.\n\nAs for you leaving out ON COMMIT, I feel that it is ok since of the \nexisting options only really DROP makes any sense (you cannot truncate \nmaterialized views) and since temporary views do not have any ON COMMIT \nsupport.\n\n= Comments on the code\n\nThe changes to the code are small and look mostly correct.\n\nIn create_ctas_internal() why do you copy the relation even when you do \nnot modify it?\n\nIs it really ok to just remove SECURITY_RESTRICTED_OPERATION from \nExecCreateTableAs()? I feel it is there for a good reason and that we \npreferably want to reduce the duration of SECURITY_RESTRICTED_OPERATION \nto only include when we actually execute the query.\n\nAndreas\n\n",
"msg_date": "Fri, 11 Jan 2019 17:51:32 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Fri, Jan 11, 2019 at 8:51 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> Here is a quick initial review. I will do more testing later.\n\nThanks for doing the review!\n\n> In create_ctas_internal() why do you copy the relation even when you do\n> not modify it?\n\nI was modelling this after code in view.c [1]. I can move copy into the \"if\".\n\n> Is it really ok to just remove SECURITY_RESTRICTED_OPERATION from\n> ExecCreateTableAs()? I feel it is there for a good reason and that we\n> preferably want to reduce the duration of SECURITY_RESTRICTED_OPERATION\n> to only include when we actually execute the query.\n\nThe comment there said that this is not really necessary for security:\n\n\"This is not necessary for security, but this keeps the behavior\nsimilar to REFRESH MATERIALIZED VIEW. Otherwise, one could create a\nmaterialized view not possible to refresh.\"\n\nBased on my experimentation, this is required to be able to use\ntemporary materialized views, but it does mean one has to pay\nattention from where one can refresh. For example, you cannot refresh\nfrom outside of the current session, because temporary object is not\navailable there. I have not seen any other example where refresh would\nnot be possible.\n\nThis is why I felt comfortable removing this. Also, no test failed\nafter removing this.\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/commands/view.c#L554\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 11 Jan 2019 11:47:54 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/11/19 8:47 PM, Mitar wrote:\n>> In create_ctas_internal() why do you copy the relation even when you do\n>> not modify it?\n> \n> I was modelling this after code in view.c [1]. I can move copy into the \"if\".\n\nMakes sense.\n\n>> Is it really ok to just remove SECURITY_RESTRICTED_OPERATION from\n>> ExecCreateTableAs()? I feel it is there for a good reason and that we\n>> preferably want to reduce the duration of SECURITY_RESTRICTED_OPERATION\n>> to only include when we actually execute the query.\n> \n> The comment there said that this is not really necessary for security:\n> \n> \"This is not necessary for security, but this keeps the behavior\n> similar to REFRESH MATERIALIZED VIEW. Otherwise, one could create a\n> materialized view not possible to refresh.\"\n> \n> Based on my experimentation, this is required to be able to use\n> temporary materialized views, but it does mean one has to pay\n> attention from where one can refresh. For example, you cannot refresh\n> from outside of the current session, because temporary object is not\n> available there. I have not seen any other example where refresh would\n> not be possible.\n> \n> This is why I felt comfortable removing this. Also, no test failed\n> after removing this.\n\nHm, I am still not convinced just removing it is a good idea. Sure, it \nis not a security issue but usability is also important. The question is \nhow much this worsens usability and how much extra work it would be to \nkeep the restriction.\n\nBtw, if we are going to remove SECURITY_RESTRICTED_OPERATION we should \nremove more code. There is no reason to save and reset the bitmask if we \ndo not alter it.\n\nAndreas\n\n",
"msg_date": "Thu, 17 Jan 2019 16:52:04 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 1/11/19 8:47 PM, Mitar wrote:\n>>> Is it really ok to just remove SECURITY_RESTRICTED_OPERATION from\n>>> ExecCreateTableAs()?\n\n>> The comment there said that this is not really necessary for security:\n>> \"This is not necessary for security, but this keeps the behavior\n>> similar to REFRESH MATERIALIZED VIEW. Otherwise, one could create a\n>> materialized view not possible to refresh.\"\n\n> Hm, I am still not convinced just removing it is a good idea. Sure, it \n> is not a security issue but usability is also important.\n\nIndeed. I don't buy the argument that this should work differently\nfor temp views. The fact that they're only accessible in the current\nsession is no excuse for that: security considerations still matter,\nbecause you can have different privilege contexts within a single\nsession (consider SECURITY DEFINER functions etc).\n\nWhat is the stumbling block to just leaving that alone?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 10:57:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/17/19 4:57 PM, Tom Lane wrote:\n> Andreas Karlsson <andreas@proxel.se> writes:\n>> On 1/11/19 8:47 PM, Mitar wrote:\n>>>> Is it really ok to just remove SECURITY_RESTRICTED_OPERATION from\n>>>> ExecCreateTableAs()?\n> \n>>> The comment there said that this is not really necessary for security:\n>>> \"This is not necessary for security, but this keeps the behavior\n>>> similar to REFRESH MATERIALIZED VIEW. Otherwise, one could create a\n>>> materialized view not possible to refresh.\"\n> \n>> Hm, I am still not convinced just removing it is a good idea. Sure, it\n>> is not a security issue but usability is also important.\n> \n> Indeed. I don't buy the argument that this should work differently\n> for temp views. The fact that they're only accessible in the current\n> session is no excuse for that: security considerations still matter,\n> because you can have different privilege contexts within a single\n> session (consider SECURITY DEFINER functions etc).\n> \n> What is the stumbling block to just leaving that alone?\n\nI think the issue Mitar ran into is that the temporary materialized view \nis created in the rStartup callback of the receiver which happens after \nSECURITY_RESTRICTED_OPERATION is set in ExecCreateTableAs(), so the \ncreation of the view itself is denied.\n\nFrom a cursory glance it looks like it would be possible to move the \nsetting of SECURITY_RESTRICTED_OPERATION to inside the rStartup \ncallback, other than that the code for resetting the security context \nmight get a bit ugly. Do you see any flaws with that solution?\n\nAndreas\n\n",
"msg_date": "Thu, 17 Jan 2019 18:53:58 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 1/17/19 4:57 PM, Tom Lane wrote:\n>> What is the stumbling block to just leaving that alone?\n\n> I think the issue Mitar ran into is that the temporary materialized view \n> is created in the rStartup callback of the receiver which happens after \n> SECURITY_RESTRICTED_OPERATION is set in ExecCreateTableAs(), so the \n> creation of the view itself is denied.\n\nHm.\n\n> From a cursory glance it looks like it would be possible to move the \n> setting of SECURITY_RESTRICTED_OPERATION to inside the rStartup \n> callback, other than that the code for resetting the security context \n> might get a bit ugly. Do you see any flaws with that solution?\n\nDon't think that works: the point here is to restrict what can happen\nduring planning/execution of the view query, so letting planning and\nquery startup happen first is no good.\n\nCreating the view object inside the rStartup callback is itself pretty\nmuch of a kluge; you'd expect that to happen earlier. I think the\nreason it was done that way was it was easier to find out the view's\ncolumn set there, but I'm sure we can find another way --- doing the\nobject creation more like a regular view does it is the obvious approach.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 14:31:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/11/19 8:47 PM, Mitar wrote:\n> Thanks for doing the review!\n\nI did some functional testing today and everything seems to work as \nexpected other than that the tab completion for psql seems to be missing.\n\nAndreas\n\n\n",
"msg_date": "Thu, 17 Jan 2019 23:40:52 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Thu, Jan 17, 2019 at 9:53 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> > What is the stumbling block to just leaving that alone?\n>\n> I think the issue Mitar ran into is that the temporary materialized view\n> is created in the rStartup callback of the receiver which happens after\n> SECURITY_RESTRICTED_OPERATION is set in ExecCreateTableAs(), so the\n> creation of the view itself is denied.\n\nYes, the error without that change is:\n\nERROR: cannot create temporary table within security-restricted operation\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 17 Jan 2019 17:50:32 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Thu, Jan 17, 2019 at 2:40 PM Andreas Karlsson <andreas@proxel.se> wrote:\n> I did some functional testing today and everything seems to work as\n> expected other than that the tab completion for psql seems to be missing.\n\nThanks. I can add those as soon as I figure how. :-)\n\nSo what are next steps here besides tab autocompletion? It is OK to\nremove that security check? If I understand correctly, there are some\ngeneral refactoring of code Tom is proposing, but I am not sure if I\nam able to do that/understand that.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 17 Jan 2019 17:53:08 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/18/19 2:53 AM, Mitar wrote:\n> On Thu, Jan 17, 2019 at 2:40 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>> I did some functional testing today and everything seems to work as\n>> expected other than that the tab completion for psql seems to be missing.\n> \n> Thanks. I can add those as soon as I figure how. :-)\n\nThese rules are usually pretty easy to add. Just take a look in \nsrc/bin/psql/tab-complete.c to see how it is usually done.\n\n> So what are next steps here besides tab autocompletion? It is OK to\n> remove that security check? If I understand correctly, there are some\n> general refactoring of code Tom is proposing, but I am not sure if I\n> am able to do that/understand that.\n\nNo, I do not think it is ok to remove the check without a compelling \nargument for why the usability we gain from this check is not worth it. \nAdditionally I agree with Tom that the way the code is written currently \nis confusing so this refactoring would most likely be a win even without \nyour patch.\n\nI might take a stab at refactoring this myself this weekend. Hopefully \nit is not too involved.\n\nAndreas\n\n",
"msg_date": "Fri, 18 Jan 2019 16:18:03 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Fri, Jan 18, 2019 at 7:18 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> These rules are usually pretty easy to add. Just take a look in\n> src/bin/psql/tab-complete.c to see how it is usually done.\n\nThanks. I have added the auto-complete and attached a new patch.\n\n> I might take a stab at refactoring this myself this weekend. Hopefully\n> it is not too involved.\n\nThat would be great! I can afterwards update the patch accordingly.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Fri, 18 Jan 2019 11:32:02 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/17/19 8:31 PM, Tom Lane wrote:\n> Creating the view object inside the rStartup callback is itself pretty\n> much of a kluge; you'd expect that to happen earlier. I think the\n> reason it was done that way was it was easier to find out the view's\n> column set there, but I'm sure we can find another way --- doing the\n> object creation more like a regular view does it is the obvious approach.\n\nHere is a stab at refactoring this so the object creation does not \nhappen in a callback. I am not that fond of the new API, but given how \ndifferent all the various callers of CreateIntoRelDestReceiver() are I \nhad no better idea.\n\nThe idea behind the patch is to always create the empty \ntable/materialized view before executing the query and do it in one more \nunified code path, and then later populate it unless NO DATA was \nspecified. I feel this makes the code easier to follow.\n\nThis patch introduces a small behavioral change, as can be seen from the \ndiff in the test suite, where since the object creation is moved earlier \nthe CTAS query snapshot will for example see the newly created table. \nThe new behavior seems more correct to me, but maybe I am missing some \nunintended consequences.\n\nAndreas",
"msg_date": "Mon, 21 Jan 2019 03:31:55 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/18/19 8:32 PM, Mitar wrote:\n> On Fri, Jan 18, 2019 at 7:18 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>> These rules are usually pretty easy to add. Just take a look in\n>> src/bin/psql/tab-complete.c to see how it is usually done.\n> \n> Thanks. I have added the auto-complete and attached a new patch.\n\nHm, I do not think we should complete UNLOGGED MATERIALIZED VIEW even \nthough it is valid syntax. If you try to create one you will just get an \nerror. I am leaning towards removing the existing completion for this, \nbecause I do not see the point of completing to useless but technically \nvalid syntax.\n\nThis is the one I think we should probably remove:\n\n \telse if (TailMatches(\"CREATE\", \"UNLOGGED\"))\n \t\tCOMPLETE_WITH(\"TABLE\", \"MATERIALIZED VIEW\");\n\n>> I might take a stab at refactoring this myself this weekend. Hopefully\n>> it is not too involved.\n> \n> That would be great! I can afterwards update the patch accordingly.\n\nI have submitted a first shot at this. Let's see what others think of my \npatch.\n\nAndreas\n\n",
"msg_date": "Mon, 21 Jan 2019 03:46:05 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 1/21/19 3:31 AM, Andreas Karlsson wrote:\n> Here is a a stab at refactoring this so the object creation does not \n> happen in a callback.\n\nRebased my patch on top of Andres's pluggable storage patches. Plus some \nminor style changes.\n\nAndreas",
"msg_date": "Tue, 22 Jan 2019 03:10:17 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 03:10:17AM +0100, Andreas Karlsson wrote:\n> On 1/21/19 3:31 AM, Andreas Karlsson wrote:\n> > Here is a a stab at refactoring this so the object creation does not\n> > happen in a callback.\n> \n> Rebased my patch on top of Andres's pluggable storage patches. Plus some\n> minor style changes.\n\nTaking a note to look at this refactoring bit, which is different from\nthe temp matview part. Moved to next CF for now.\n--\nMichael",
"msg_date": "Mon, 4 Feb 2019 15:09:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2/4/19 7:09 AM, Michael Paquier wrote:\n> On Tue, Jan 22, 2019 at 03:10:17AM +0100, Andreas Karlsson wrote:\n>> On 1/21/19 3:31 AM, Andreas Karlsson wrote:\n>>> Here is a a stab at refactoring this so the object creation does not\n>>> happen in a callback.\n>>\n>> Rebased my patch on top of Andres's pluggable storage patches. Plus some\n>> minor style changes.\n> \n> Taking a note to look at this refactoring bit, which is different from\n> the temp matview part. Moved to next CF for now.\n\nShould I submit it as a separate CF entry or is it easiest if my \nrefactoring and Mi Tar's feature are reviewed together?\n\nAndreas\n\n",
"msg_date": "Mon, 4 Feb 2019 16:10:09 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Mon, Feb 04, 2019 at 04:10:09PM +0100, Andreas Karlsson wrote:\n> Should I submit it as a separate CF entry or is it easiest if my refactoring\n> and Mi Tar's feature are reviewed together?\n\nThe refactoring patch is talking about changing the way objects are\ncreated within a CTAS, which is quite different from what is proposed\non this thread, so in order to attract the correct audience a separate\nthread with another CF entry seems more appropriate.\n\nNow... You have on this thread all the audience which already worked\non 874fe3a. And I am just looking at this patch, evaluating the\nbehavior change this is introducing. Still I would recommend a\nseparate thread as others may want to comment on that particular\npoint.\n--\nMichael",
"msg_date": "Tue, 5 Feb 2019 12:59:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi Andreas,\n\nOn Tue, Feb 05, 2019 at 12:59:12PM +0900, Michael Paquier wrote:\n> Now... You have on this thread all the audience which already worked\n> on 874fe3a. And I am just looking at this patch, evaluating the\n> behavior change this is introducing. Still I would recommend a\n> separate thread as others may want to comment on that particular\n> point.\n\nSo I have read through your patch, and there are a couple of things\nwhich I think we could simplify more. Here are my notes:\n1) We could remove the into clause from DR_intorel, which is used for\ntwo things:\n- Determine the relkind of the relation created. However the relation\ngets created before entering in the executor, and we already know its\nOID, so we also know its relkind.\n- skipData is visibly always false.\nWe may want to keep skipData to have an assertion at the beginning of\ninforel_startup for sanity purposes though.\n2) DefineIntoRelForDestReceiver is just a wrapper for\ncreate_ctas_nodata, so we had better just merge both of them and\nexpose directly the routine creating the relation definition, so the\nnew interface is a bit awkward.\n3) The part about the regression diff is well... Expected... We may\nwant a comment about that. We could consider as well adding a\nregression test inspired from REINDEX SCHEMA to show that the CTAS is\ncreated before the data is actually filled in.\n--\nMichael",
"msg_date": "Tue, 5 Feb 2019 20:36:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2/5/19 12:36 PM, Michael Paquier wrote:> - skipData is visibly always \nfalse.\n > We may want to keep skipData to have an assertion at the beginning of\n > inforel_startup for sanity purposes though.\nThis is not true in this version of the patch. The following two cases \nwould crash if we add such an assertion:\n\nEXPLAIN ANALYZE CREATE TABLE foo AS SELECT 1 WITH NO DATA;\n\nand\n\nPREPARE s AS SELECT 1;\nCREATE TABLE bar AS EXECUTE s WITH NO DATA;\n\nsince they both still run the setup and tear down steps of the executor.\n\nI guess that I could fix that for the second case as soon as I \nunderstand how much of the portal stuff can be skipped in \nExecuteQuery(). But I am not sure what we should do with EXPLAIN ANALYZE \n... NO DATA. It feels like a contraction to me. Should we just raise an \nerror? Or should we try to preserve the current behavior where you see \nsomething like the below?\n\n QUERY PLAN\n-----------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4) (never executed)\n Planning Time: 0.040 ms\n Execution Time: 0.002 ms\n(3 rows)\n\n > 2) DefineIntoRelForDestReceiver is just a wrapper for\n > create_ctas_nodata, so we had better just merge both of them and\n > expose directly the routine creating the relation definition, so the\n > new interface is a bit awkward.\nAgreed, the API is awakward as it is now but it was the least awkward \none I managed to design. But I think if we fix the issue above then it \nmight be possible to create a less awkward API.\n\n > 3) The part about the regression diff is well... Expected... We may\n > want a comment about that. We could consider as well adding a\n > regression test inspired from REINDEX SCHEMA to show that the CTAS is\n > created before the data is actually filled in.\nYeah, that sounds like a good idea.\n\nAndreas\n\n",
"msg_date": "Tue, 5 Feb 2019 18:56:00 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2/5/19 6:56 PM, Andreas Karlsson wrote:\n> On 2/5/19 12:36 PM, Michael Paquier wrote:> - skipData is visibly always \n> false.\n> > We may want to keep skipData to have an assertion at the beginning of\n> > inforel_startup for sanity purposes though.\n> This is not true in this version of the patch. The following two cases \n> would crash if we add such an assertion:\n> \n> EXPLAIN ANALYZE CREATE TABLE foo AS SELECT 1 WITH NO DATA;\n> \n> and\n> \n> PREPARE s AS SELECT 1;\n> CREATE TABLE bar AS EXECUTE s WITH NO DATA;\n> \n> since they both still run the setup and tear down steps of the executor.\n> \n> I guess that I could fix that for the second case as soon as I \n> understand how much of the portal stuff can be skipped in \n> ExecuteQuery(). But I am not sure what we should do with EXPLAIN ANALYZE \n> ... NO DATA. It feels like a contraction to me. Should we just raise an \n> error? Or should we try to preserve the current behavior where you see \n> something like the below?\n\nIn general I do not like how EXPLAIN CREATE TABLE AS uses a separate \ncode path than CREATE TABLE AS, because we get weird but mostly harmless \nedge cases like the below and that I do not think that EXPLAIN ANALYZE \nCREATE MATERIALIZED VIEW sets the security context properly.\n\nI am not sure if any of this is worth fixing, but it certainly \ncontributed to why I thought that it was hard to design a good API.\n\npostgres=# EXPLAIN ANALYZE CREATE TABLE IF NOT EXISTS bar AS SELECT 1;\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.002 \nrows=1 loops=1)\n Planning Time: 0.030 ms\n Execution Time: 12.245 ms\n(3 rows)\n\nTime: 18.223 ms\npostgres=# EXPLAIN ANALYZE CREATE TABLE IF NOT EXISTS bar AS SELECT 1;\nERROR: relation \"bar\" already exists\nTime: 2.129 ms\n\nAndreas\n\n",
"msg_date": "Wed, 6 Feb 2019 03:11:33 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Tue, Feb 05, 2019 at 06:56:00PM +0100, Andreas Karlsson wrote:\n> I guess that I could fix that for the second case as soon as I understand\n> how much of the portal stuff can be skipped in ExecuteQuery(). But I am not\n> sure what we should do with EXPLAIN ANALYZE ... NO DATA. It feels like a\n> contraction to me. Should we just raise an error? Or should we try to\n> preserve the current behavior where you see something like the\n> below?\n\nThis grammar is documented, so it seems to me that it would be just\nannoying for users relying on it to throw an error for a pattern that\nsimply worked, particularly if a driver layer is using it.\n\nThe issue this outlines is that we have a gap in the tests for a\nsubset of the grammar, which is not a good thing.\n\nIf I put Assert(!into->skipData) at the beginning of intorel_startup()\nthen the main regression test suite is able to pass, both on HEAD and\nwith your patch. There is one test for CTAS EXECUTE in prepare.sql,\nso let's add a pattern with WITH NO DATA there for the first pattern.\nAdding a second test with EXPLAIN SELECT INTO into select_into.sql\nalso looks like a good thing.\n\nAttached is a patch to do that and close the gap. With that, we will\nbe able to check for inconsistencies better when working on the\nfollow-up patches. What do you think?\n--\nMichael",
"msg_date": "Wed, 6 Feb 2019 18:18:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2/6/19 10:18 AM, Michael Paquier wrote:\n> Attached is a patch to do that and close the gap. With that, we will\n> be able to check for inconsistencies better when working on the\n> follow-up patches. What do you think?\n\nI approve. I was when testing this stuff that I found the IF NOT EXISTS \nissue.\n\nAndreas\n\n",
"msg_date": "Wed, 6 Feb 2019 17:05:56 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Wed, Feb 06, 2019 at 05:05:56PM +0100, Andreas Karlsson wrote:\n> On 2/6/19 10:18 AM, Michael Paquier wrote:\n>> Attached is a patch to do that and close the gap. With that, we will\n>> be able to check for inconsistencies better when working on the\n>> follow-up patches. What do you think?\n> \n> I approve. I was when testing this stuff that I found the IF NOT EXISTS\n> issue.\n\nThanks, I have committed those extra tests to close the gap.\n--\nMichael",
"msg_date": "Thu, 7 Feb 2019 09:23:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 2/7/19 2:23 AM, Michael Paquier wrote:\n> On Wed, Feb 06, 2019 at 05:05:56PM +0100, Andreas Karlsson wrote:\n>> On 2/6/19 10:18 AM, Michael Paquier wrote:\n>>> Attached is a patch to do that and close the gap. With that, we will\n>>> be able to check for inconsistencies better when working on the\n>>> follow-up patches. What do you think?\n>>\n>> I approve. I was when testing this stuff that I found the IF NOT EXISTS\n>> issue.\n> \n> Thanks, I have committed those extra tests to close the gap.\n\nI think a new patch is required here so I have marked this Waiting on \nAuthor. cfbot is certainly not happy and anyone trying to review is \ngoing to have hard time trying to determine what to review.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Thu, 7 Mar 2019 10:45:04 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Feature: temporary materialized views"
},
{
"msg_contents": "On Thu, Mar 07, 2019 at 10:45:04AM +0200, David Steele wrote:\n> I think a new patch is required here so I have marked this Waiting on\n> Author. cfbot is certainly not happy and anyone trying to review is going\n> to have hard time trying to determine what to review.\n\nI would recommend to mark this patch as returned with feedback as we\nalready know that we need to rethink a bit harder the way relations\nare created in CTAS, not to mention that the case of EXPLAIN CTAS IF\nNOT EXISTS is not correctly handled. This requires more than three of\nwork which is what remains until the end of this CF, so v12 is not a\nsane target.\n--\nMichael",
"msg_date": "Fri, 8 Mar 2019 10:38:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 3/8/19 3:38 AM, Michael Paquier wrote:\n> On Thu, Mar 07, 2019 at 10:45:04AM +0200, David Steele wrote:\n>> I think a new patch is required here so I have marked this Waiting on\n>> Author. cfbot is certainly not happy and anyone trying to review is going\n>> to have hard time trying to determine what to review.\n> \n> I would recommend to mark this patch as returned with feedback as we\n> already know that we need to rethink a bit harder the way relations\n> are created in CTAS, not to mention that the case of EXPLAIN CTAS IF\n> NOT EXISTS is not correctly handled. This requires more than three of\n> work which is what remains until the end of this CF, so v12 is not a\n> sane target.\n\nOK, I will do that on March 13th if there are no arguments to the contrary.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Sun, 10 Mar 2019 08:55:08 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 3/8/19 2:38 AM, Michael Paquier wrote:\n> On Thu, Mar 07, 2019 at 10:45:04AM +0200, David Steele wrote:\n>> I think a new patch is required here so I have marked this Waiting on\n>> Author. cfbot is certainly not happy and anyone trying to review is going\n>> to have hard time trying to determine what to review.\n> \n> I would recommend to mark this patch as returned with feedback as we\n> already know that we need to rethink a bit harder the way relations\n> are created in CTAS, not to mention that the case of EXPLAIN CTAS IF\n> NOT EXISTS is not correctly handled. This requires more than three of\n> work which is what remains until the end of this CF, so v12 is not a\n> sane target.\n\nAgreed. Even if I could find the time to write a patch for this there is \nno way it would make it into v12.\n\nAndreas\n\n\n",
"msg_date": "Mon, 11 Mar 2019 12:06:36 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nI just want to make sure if I understand correctly. But my initial\nproposal/patch is currently waiting first for all patches for the\nrefactoring to happen, which are done by amazing Andreas? This sounds\ngood to me and I see a lot of progress/work has been done and I am OK\nwith waiting. Please ping me explicitly if there will be anything I am\nexpected to do at any point in time.\n\nAnd just to make sure, these current patches are doing just\nrefactoring but are not also introducing temporary materialized views\nyet? Or is that also done in patches made by Andreas?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 14 Mar 2019 01:13:51 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 3/14/19 9:13 AM, Mitar wrote:> I just want to make sure if I \nunderstand correctly. But my initial\n> proposal/patch is currently waiting first for all patches for the\n> refactoring to happen, which are done by amazing Andreas? This sounds\n> good to me and I see a lot of progress/work has been done and I am OK\n> with waiting. Please ping me explicitly if there will be anything I am\n> expected to do at any point in time.\n> \n> And just to make sure, these current patches are doing just\n> refactoring but are not also introducing temporary materialized views\n> yet? Or is that also done in patches made by Andreas?\n\nYeah, your patch is sadly stuck behind the refactoring, and the \nrefactoring proved to be harder to do than I initially thought. The \ndifferent code paths for executing CREATE MATERIALIZED VIEW are so \ndifferent that it is hard to find a good common interface.\n\nSo there is unfortunately little you can do here other than wait for me \nor someone else to do the refactoring as I cannot see your patch getting \naccepted without keeping the existing restrictions on side effects for \nCREATE MATERIALIZED VIEW.\n\nAndreas\n\n",
"msg_date": "Thu, 14 Mar 2019 15:56:26 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "Hi!\n\nOn Thu, Mar 14, 2019 at 7:56 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> Yeah, your patch is sadly stuck behind the refactoring, and the\n> refactoring proved to be harder to do than I initially thought. The\n> different code paths for executing CREATE MATERIALIZED VIEW are so\n> different that it is hard to find a good common interface.\n>\n> So there is unfortunately little you can do here other than wait for me\n> or someone else to do the refactoring as I cannot see your patch getting\n> accepted without keeping the existing restrictions on side effects for\n> CREATE MATERIALIZED VIEW.\n\nSounds good. I will wait.\n\nThanks.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 14 Mar 2019 16:19:40 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: temporary materialized views"
},
{
"msg_contents": "On 3/15/19 3:19 AM, Mitar wrote:\n> \n> On Thu, Mar 14, 2019 at 7:56 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>> Yeah, your patch is sadly stuck behind the refactoring, and the\n>> refactoring proved to be harder to do than I initially thought. The\n>> different code paths for executing CREATE MATERIALIZED VIEW are so\n>> different that it is hard to find a good common interface.\n>>\n>> So there is unfortunately little you can do here other than wait for me\n>> or someone else to do the refactoring as I cannot see your patch getting\n>> accepted without keeping the existing restrictions on side effects for\n>> CREATE MATERIALIZED VIEW.\n> \n> Sounds good. I will wait.\n\nThis patch has been marked as Returned with Feedback since it is not \nclear when the refactoring it depends on will be done.\n\nYou can submit to a future commitfest when you are able to produce a new \npatch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Fri, 15 Mar 2019 13:51:04 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature: temporary materialized views"
}
] |
[
{
"msg_contents": "Is there any way that one of the Postgres Background process may go down?\nmeaning the process getting stopped?\n\nFor example, can the wal sender process alone stop working? If it does so,\nwhich part of the logs I must check to proceed further.\n\n\n\n-----\n--\nThanks,\nRajan.\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 25 Dec 2018 03:20:23 -0700 (MST)",
"msg_from": "rajan <vgmonnet@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is there any way that one of the Postgres Background/Utility\n process may go down?"
}
] |
[
{
"msg_contents": "Fix failure to check for open() or fsync() failures.\n\nWhile it seems OK to not be concerned about fsync() failure for a\npre-existing signal file, it's not OK to not even check for open()\nfailure. This at least causes complaints from static analyzers,\nand I think on some platforms passing -1 to fsync() or close() might\ntrigger assertion-type failures. Also add (void) casts to make clear\nthat we're ignoring fsync's result intentionally.\n\nOversights in commit 2dedf4d9a, noted by Coverity.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/8528e3d849a896f8711c56fb41eae56f8c986729\n\nModified Files\n--------------\nsrc/backend/access/transam/xlog.c | 17 ++++++++++++-----\n1 file changed, 12 insertions(+), 5 deletions(-)\n\n",
"msg_date": "Wed, 26 Dec 2018 21:08:23 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 09:08:23PM +0000, Tom Lane wrote:\n> Fix failure to check for open() or fsync() failures.\n> \n> While it seems OK to not be concerned about fsync() failure for a\n> pre-existing signal file, it's not OK to not even check for open()\n> failure. This at least causes complaints from static analyzers,\n> and I think on some platforms passing -1 to fsync() or close() might\n> trigger assertion-type failures. Also add (void) casts to make clear\n> that we're ignoring fsync's result intentionally.\n> \n> Oversights in commit 2dedf4d9a, noted by Coverity.\n\n fd = BasicOpenFilePerm(STANDBY_SIGNAL_FILE, O_RDWR | PG_BINARY | get_sync_bit(sync_method),\n S_IRUSR | S_IWUSR);\n- pg_fsync(fd);\n- close(fd);\n+ if (fd >= 0)\n+ {\n+ (void) pg_fsync(fd);\n+ close(fd);\n+ }\n\nWouldn't it be more simple to remove stat() and just call\nBasicOpenFilePerm, complaining with FATAL about any failures,\nincluding EACCES, on the way? The code is racy as designed, even if\nthat's not a big deal for recovery purposes.\n--\nMichael",
"msg_date": "Thu, 27 Dec 2018 07:43:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Dec 26, 2018 at 09:08:23PM +0000, Tom Lane wrote:\n>> Fix failure to check for open() or fsync() failures.\n>> \n>> While it seems OK to not be concerned about fsync() failure for a\n>> pre-existing signal file, it's not OK to not even check for open()\n>> failure. This at least causes complaints from static analyzers,\n\n> Wouldn't it be more simple to remove stat() and just call\n> BasicOpenFilePerm, complaining with FATAL about any failures,\n> including EACCES, on the way? The code is racy as designed, even if\n> that's not a big deal for recovery purposes.\n\nIt appears to me that the code is intentionally not worrying about\nfsync failure, so it seems wrong for it to FATAL out if it's unable\nto open the file to fsync it. And it surely shouldn't do so if the\nfile isn't there.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Dec 2018 17:55:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 05:55:36PM -0500, Tom Lane wrote:\n> It appears to me that the code is intentionally not worrying about\n> fsync failure, so it seems wrong for it to FATAL out if it's unable\n> to open the file to fsync it. And it surely shouldn't do so if the\n> file isn't there.\n\nMy point is a bit different though: it seems to me that we could just\ncall BasicOpenFilePerm() and remove the stat() to do exactly the same\nthings, simplifying the code.\n--\nMichael",
"msg_date": "Thu, 27 Dec 2018 10:09:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Dec 26, 2018 at 05:55:36PM -0500, Tom Lane wrote:\n>> It appears to me that the code is intentionally not worrying about\n>> fsync failure, so it seems wrong for it to FATAL out if it's unable\n>> to open the file to fsync it. And it surely shouldn't do so if the\n>> file isn't there.\n\n> My point is a bit different though: it seems to me that we could just\n> call BasicOpenFilePerm() and remove the stat() to do exactly the same\n> things, simplifying the code.\n\nOh, I see. Yeah, if we're ignoring errors anyway, the stat calls\nseem redundant.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Dec 2018 20:35:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "On Wed, Dec 26, 2018 at 08:35:22PM -0500, Tom Lane wrote:\n> Oh, I see. Yeah, if we're ignoring errors anyway, the stat calls\n> seem redundant.\n\nFor this one, I think that we could simplify as attached (this causes\nopen() to fail additionally because of the sync flags, but that's not\nreally worth worrying). Thoughts?\n--\nMichael",
"msg_date": "Thu, 27 Dec 2018 11:10:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Dec 26, 2018 at 08:35:22PM -0500, Tom Lane wrote:\n>> Oh, I see. Yeah, if we're ignoring errors anyway, the stat calls\n>> seem redundant.\n\n> For this one, I think that we could simplify as attached (this causes\n> open() to fail additionally because of the sync flags, but that's not\n> really worth worrying). Thoughts?\n\nActually, now that I think a bit more, this isn't a good idea. We want\nstandby_signal_file_found (resp. recovery_signal_file_found) to get set\nif the file exists, even if we're unable to fsync it for some reason.\nA counterexample to the proposed patch is that a signal file that's\nread-only to the server will get ignored, which it should not be.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Dec 2018 21:30:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix failure to check for open() or fsync() failures."
}
] |
[
{
"msg_contents": "Hi all.\n\n-----Information-----\nMembers of Fujitsu Japan may not be able to reply in the term below.\nTERM: 29th December 2018 ~ 6th January 2019\n\nNOTE:\nMembers of Fujitsu Japan are those whose mail domain is \"@jp.fujitsu.com\"\n\nBest regards,\n---------------------\nRyohei Nagaura\n\n\n\n",
"msg_date": "Thu, 27 Dec 2018 01:46:34 +0000",
"msg_from": "\"Nagaura, Ryohei\" <nagaura.ryohei@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[information] Winter vacation"
}
] |
[
{
"msg_contents": "# PostgreSQL partition tables use more private memory\n\nHi, there is a process private memory issue about partition tables in our production environment. We're not sure if it's a bug or Pg just works in this way. \n\n- when dml operated on partition tables, the pg process will occupy more memory(I saw this in top command result, RES-SHR) than normal tables, it could be 10x more;\n\n- it related to partition and column quantity, the more partitions and columns the partition table has, the more memory the related process occupies;\n\n- it also related table quantity refered to dml statments which executed in the process, two tables could double the memory, valgrind log will show you the result;\n\n- pg process will not release this memory until the process is disconnected, unfortunately our applications use connection pool that will not release connections.\n\nOur PostgreSQL database server which encounters this problem has about 48GB memory, there are more than one hundred pg processes in this server, and each process comsumes couple hundreds MB of private memory. It frequently runs out of the physical memory and swap recently.\n\nI did a test using valgrind in test environment to repeat this scene, the following is the steps. \n\n## 1. env\n\n- RHEL 6.3 X86_64\n- PostgreSQL 10.2\n\n## 2. 
non-partition table sql\n\n drop table tb_part_test cascade;\n \n create table tb_part_test\n (\n STATIS_DATE int NOT NULL, \n ORDER_NUM int DEFAULT NULL,\n CMMDTY_CODE varchar(40) default '',\n RECEIVE_PLANT varchar(4) DEFAULT '',\n RECEIVE_LOCAT varchar(10) DEFAULT '',\n SUPPLIER_CODE varchar(20) DEFAULT '',\n RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',\n \n c1 varchar(2) DEFAULT '',\n c2 varchar(2) DEFAULT '',\n c3 varchar(2) DEFAULT '',\n c4 varchar(2) DEFAULT '',\n c5 varchar(2) DEFAULT '',\n c6 varchar(2) DEFAULT '',\n c7 varchar(2) DEFAULT '',\n c8 varchar(2) DEFAULT '',\n c9 varchar(2) DEFAULT '',\n c10 varchar(2) DEFAULT '',\n c11 varchar(2) DEFAULT '',\n c12 varchar(2) DEFAULT '',\n c13 varchar(2) DEFAULT '',\n c14 varchar(2) DEFAULT '',\n c15 varchar(2) DEFAULT '',\n c16 varchar(2) DEFAULT '',\n c17 varchar(2) DEFAULT '',\n c18 varchar(2) DEFAULT '',\n c19 varchar(2) DEFAULT '',\n c20 varchar(2) DEFAULT '',\n c21 varchar(2) DEFAULT '',\n c22 varchar(2) DEFAULT '',\n c23 varchar(2) DEFAULT '',\n c24 varchar(2) DEFAULT ''\n );\n\n## 3. 
partition table sql\n\n drop table tb_part_test cascade;\n \n create table tb_part_test\n (\n STATIS_DATE int NOT NULL, \n ORDER_NUM int DEFAULT NULL,\n CMMDTY_CODE varchar(40) default '',\n RECEIVE_PLANT varchar(4) DEFAULT '',\n RECEIVE_LOCAT varchar(10) DEFAULT '',\n SUPPLIER_CODE varchar(20) DEFAULT '',\n RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',\n \n c1 varchar(2) DEFAULT '',\n c2 varchar(2) DEFAULT '',\n c3 varchar(2) DEFAULT '',\n c4 varchar(2) DEFAULT '',\n c5 varchar(2) DEFAULT '',\n c6 varchar(2) DEFAULT '',\n c7 varchar(2) DEFAULT '',\n c8 varchar(2) DEFAULT '',\n c9 varchar(2) DEFAULT '',\n c10 varchar(2) DEFAULT '',\n c11 varchar(2) DEFAULT '',\n c12 varchar(2) DEFAULT '',\n c13 varchar(2) DEFAULT '',\n c14 varchar(2) DEFAULT '',\n c15 varchar(2) DEFAULT '',\n c16 varchar(2) DEFAULT '',\n c17 varchar(2) DEFAULT '',\n c18 varchar(2) DEFAULT '',\n c19 varchar(2) DEFAULT '',\n c20 varchar(2) DEFAULT '',\n c21 varchar(2) DEFAULT '',\n c22 varchar(2) DEFAULT '',\n c23 varchar(2) DEFAULT '',\n c24 varchar(2) DEFAULT ''\n )PARTITION BY LIST (STATIS_DATE); \n \n DO $$\n DECLARE r record;\n BEGIN\n FOR r IN SELECT to_char(dd, 'YYYYMMDD') dt FROM generate_series( '2018-01-01'::date, '2018-12-31'::date, '1 day'::interval) dd\n LOOP\n EXECUTE 'CREATE TABLE P_tb_part_test_' || r.dt || ' PARTITION OF tb_part_test FOR VALUES IN (' || r.dt || ')';\n END LOOP;\n END$$;\n\n\n## 4. test.sql\n\n copy (select pg_backend_pid()) to '/tmp/test.pid';\n \n update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';\n\n## 5. 
test1.sql(tb_part_test1 is a partition table, and it has the same structure with tb_part_test)\n\n copy (select pg_backend_pid()) to '/tmp/test.pid';\n\n update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';\n\n update tb_part_test1 set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';\n\n## 6. valgrind command\n\n valgrind --leak-check=full --gen-suppressions=all --time-stamp=yes --log-file=/tmp/%p.log --trace-children=yes --track-origins=yes --read-var-info=yes --show-leak-kinds=all -v postgres --log_line_prefix=\"%m %p \" --log_statement=all --shared_buffers=4GB\n\n## 7. test steps\n\n1. Start pg using valgrind, create non-partition table, run pgbench for 1000s, get 29201\\_nonpart\\_1000s.log\n\n pgbench -n -T 1000 -r -f test.sql\n\n2. Start pg using valgrind, create partition table, run pgbench for 1000s, get 27064\\_part\\_1000s.log\n\n pgbench -n -T 1000 -r -f test.sql\n\n3. Start pg using valgrind, create partition table, run pgbench for 2000s, get 864\\_part\\_2000s.log\n\n pgbench -n -T 2000 -r -f test.sql\n\n4. Start pg using valgrind, create partition table, run pgbench for 1000s, get 16507\\_part\\_2tb\\_1000s.log\n\n pgbench -n -T 1000 -r -f test1.sql\n\nThe attachments are valgrind logs. Thanks. \n\nSincerely,\nMarcus Mo",
"msg_date": "Thu, 27 Dec 2018 14:44:18 +0800 (CST)",
"msg_from": "大松 <dasong2410@163.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL partition tables use more private memory"
},
{
"msg_contents": "Hi\n\nčt 27. 12. 2018 v 11:48 odesílatel 大松 <dasong2410@163.com> napsal:\n\n> # PostgreSQL partition tables use more private memory\n>\n> Hi, there is a process private memory issue about partition tables in our\n> production environment. We're not sure if it's a bug or Pg just works in\n> this way.\n>\n\n> - when dml operated on partition tables, the pg process will occupy more\n> memory(I saw this in top command result, RES-SHR) than normal tables, it\n> could be 10x more;\n>\n\nPostgreSQL uses process memory for catalog caches. Partitions are like\ntables - if you use a lot of partitions, then you use a lot of tables, and you\nneed a lot of memory for caches. These caches are dropped when something in the\nsystem catalog is changed.\n\n\n>\n> - it related to partition and column quantity, the more partitions and\n> columns the partition table has, the more memory the related process\n> occupies;\n>\n> - it also related table quantity refered to dml statments which executed\n> in the process, two tables could double the memory, valgrind log will show\n> you the result;\n>\n> - pg process will not release this memory until the process is\n> disconnected, unfortunately our applications use connection pool that will\n> not release connections.\n>\n\nIt is expected behavior - a) glibc holds allocated memory inside the\nprocess until process end, b) when there are no changes in the system catalog,\nthe caches are not cleaned.\n\nWhen you have this issue, it is necessary to close processes -\npooling software can define a \"dirty\" time, and should be able to close\na session after this time. Maybe one hour, maybe twenty minutes.\n\nRegards\n\nPavel\n\n\n> Our PostgreSQL database server which encounters this problem has about\n> 48GB memory, there are more than one hundred pg processes in this server,\n> and each process comsumes couple hundreds MB of private memory. 
It\n> frequently runs out of the physical memory and swap recently.\n>\n> I did a test using valgrind in test environment to repeat this scene, the\n> following is the steps.\n>\n> ## 1. env\n>\n> - RHEL 6.3 X86_64\n> - PostgreSQL 10.2\n>\n> ## 2. non-partition table sql\n>\n> drop table tb_part_test cascade;\n>\n> create table tb_part_test\n> (\n> STATIS_DATE int NOT NULL,\n> ORDER_NUM int DEFAULT NULL,\n> CMMDTY_CODE varchar(40) default '',\n> RECEIVE_PLANT varchar(4) DEFAULT '',\n> RECEIVE_LOCAT varchar(10) DEFAULT '',\n> SUPPLIER_CODE varchar(20) DEFAULT '',\n> RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',\n>\n> c1 varchar(2) DEFAULT '',\n> c2 varchar(2) DEFAULT '',\n> c3 varchar(2) DEFAULT '',\n> c4 varchar(2) DEFAULT '',\n> c5 varchar(2) DEFAULT '',\n> c6 varchar(2) DEFAULT '',\n> c7 varchar(2) DEFAULT '',\n> c8 varchar(2) DEFAULT '',\n> c9 varchar(2) DEFAULT '',\n> c10 varchar(2) DEFAULT '',\n> c11 varchar(2) DEFAULT '',\n> c12 varchar(2) DEFAULT '',\n> c13 varchar(2) DEFAULT '',\n> c14 varchar(2) DEFAULT '',\n> c15 varchar(2) DEFAULT '',\n> c16 varchar(2) DEFAULT '',\n> c17 varchar(2) DEFAULT '',\n> c18 varchar(2) DEFAULT '',\n> c19 varchar(2) DEFAULT '',\n> c20 varchar(2) DEFAULT '',\n> c21 varchar(2) DEFAULT '',\n> c22 varchar(2) DEFAULT '',\n> c23 varchar(2) DEFAULT '',\n> c24 varchar(2) DEFAULT ''\n> );\n>\n> ## 3. 
partition table sql\n>\n> drop table tb_part_test cascade;\n>\n> create table tb_part_test\n> (\n> STATIS_DATE int NOT NULL,\n> ORDER_NUM int DEFAULT NULL,\n> CMMDTY_CODE varchar(40) default '',\n> RECEIVE_PLANT varchar(4) DEFAULT '',\n> RECEIVE_LOCAT varchar(10) DEFAULT '',\n> SUPPLIER_CODE varchar(20) DEFAULT '',\n> RECEIVE_PLANT_TYPE varchar(2) DEFAULT '',\n>\n> c1 varchar(2) DEFAULT '',\n> c2 varchar(2) DEFAULT '',\n> c3 varchar(2) DEFAULT '',\n> c4 varchar(2) DEFAULT '',\n> c5 varchar(2) DEFAULT '',\n> c6 varchar(2) DEFAULT '',\n> c7 varchar(2) DEFAULT '',\n> c8 varchar(2) DEFAULT '',\n> c9 varchar(2) DEFAULT '',\n> c10 varchar(2) DEFAULT '',\n> c11 varchar(2) DEFAULT '',\n> c12 varchar(2) DEFAULT '',\n> c13 varchar(2) DEFAULT '',\n> c14 varchar(2) DEFAULT '',\n> c15 varchar(2) DEFAULT '',\n> c16 varchar(2) DEFAULT '',\n> c17 varchar(2) DEFAULT '',\n> c18 varchar(2) DEFAULT '',\n> c19 varchar(2) DEFAULT '',\n> c20 varchar(2) DEFAULT '',\n> c21 varchar(2) DEFAULT '',\n> c22 varchar(2) DEFAULT '',\n> c23 varchar(2) DEFAULT '',\n> c24 varchar(2) DEFAULT ''\n> )PARTITION BY LIST (STATIS_DATE);\n>\n> DO $$\n> DECLARE r record;\n> BEGIN\n> FOR r IN SELECT to_char(dd, 'YYYYMMDD') dt FROM generate_series(\n> '2018-01-01'::date, '2018-12-31'::date, '1 day'::interval) dd\n> LOOP\n> EXECUTE 'CREATE TABLE P_tb_part_test_' || r.dt || ' PARTITION\n> OF tb_part_test FOR VALUES IN (' || r.dt || ')';\n> END LOOP;\n> END$$;\n>\n>\n> ## 4. test.sql\n>\n> copy (select pg_backend_pid()) to '/tmp/test.pid';\n>\n> update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE =\n> '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND\n> SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND\n> RECEIVE_PLANT_TYPE = '04';\n>\n> ## 5. 
test1.sql(tb_part_test1 is a partition table, and it has the same\n> structure with tb_part_test)\n>\n> copy (select pg_backend_pid()) to '/tmp/test.pid';\n>\n> update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE =\n> '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND\n> SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND\n> RECEIVE_PLANT_TYPE = '04';\n>\n> update tb_part_test1 set ORDER_NUM = '6' where CMMDTY_CODE =\n> '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND\n> SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND\n> RECEIVE_PLANT_TYPE = '04';\n>\n> ## 6. valgrind command\n>\n> valgrind --leak-check=full --gen-suppressions=all --time-stamp=yes\n> --log-file=/tmp/%p.log --trace-children=yes --track-origins=yes\n> --read-var-info=yes --show-leak-kinds=all -v postgres --log_line_prefix=\"%m\n> %p \" --log_statement=all --shared_buffers=4GB\n>\n> ## 7. test steps\n>\n> 1. Start pg using valgrind, create non-partition table, run pgbench for\n> 1000s, get 29201\\_nonpart\\_1000s.log\n>\n> pgbench -n -T 1000 -r -f test.sql\n>\n> 2. Start pg using valgrind, create partition table, run pgbench for\n> 1000s, get 27064\\_part\\_1000s.log\n>\n> pgbench -n -T 1000 -r -f test.sql\n>\n> 3. Start pg using valgrind, create partition table, run pgbench for\n> 2000s, get 864\\_part\\_2000s.log\n>\n> pgbench -n -T 2000 -r -f test.sql\n>\n> 4. Start pg using valgrind, create partition table, run pgbench for\n> 1000s, get 16507\\_part\\_2tb\\_1000s.log\n>\n> pgbench -n -T 1000 -r -f test1.sql\n>\n> The attachments are valgrind logs. Thanks.\n>\n> Sincerely,\n> Marcus Mo\n>\n>\n>\n",
"msg_date": "Thu, 27 Dec 2018 12:01:34 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL partition tables use more private memory"
},
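Pavel's explanation above — each partition is a table, and each table a backend touches leaves a relcache/catcache entry in that backend's private memory that persists until the process exits — can be pictured with a toy model. This is illustrative only, not PostgreSQL's actual cache code; `Backend` and its methods are hypothetical names:

```python
class Backend:
    """Toy model of per-process catalog caching: the cache grows with each
    distinct relation (partition) touched and is only freed at process exit."""

    def __init__(self):
        self.relcache = {}

    def touch(self, relname):
        # Build and memoize a fake cache entry on first access to a relation.
        self.relcache.setdefault(relname, {"descriptor": relname})

    def run_update_on_partitioned(self, parent, n_partitions):
        # An UPDATE that cannot be pruned opens every partition of the parent.
        for i in range(n_partitions):
            self.touch(f"{parent}_p{i}")
```

Under this model, a backend that updates a 365-partition table like `tb_part_test` ends up holding hundreds of cache entries, while the same statement against a plain table holds one — matching the roughly 10x memory difference reported above.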
{
"msg_contents": "Hi,\n\nOn 2018/12/27 15:44, 大松 wrote:\n> # PostgreSQL partition tables use more private memory\n> \n> Hi, there is a process private memory issue about partition tables in our production environment. We're not sure if it's a bug or Pg just works in this way. \n> \n> - when dml operated on partition tables, the pg process will occupy more memory(I saw this in top command result, RES-SHR) than normal tables, it could be 10x more;\n> \n> - it related to partition and column quantity, the more partitions and columns the partition table has, the more memory the related process occupies;\n> \n> - it also related table quantity refered to dml statments which executed in the process, two tables could double the memory, valgrind log will show you the result;\n> \n> - pg process will not release this memory until the process is disconnected, unfortunately our applications use connection pool that will not release connections.\n> \n> Our PostgreSQL database server which encounters this problem has about 48GB memory, there are more than one hundred pg processes in this server, and each process comsumes couple hundreds MB of private memory. It frequently runs out of the physical memory and swap recently.\n\nOther than the problems Pavel mentioned in his email, it's a known problem\nthat PostgreSQL will consume tons of memory if you perform an\nUPDATE/DELETE on a partitioned table containing many partitions, which is\napparently what you're describing.\n\nIt's something we've been working on to fix. Please see if the patches\nposted in the following email help reduce the memory footprint in your case.\n\nhttps://www.postgresql.org/message-id/55bd88c6-f311-2791-0a36-11c693c69753%40lab.ntt.co.jp\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 27 Dec 2018 20:28:19 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL partition tables use more private memory"
},
{
"msg_contents": "Thanks you guys, I will test the patches you mentioned, and keep you updated.\n\nThanks,\nMarcus\n\nSent from my iPhone\n\n> On Dec 27, 2018, at 19:28, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> \n> Hi,\n> \n>> On 2018/12/27 15:44, 大松 wrote:\n>> # PostgreSQL partition tables use more private memory\n>> \n>> Hi, there is a process private memory issue about partition tables in our production environment. We're not sure if it's a bug or Pg just works in this way. \n>> \n>> - when dml operated on partition tables, the pg process will occupy more memory(I saw this in top command result, RES-SHR) than normal tables, it could be 10x more;\n>> \n>> - it related to partition and column quantity, the more partitions and columns the partition table has, the more memory the related process occupies;\n>> \n>> - it also related table quantity refered to dml statments which executed in the process, two tables could double the memory, valgrind log will show you the result;\n>> \n>> - pg process will not release this memory until the process is disconnected, unfortunately our applications use connection pool that will not release connections.\n>> \n>> Our PostgreSQL database server which encounters this problem has about 48GB memory, there are more than one hundred pg processes in this server, and each process comsumes couple hundreds MB of private memory. It frequently runs out of the physical memory and swap recently.\n> \n> Other than the problems Pavel mentioned in his email, it's a known problem\n> that PostgreSQL will consume tons of memory if you perform an\n> UPDATE/DELETE on a partitioned table containing many partitions, which is\n> apparently what you're describing.\n> \n> It's something we've been working on to fix. 
Please see if the patches\n> posted in the following email helps reduce the memory footprint in your case.\n> \n> https://www.postgresql.org/message-id/55bd88c6-f311-2791-0a36-11c693c69753%40lab.ntt.co.jp\n> \n> Thanks,\n> Amit\n\n\n\n",
"msg_date": "Thu, 27 Dec 2018 19:58:51 +0800",
"msg_from": "Marcus Mao <dasong2410@163.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL partition tables use more private memory"
}
] |
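Pavel's suggestion in the thread above — have the pooler recycle server connections after a maximum "dirty" time, so that the backend exits and its accumulated cache memory is released — can be sketched as follows. This is a minimal illustration only; `RecyclingPool` and its methods are hypothetical names, not any real pooler's API (PgBouncer, for example, exposes a similar knob as `server_lifetime`):

```python
import time

class RecyclingPool:
    """Sketch: recycle the server connection once it is older than
    max_lifetime seconds, so the backend exits and its private
    relcache/catcache memory is returned to the operating system."""

    def __init__(self, connect, max_lifetime, clock=time.monotonic):
        self._connect = connect          # factory returning a new connection
        self._max_lifetime = max_lifetime
        self._clock = clock
        self._conn = None
        self._born = None

    def acquire(self):
        now = self._clock()
        if self._conn is None or now - self._born >= self._max_lifetime:
            if self._conn is not None:
                self._conn.close()       # old backend exits, freeing caches
            self._conn = self._connect()
            self._born = now
        return self._conn
```

In practice the same effect is usually achieved with no application changes by setting the pooler's connection-lifetime option to the kind of value suggested in the thread — maybe one hour, maybe twenty minutes.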
[
{
"msg_contents": "Hi,\n\nI would like to implement Incremental View Maintenance (IVM) on PostgreSQL. \nIVM is a technique to maintain materialized views which computes and applies\nonly the incremental changes to the materialized views rather than\nrecomputing the contents as the current REFRESH command does. \n\nI had a presentation on our PoC implementation of IVM at PGConf.eu 2018 [1].\nOur implementation uses row OIDs to compute deltas for materialized views. \nThe basic idea is that if we have information about which rows in base tables\nare contributing to generate a certain row in a matview then we can identify\nthe affected rows when a base table is updated. This is based on an idea of\nDr. Masunaga [2], who is a member of our group, and is inspired by the ID-based\napproach [3].\n\nIn our implementation, the mapping of the row OIDs of the materialized view\nand the base tables is stored in an \"OID map\". When a base relation is modified,\nan AFTER trigger is executed and the delta is recorded in delta tables using\nthe transition table feature. The actual update of the matview is triggered\nby a REFRESH command with an INCREMENTALLY option. \n\nHowever, we realized problems with our implementation. First, WITH OIDS will\nbe removed in PG12, so OIDs are no longer available. Besides this, it would\nbe hard to implement this since it needs many changes to executor nodes to\ncollect base tables' OIDs while executing a query. Also, the cost of maintaining\nthe OID map would be high.\n\nFor these reasons, we started to think about implementing IVM without relying on OIDs\nand did a few more surveys. \n\nWe also looked at Kevin Grittner's discussion [4] on incremental matview\nmaintenance. In this discussion, Kevin proposed to use the counting algorithm [5]\nto handle projection views (using DISTINCT) properly. This algorithm needs an\nadditional system column, count_t, in materialized views and delta tables of\nbase tables. 
\n\nHowever, the discussion about IVM has now stopped, so we would like to restart and\nmake progress on this.\n\n\nThrough our PoC implementation and surveys, I think we need to consider at least\nthe following points for implementing IVM.\n\n1. How to extract changes on base tables\n\nI think there would be at least two approaches for it.\n\n - Using transition tables in AFTER triggers\n - Extracting changes from WAL using logical decoding\n\nIn our PoC implementation, we used AFTER triggers and transition tables, but using\nlogical decoding might be better from the point of view of base table \nmodification performance.\n\nIf we can represent a change of UPDATE on a base table in a query-like form rather than\nas OLD and NEW rows, it may be possible to update the materialized view directly instead\nof performing delete & insert.\n\n\n2. How to compute the delta to be applied to materialized views\n\nEssentially, IVM is based on relational algebra. Theoretically, changes on base\ntables are represented as deltas on this, like \"R <- R + dR\", and the delta on\nthe materialized view is computed using base table deltas based on \"change\npropagation equations\". For implementation, we have to derive the equation from\nthe view definition query (Query tree, or Plan tree?) and describe this as a SQL\nquery to compute the delta to be applied to the materialized view.\n\nThere could be several operations for view definition: selection, projection, \njoin, aggregation, union, difference, intersection, etc. If we can prepare a\nmodule for each operation, it makes IVM extensible, so we can start with a simple \nview definition, and then support more complex views.\n\n\n3. How to identify rows to be modified in materialized views\n\nWhen applying the delta to the materialized view, we have to identify which row\nin the matview corresponds to a row in the delta. A naive method is matching\nby using all columns in a tuple, but clearly this is inefficient. If the materialized\nview has a unique index, we can use this. 
Maybe we have to force materialized views\nto include all primary key columns of their base tables. In our PoC implementation, we\nused OIDs to identify rows, but these will no longer be available as said above.\n\n\n4. When to maintain materialized views\n\nThere are two candidates for the timing of maintenance, immediate (eager) or deferred.\n\nIn eager maintenance, the materialized view is updated in the same transaction\nwhere the base table is updated. In deferred maintenance, this is done after the\ntransaction is committed, for example, when the view is accessed, as a response to a user\nrequest, etc.\n\nIn the previous discussion [4], it was planned to start from the \"eager\" approach. In our PoC\nimplementation, we used the other approach, that is, using a REFRESH command to perform IVM.\nI am not sure which is better as a starting point, but I begin to think that the eager\napproach may be simpler since we don't have to maintain base table changes from other\npast transactions.\n\nIn the eager maintenance approach, we have to consider a race condition where two\ndifferent transactions change base tables simultaneously, as discussed in [4].\n\n\n[1] https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n[2] https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1 (Japanese only)\n[3] https://dl.acm.org/citation.cfm?id=2750546\n[4] https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n[5] https://dl.acm.org/citation.cfm?id=170066\n\nRegards,\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n",
"msg_date": "Thu, 27 Dec 2018 21:57:26 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Implementing Incremental View Maintenance"
},
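The counting algorithm [5] that Yugo refers to keeps a multiplicity column (count_t) alongside each view row, so that duplicate-eliminating views such as DISTINCT projections are maintained correctly: a row leaves the view only when its multiplicity drops to zero. A minimal sketch of that bookkeeping — the function and parameter names here are illustrative, not part of any proposed API:

```python
def apply_delta(view_counts, delta_plus, delta_minus, project):
    """Counting-algorithm sketch: view_counts maps each projected row of a
    DISTINCT view to its multiplicity (the count_t column in the proposal).
    delta_plus / delta_minus are the inserted / deleted base-table rows."""
    for row in delta_plus:
        key = project(row)
        view_counts[key] = view_counts.get(key, 0) + 1
    for row in delta_minus:
        key = project(row)
        view_counts[key] -= 1
        if view_counts[key] == 0:
            del view_counts[key]     # multiplicity hit zero: row leaves view
    return view_counts
```

For a view like SELECT DISTINCT color FROM t, deleting one of two 'red' base rows leaves 'red' in the view with count 1; only deleting the second removes it — which is exactly the case plain delete-propagation gets wrong without the count.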
{
"msg_contents": "Hi Yugo.\n\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n\nGreat. :-)\n\nI think it would address an important gap in PostgreSQL’s feature set.\n\n> 2. How to compute the delta to be applied to materialized views\n> \n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n\nWe had a similar discussion in this thread\nhttps://www.postgresql.org/message-id/flat/FC784A9F-F599-4DCC-A45D-DBF6FA582D30%40QQdd.eu,\nand I’m very much in agreement that the \"change propagation equations”\napproach can solve for a very substantial subset of common MV use cases.\n\n> There could be several operations for view definition: selection,\n> projection, \n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple \n> view definition, and then support more complex views.\n\nSuch a decomposition also allows ’stacking’, allowing complex MV definitions\nto be attacked even with only a small handful of modules.\n\nI did a bit of an experiment to see if \"change propagation equations” could\nbe computed directly from the MV’s pg_node_tree representation in the\ncatalog in PlPgSQL. I found that pg_node_trees are not particularly friendly\nto manipulation in PlPgSQL. 
Even with a more friendly-to-PlPgSQL\nrepresentation (I played with JSONB), then the next problem is making sense\nof the structures, and unfortunately amongst the many plan/path/tree utility\nfunctions in the code base, I figured only a very few could be sensibly\nexposed to PlPgSQL. Ultimately, although I’m still attracted to the idea,\nand I think it could be made to work, native code is the way to go at least\nfor now.\n\n> 4. When to maintain materialized views\n> \n> [...]\n> \n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n\nCertainly the eager approach allows progress to be made with less\ninfrastructure.\n\nI am concerned that the eager approach only addresses a subset of the MV use\ncase space, though. For example, if we presume that an MV is present because\nthe underlying direct query would be non-performant, then we have to at\nleast question whether applying the delta-update would also be detrimental\nto some use cases.\n\nIn the eager maintenance approache, we have to consider a race condition\nwhere two\ndifferent transactions change base tables simultaneously as discussed in\n[4].\n\nI wonder if that nudges towards a logged approach. If the race is due to\nfact of JOIN-worthy tuples been made visible after a COMMIT, but not before,\nthen does it not follow that the eager approach has to fire some kind of\nreconciliation work at COMMIT time? That seems to imply a persistent queue\nof some kind, since we can’t assume transactions to be so small to be able\nto hold the queue in memory.\n\nHmm. I hadn’t really thought about that particular corner case. 
I guess a\n‘catch' could be simply be to detect such a concurrent update and demote the\nrefresh approach by marking the MV stale awaiting a full refresh.\n\ndenty.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Mon, 31 Dec 2018 03:41:15 -0700 (MST)",
"msg_from": "denty <denty@QQdd.eu>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
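As a concrete instance of the "change propagation equations" discussed above, consider inserts into a two-way join. One standard formulation is delta(R ⋈ S) = (dR ⋈ S_new) ∪ (R_old ⋈ dS), with S_new = S_old ∪ dS; splitting it this way counts pairs built from two newly inserted rows exactly once. A small sketch (hash_join and join_insert_delta are illustrative names, not anything from the patch discussion):

```python
def hash_join(rs, ss, key_r, key_s):
    """Pair up rows of R and S whose join keys match."""
    index = {}
    for s in ss:
        index.setdefault(key_s(s), []).append(s)
    return [(r, s) for r in rs for s in index.get(key_r(r), [])]

def join_insert_delta(r_old, dr, s_old, ds, key_r, key_s):
    """Insert delta for R join S via a change propagation equation:
    delta(R join S) = (dR join S_new) + (R_old join dS),
    where S_new = S_old + dS, so new-new pairs appear exactly once."""
    return (hash_join(dr, s_old + ds, key_r, key_s)
            + hash_join(r_old, ds, key_r, key_s))
```

Applying the computed delta on top of the old join result should always equal a full recomputation over R_new and S_new — a handy invariant for testing any such module.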
{
"msg_contents": "Hi all, just wanted to say I am very happy to see progress made on this,\nmy codebase has multiple \"materialized tables\" which are maintained with\nstatement triggers (transition tables) and custom functions. They are ugly\nand a pain to maintain, but they work because I have no other\nsolution...for now at least.\n\nI am concerned that the eager approach only addresses a subset of the MV use\n> case space, though. For example, if we presume that an MV is present\n> because\n> the underlying direct query would be non-performant, then we have to at\n> least question whether applying the delta-update would also be detrimental\n> to some use cases.\n>\n\nI will say that in my case, as long as my reads of the materialized view\nare always consistent with the underlying data, that's what's important. I\ndon't mind if it's eager, or lazy (as long as lazy still means it will\nrefresh prior to reading).\n",
"msg_date": "Mon, 31 Dec 2018 11:20:11 -0500",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
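Adam's hand-maintained "materialized tables" follow the pattern of a statement-level AFTER trigger with transition tables: subtract the statement's OLD TABLE rows from a summary, then add its NEW TABLE rows. A sketch of that bookkeeping for a per-group (count, sum) summary — in PostgreSQL this logic would live in a PL/pgSQL trigger; the Python below only simulates it, and all names are illustrative:

```python
def after_statement_trigger(summary, old_rows, new_rows, group, value):
    """Simulates a statement-level AFTER trigger with transition tables
    maintaining a per-group (count, sum) summary: subtract the OLD TABLE
    row images, then add the NEW TABLE row images."""
    for r in old_rows:                   # DELETE / pre-UPDATE images
        g = group(r)
        cnt, total = summary[g]
        cnt, total = cnt - 1, total - value(r)
        if cnt == 0:
            del summary[g]               # last row of the group is gone
        else:
            summary[g] = (cnt, total)
    for r in new_rows:                   # INSERT / post-UPDATE images
        g = group(r)
        cnt, total = summary.get(g, (0, 0))
        summary[g] = (cnt + 1, total + value(r))
    return summary
```

Because the trigger fires once per statement with the full transition sets, the summary stays consistent with the base table at every statement boundary — which is the read-consistency property Adam asks for above.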
{
"msg_contents": "Dear all,\n\nWe have some results on incremental updates for MVs. We generate triggers in\nC to do the incremental maintenance. We posted the code to github about 1\nyear ago, but unfortunately I posted a wrong ctrigger.h header. The\nmistake was exposed to me when a person could not compile the generated\ntriggers and reported it to me. I have now re-posted the right ctrigger.h\nfile.\n\nYou can find the code of the generator here:\nhttps://github.com/ntqvinh/PgMvIncrementalUpdate/commits/master. You can\nfind how we did it here:\nhttps://link.springer.com/article/10.1134/S0361768816050066. The paper is\nabout generating the code in pl/pgsql. Anyway, I think it is useful for\nreading the code. I don't know if I can share the paper or not, so I\ndon't publish it anywhere else. The text about how to generate triggers in C\nwas published open-access but, unfortunately, it is in Vietnamese.\n\nWe are happy if the code is useful for someone.\n\nThank you and best regards,\n\nNTQ Vinh\n\nTS. Nguyễn Trần Quốc Vinh\n-----------------------------------------------\nChủ nhiệm khoa Tin học\nTrường ĐH Sư phạm - ĐH Đà Nẵng\n------------------------------------------------\nNguyen Tran Quoc Vinh, PhD\nDean\nFaculty of Information Technology\nDanang University of Education\nWebsite: http://it.ued.udn.vn; http://www.ued.vn <http://www.ued.udn.vn/>;\nhttp://www.ued.udn.vn\nSCV: http://scv.ued.vn/~ntquocvinh <http://scv.ued.udn.vn/~ntquocvinh>\nPhone: (+84) 511.6-512-586\nMobile: (+84) 914.78-08-98\n\nOn Mon, Dec 31, 2018 at 11:20 PM Adam Brusselback <adambrusselback@gmail.com>\nwrote:\n\n> Hi all, just wanted to say I am very happy to see progress made on this,\n> my codebase has multiple \"materialized tables\" which are maintained with\n> statement triggers (transition tables) and custom functions. 
They are ugly\n> and a pain to maintain, but they work because I have no other\n> solution...for now at least.\n>\n> I am concerned that the eager approach only addresses a subset of the MV\n>> use\n>> case space, though. For example, if we presume that an MV is present\n>> because\n>> the underlying direct query would be non-performant, then we have to at\n>> least question whether applying the delta-update would also be detrimental\n>> to some use cases.\n>>\n>\n> I will say that in my case, as long as my reads of the materialized view\n> are always consistent with the underlying data, that's what's important. I\n> don't mind if it's eager, or lazy (as long as lazy still means it will\n> refresh prior to reading).\n>\n",
"msg_date": "Tue, 1 Jan 2019 14:46:25 +0700",
"msg_from": "=?UTF-8?B?Tmd1eeG7hW4gVHLhuqduIFF14buRYyBWaW5o?= <ntquocvinh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Hi all, just wanted to say I am very happy to see progress made on this,\n> my codebase has multiple \"materialized tables\" which are maintained with\n> statement triggers (transition tables) and custom functions. They are ugly\n> and a pain to maintain, but they work because I have no other\n> solution...for now at least.\n> \n> I am concerned that the eager approach only addresses a subset of the MV use\n>> case space, though. For example, if we presume that an MV is present\n>> because\n>> the underlying direct query would be non-performant, then we have to at\n>> least question whether applying the delta-update would also be detrimental\n>> to some use cases.\n>>\n> \n> I will say that in my case, as long as my reads of the materialized view\n> are always consistent with the underlying data, that's what's important. I\n> don't mind if it's eager, or lazy (as long as lazy still means it will\n> refresh prior to reading).\n\nAssuming that we want to implement IVM incrementally (that means, for\nexample, we implement DELETE for IVM in PostgreSQL XX, then INSERT for\nIVM for PostgreSQL XX+1... etc.), I think it's hard to do it with an\neager approach if want to MV is always consistent with base tables.\n\nOn the other hand, a lazy approach allows to implement IVM\nincrementally because we could always let full MV build from scratch\nif operations on MV include queries we do not support.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Mon, 07 Jan 2019 10:59:56 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Dear All,\n\nThe tool analyzes the input query and then generates triggers (trigger\nfunctions and pl/pgsql scripts as well) on all manipulating events\n(insert/updates/delete) for all underlying base tables. The triggers do\nincremental updates to the table that contains the query result (MV). You\ncan build the tool, then see the provided example and try the tool. It is\nfor synchronous maintenance. It was hard tested but you can use it with\nyour own risk.\n\nFor Asynchronous maintenance, we generate 1) triggers on all manipulating\nevents on base tables to collect all the data changes and save to the\n'special' tables; then 2) the tool to do incremental updates of MVs.\n\nBest regards,\n\nVinh\n\nTS. Nguyễn Trần Quốc Vinh\n-----------------------------------------------\nChủ nhiệm khoa Tin học\nTrường ĐH Sư phạm - ĐH Đà Nẵng\n------------------------------------------------\nNguyen Tran Quoc Vinh, PhD\nDean\nFaculty of Information Technology\nDanang University of Education\nWebsite: http://it.ued.udn.vn; http://www.ued.vn <http://www.ued.udn.vn/>;\nhttp://www.ued.udn.vn\nSCV: http://scv.ued.vn/~ntquocvinh <http://scv.ued.udn.vn/~ntquocvinh>\nPhone: (+84) 511.6-512-586\nMobile: (+84) 914.78-08-98\n\n\nOn Mon, Jan 7, 2019 at 9:00 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Hi all, just wanted to say I am very happy to see progress made on this,\n> > my codebase has multiple \"materialized tables\" which are maintained with\n> > statement triggers (transition tables) and custom functions. They are\n> ugly\n> > and a pain to maintain, but they work because I have no other\n> > solution...for now at least.\n> >\n> > I am concerned that the eager approach only addresses a subset of the MV\n> use\n> >> case space, though. 
For example, if we presume that an MV is present\n> >> because\n> >> the underlying direct query would be non-performant, then we have to at\n> >> least question whether applying the delta-update would also be\n> detrimental\n> >> to some use cases.\n> >>\n> >\n> > I will say that in my case, as long as my reads of the materialized view\n> > are always consistent with the underlying data, that's what's\n> important. I\n> > don't mind if it's eager, or lazy (as long as lazy still means it will\n> > refresh prior to reading).\n>\n> Assuming that we want to implement IVM incrementally (that means, for\n> example, we implement DELETE for IVM in PostgreSQL XX, then INSERT for\n> IVM for PostgreSQL XX+1... etc.), I think it's hard to do it with an\n> eager approach if want to MV is always consistent with base tables.\n>\n> On the other hand, a lazy approach allows to implement IVM\n> incrementally because we could always let full MV build from scratch\n> if operations on MV include queries we do not support.\n>\n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n>\n>\n",
"msg_date": "Mon, 7 Jan 2019 09:51:41 +0700",
"msg_from": "=?UTF-8?B?Tmd1eeG7hW4gVHLhuqduIFF14buRYyBWaW5o?= <ntquocvinh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi!\n\nOn Thu, Dec 27, 2018 at 4:57 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> I would like to implement Incremental View Maintenance (IVM) on PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n\nThat sounds great! I am interested in this topic because I am\ninterested in reactive/live queries and support for them in\nPostgreSQL. [1]\n\nIn that context, the problem is very similar: based on some state of\nquery results and updated source tables, determine what should be new\nupdates to send to the client describing changes to the query results.\nSo after computing those incremental changes, instead of applying them\nto materialized view I would send them to the client. One could see\nmaterialized views only type of consumers of such information about\nincremental change.\n\nSo I would like to ask if whatever is done in this setting is done in\na way that one could also outside of the context of materialized view.\nNot sure what would API be thought.\n\n From the perspective of reactive/live queries, this package [2] is\ninteresting. To my understanding, it adds to all base tables two\ncolumns, one for unique ID and one for revision of the row. And then\nrewrites queries so that this information is passed all the way to\nquery results. In this way it can then determine mapping between\ninputs and outputs. I am not sure if it then does incremental update\nor just uses that to determine if view is invalidated. Not sure if\nthere is anything about such approach in literature. Or why both index\nand revision columns are needed.\n\n> For these reasons, we started to think to implement IVM without relying on OIDs\n> and made a bit more surveys.\n\nI also do not see much difference between asking users to have primary\nkey on base tables or asking them to have OIDs. 
Why do you think that\na requirement for primary keys is a hard one? I think we should first\nfocus on having IVM with base tables with primary keys. Maybe then\nlater on we could improve on that and make it also work without.\n\nTo me personally, having unique index on source tables and also on\nmaterialized view is a reasonable restriction for this feature.\nEspecially for initial versions of it.\n\n> However, the discussion about IVM is now stoped, so we would like to restart and\n> progress this.\n\nWhat would be next steps in your view to move this further?\n\n> If we can represent a change of UPDATE on a base table as query-like rather than\n> OLD and NEW, it may be possible to update the materialized view directly instead\n> of performing delete & insert.\n\nWhy do you need OLD and NEW? Don't you need just NEW and a list of\ncolumns which changed from those in NEW? I use such diffing query [4]\nto represent changes: first column has a flag telling if the row is\nrepresenting insert, update, and remove, the second column tells which\ncolumn are being changed in the case of the update, and then the NEW\ncolumns follow.\n\nI think that maybe standardizing structure for representing those\nchanges would be a good step towards making this modular and reusable.\nBecause then we can have three parts:\n\n* Recording and storing changes in a standard format.\n* A function which given original data, stored changes, computes\nupdates needed, also in some standard format.\n* A function which given original data and updates needed, applies them.\n\n> In the previous discussion[4], it is planned to start from \"eager\" approach. 
In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that the eager\n> approach may be more simple since we don't have to maintain base table changes in other\n> past transactions.\n\nI think if we split things into three parts as I described above, then\nthis is just a question of configuration. Or you call all three inside\none trigger to update in \"eager\" fashion. Or you store computed\nupdates somewhere and then on demand apply those in \"lazy\" fashion.\n\n> In the eager maintenance approache, we have to consider a race condition where two\n> different transactions change base tables simultaneously as discussed in [4].\n\nBut in the case of \"lazy\" maintenance there is a mirror problem: what\nif later changes to base tables invalidate some previous change to the\nmaterialized view. Imagine that one cell in a base table is first\nupdated too \"foo\" and we compute an update for the materialized view\nto set it to \"foo\". And then the same cell is updated to \"bar\" and we\ncompute an update for the materialized view again. If we have not\napplied any of those updates (because we are \"lazy\") now the\npreviously computed update can be discarded. We could still apply\nboth, but it would not be efficient.\n\n[1] https://www.postgresql.org/message-id/flat/CAKLmikP%2BPPB49z8rEEvRjFOD0D2DV72KdqYN7s9fjh9sM_32ZA%40mail.gmail.com\n[2] https://github.com/nothingisdead/pg-live-query\n[3] https://www.postgresql.org/docs/devel/sql-createtable.html\n[4] https://github.com/tozd/node-reactive-postgres/blob/eeda4f28d096b6e552d04c5ea138c258cb5b9389/index.js#L329-L340\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Mon, 7 Jan 2019 00:39:00 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 1 Jan 2019 14:46:25 +0700\nNguyễn Trần Quốc Vinh <ntquocvinh@gmail.com> wrote:\n\n> We have some result on incremental update for MVs. We generate triggers on\n> C to do the incremental maintenance. We posted the code to github about 1\n> year ago, but unfortunately i posted a not-right ctrigger.h header. The\n> mistake was exposed to me when a person could not compile the generated\n> triggers and reported to me. And now i re-posted with the right ctrigger.h\n> file.\n> \n> You can find the codes of the generator here:\n> https://github.com/ntqvinh/PgMvIncrementalUpdate/commits/master. You can\n> find how did we do here:\n> https://link.springer.com/article/10.1134/S0361768816050066. The paper is\n> about generating of codes in pl/pgsql. Anyway i see it is useful for\n> reading the codes. I don't know if i can share the paper or not so that i\n> don't publish anywhere else. The text about how to generate triggers in C\n> was published with open-access but unfortunately, it is in Vietnamese.\n> \n> We are happy if the codes are useful for someone.\n\nI have read your paper. It is interesting and great so that the algorithm\nis described concretely.\n\nAfter reading this, I have a few questions about your implementation.\nAlthough I may be able to understand by reading your paper and code carefully,\nI would appreciate it if you could answer these.\n\n- It is said there are many limitations on the view definition query.\n How does this work when the query is not supported?\n\n- Is it possible to support materialized views that have DISTINCT,\n OUTER JOIN, or sub-query in your approach?\n\n- It is said that AVG is splitted to SUM and COUNT. 
Are these new additional\n columns in MV visible for users?\n\n- Does this can handle the race condition discussed in [1], that is,\n if concurrent transactions update different two tables in the join\n view definition, is MV updated sucessfully?\n\n[1] https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n\nRegards,\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n",
"msg_date": "Thu, 31 Jan 2019 21:38:58 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 7 Jan 2019 00:39:00 -0800\nMitar <mmitar@gmail.com> wrote:\n\n> That sounds great! I am interested in this topic because I am\n> interested in reactive/live queries and support for them in\n> PostgreSQL. [1]\n> \n> In that context, the problem is very similar: based on some state of\n> query results and updated source tables, determine what should be new\n> updates to send to the client describing changes to the query results.\n> So after computing those incremental changes, instead of applying them\n> to materialized view I would send them to the client. One could see\n> materialized views only type of consumers of such information about\n> incremental change.\n> \n> So I would like to ask if whatever is done in this setting is done in\n> a way that one could also outside of the context of materialized view.\n> Not sure what would API be thought.\n\nI didn't know about reactive/live queries but this seems share a part of\nproblem with IVM, so we might have common API.\n\nBTW, what is uecase of reactive/live queries? (just curious)\n \n> > For these reasons, we started to think to implement IVM without relying on OIDs\n> > and made a bit more surveys.\n> \n> I also do not see much difference between asking users to have primary\n> key on base tables or asking them to have OIDs. Why do you think that\n> a requirement for primary keys is a hard one? I think we should first\n> focus on having IVM with base tables with primary keys. Maybe then\n> later on we could improve on that and make it also work without.\n> \n> To me personally, having unique index on source tables and also on\n> materialized view is a reasonable restriction for this feature.\n> Especially for initial versions of it.\n\nInitially, I chose to use OIDs for theoretical reason, that is, to handle\n\"bag-semantics\" which allows duplicate rows in tables. 
However, I agree\nthat we can start from the restriction of having a unique index on base tables.\n \n> > If we can represent a change of UPDATE on a base table as query-like rather than\n> > OLD and NEW, it may be possible to update the materialized view directly instead\n> > of performing delete & insert.\n> \n> Why do you need OLD and NEW? Don't you need just NEW and a list of\n> columns which changed from those in NEW? I use such diffing query [4]\n> to represent changes: first column has a flag telling if the row is\n> representing insert, update, and remove, the second column tells which\n> column are being changed in the case of the update, and then the NEW\n> columns follow.\n\nAccording to the change propagation equation approach, OLD is necessary\nto calculate the tuples in the MV to be deleted or modified. However, if tables\nhave unique keys, such tuples can be identified using the keys, so\nOLD may not be needed, at least in the eager approach.\n\nIn the lazy approach, the OLD contents of a table are useful. For example, with\na join view MV = R * S, when dR is inserted into R and dS is inserted\ninto S, the delta to be inserted into MV will be\n \n dMV = (R_old * dS) + (dR * S_new)\n = (R_old * dS) + (dR * S_old) + (dR * dS)\n\n, hence the old contents of tables R and S are needed.\n \n> I think that maybe standardizing structure for representing those\n> changes would be a good step towards making this modular and reusable.\n> Because then we can have three parts:\n> \n> * Recording and storing changes in a standard format.\n> * A function which given original data, stored changes, computes\n> updates needed, also in some standard format.\n> * A function which given original data and updates needed, applies them.\n\n> I think if we split things into three parts as I described above, then\n> this is just a question of configuration. Or you call all three inside\n> one trigger to update in \"eager\" fashion. 
Or you store computed\n> updates somewhere and then on demand apply those in \"lazy\" fashion.\n\nI agree that defining the format to represent changes is important. However,\nI am not sure whether both eager and lazy can be handled in the same manner. I'll\nconsider this more.\n\n> > In the eager maintenance approache, we have to consider a race condition where two\n> > different transactions change base tables simultaneously as discussed in [4].\n> \n> But in the case of \"lazy\" maintenance there is a mirror problem: what\n> if later changes to base tables invalidate some previous change to the\n> materialized view. Imagine that one cell in a base table is first\n> updated too \"foo\" and we compute an update for the materialized view\n> to set it to \"foo\". And then the same cell is updated to \"bar\" and we\n> compute an update for the materialized view again. If we have not\n> applied any of those updates (because we are \"lazy\") now the\n> previously computed update can be discarded. We could still apply\n> both, but it would not be efficient.\n\nIn our PoC implementation, I handled this situation by removing\nthe old contents from the NEW delta table. In your example, when the base\ntable is updated from \"foo\" to \"bar\", the \"foo\" tuple is removed\nfrom, and the \"bar\" tuple is inserted into, the NEW delta, and the delta\nof the MV is computed using the final NEW delta.\n\nRegards,\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n",
"msg_date": "Thu, 31 Jan 2019 23:20:32 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi!\n\nOn Thu, Jan 31, 2019 at 6:20 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> BTW, what is uecase of reactive/live queries? (just curious)\n\nIt allows syncing the state between client and server. Client can then\nhave a subset of data and server can push changes as they are\nhappening to the client. Client can in a reactive manner render that\nin the UI to the user. So you can easily create a reactive UI which\nalways shows up-to-date data without having to poll or something\nsimilar.\n\nHow are things progressing? Any news on this topic?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 14 Mar 2019 00:41:49 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 27 Dec 2018 21:57:26 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> I would like to implement Incremental View Maintenance (IVM) on PostgreSQL. \n\nI am now working on an initial patch for implementing IVM on PostgreSQL.\nThis enables materialized views to be updated incrementally after one\nof their base tables is modified.\n\nAt the first patch, I want to start from very simple features.\n\nFirstly, this will handle simple definition views which includes only\nselection, projection, and join. Standard aggregations (count, sum, avg,\nmin, max) are not planned to be implemented in the first patch, but these\nare commonly used in materialized views, so I'll implement them later on. \nViews which include sub-query, outer-join, CTE, and window functions are also\nout of scope of the first patch. Also, views including self-join or views\nincluding other views in their definition is not considered well, either. \nI need more investigation on these type of views although I found some papers\nexplaining how to handle sub-quries and outer-joins. \n\nNext, this will handle materialized views with no duplicates in their\ntuples. I am thinking of implementing an algorithm to handle duplicates\ncalled \"counting-algorithm\" afterward, but I'll start from this\nno-duplicates assumption in the first patch for simplicity.\n\nIn the first patch, I will implement only \"immediate maintenance\", that is, materialized views are updated immediately in a transaction where a base\ntable is modified. On other hand, in \"deferred maintenance\", materialized\nviews are updated after the transaction, for example, by the user command\nlike REFRESH. Although I plan to implement both eventually, I'll start from \"immediate\" because this seems to need smaller code than \"deferred\". 
For\nimplementing \"deferred\", it is need to implement a mechanism to maintain logs\nfor recording changes and an algorithm to compute the delta to be applied to\nmaterialized views are necessary. \n \nI plan to implement the immediate maintenance using AFTER triggers created \nautomatically on a materialized view's base tables. In AFTER trigger using \ntransition table features, changes occurs on base tables is recorded ephemeral relations. We can compute the delta to be applied to materialized views by\nusing these ephemeral relations and the view definition query, then update\nthe view by applying this delta.\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 1 Apr 2019 12:11:22 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Sun, 31 Mar 2019 at 23:22, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>\n> Firstly, this will handle simple definition views which includes only\n> selection, projection, and join. Standard aggregations (count, sum, avg,\n> min, max) are not planned to be implemented in the first patch, but these\n> are commonly used in materialized views, so I'll implement them later on.\n\nIt's fine to not have all the features from day 1 of course. But I\njust picked up this comment and the followup talking about splitting\nAVG into SUM and COUNT and I had a comment. When you do look at\ntackling aggregates I don't think you should restrict yourself to\nthese specific standard aggregations. We have all the necessary\nabstractions to handle all aggregations that are feasible, see\nhttps://www.postgresql.org/docs/devel/xaggr.html#XAGGR-MOVING-AGGREGATES\n\nWhat you need to do -- I think -- is store the \"moving aggregate\nstate\" before the final function. Then whenever a row is inserted or\ndeleted or updated (or whenever another column is updated which causes\nthe value to row to enter or leave the aggregation) apply either\naggtransfn or aggminvtransfn to the state. I'm not sure if you want to\napply the final function on every update or only lazily either may be\nbetter in some usage.\n\n\n",
"msg_date": "Wed, 3 Apr 2019 17:41:36 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 1 Apr 2019 12:11:22 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> On Thu, 27 Dec 2018 21:57:26 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > Hi,\n> > \n> > I would like to implement Incremental View Maintenance (IVM) on PostgreSQL. \n> \n> I am now working on an initial patch for implementing IVM on PostgreSQL.\n> This enables materialized views to be updated incrementally after one\n> of their base tables is modified.\n\nAttached is a WIP patch of Incremental View Maintenance (IVM).\nMajor part is written by me, and changes in syntax and pg_class\nare Hoshiai-san's work.\n\nAlthough this is sill a draft patch in work-in-progress, any\nsuggestions or thoughts would be appreciated.\n \n* What it is\n\nThis allows a kind of Immediate Maintenance of materialized views. if a \nmaterialized view is created by CRATE INCREMENTAL MATERIALIZED VIEW command,\nthe contents of the mateview is updated automatically and incrementally\nafter base tables are updated. Noted this syntax is just tentative, so it\nmay be changed.\n\n====== Example 1 ======\npostgres=# CREATE INCREMENTAL MATERIALIZED VIEW m AS SELECT * FROM t0;\nSELECT 3\npostgres=# SELECT * FROM m;\n i \n---\n 3\n 2\n 1\n(3 rows)\n\npostgres=# INSERT INTO t0 VALUES (4);\nINSERT 0 1\npostgres=# SELECt * FROM m; -- automatically updated\n i \n---\n 3\n 2\n 1\n 4\n(4 rows)\n=============================\n\nThis implementation also supports matviews including duplicate tuples or\nDISTINCT clause in its view definition query. For example, even if a matview\nis defined with DISTINCT to remove duplication of tuples in a base table, this\ncan perform incremental update of the matview properly. 
That is, the contents\nof the matview don't change when existing tuples are inserted into the base\ntables, and a tuple in the matview is deleted only when the duplicity of the\ncorresponding tuple in the base table becomes zero.\n\nThis is due to the \"counting algorithm\", in which the number of each tuple is\nstored in matviews as a special column value.\n\n====== Example 2 ======\npostgres=# SELECT * FROM t1;\n id | t \n----+---\n 1 | A\n 2 | B\n 3 | C\n 4 | A\n(4 rows)\n\npostgres=# CREATE INCREMENTAL MATERIALIZED VIEW m1 AS SELECT t FROM t1;\nSELECT 3\npostgres=# CREATE INCREMENTAL MATERIALIZED VIEW m2 AS SELECT DISTINCT t FROM t1;\nSELECT 3\npostgres=# SELECT * FROM m1; -- with duplicity\n t \n---\n A\n A\n C\n B\n(4 rows)\n\npostgres=# SELECT * FROM m2;\n t \n---\n A\n B\n C\n(3 rows)\n\npostgres=# INSERT INTO t1 VALUES (5, 'B');\nINSERT 0 1\npostgres=# DELETE FROM t1 WHERE id IN (1,3); -- delete (1,A),(3,C)\nDELETE 2\npostgres=# SELECT * FROM m1; -- one A left and one more B\n t \n---\n B\n B\n A\n(3 rows)\n\npostgres=# SELECT * FROM m2; -- only C is removed\n t \n---\n B\n A\n(2 rows)\n=============================\n\n* How it works\n\n1. Creating matview\n\nWhen a matview is created, AFTER triggers are internally created\non its base tables. When a base table is modified (INSERT, DELETE,\nUPDATE), the matview is updated incrementally in the trigger function.\n\nWhen populating the matview, GROUP BY and count(*) are added to the\nview definition query before it is executed, for counting the duplicity\nof tuples in the matview. The result of count is stored in the matview\nas a special column named \"__ivm_count__\". \n\n2. Maintenance of matview\n\nWhen base tables are modified, the change set of the table can be\nreferred to as Ephemeral Named Relations (ENRs) thanks to transition tables\n(a trigger feature implemented since PG10). 
We can calculate the diff\nset of the matview by replacing the base table in the view definition\nquery with the ENR (at least if it is a Selection-Projection-Join view). \nAs at view definition time, GROUP BY and count(*) are added in order\nto count the multiplicity of tuples in the diff set. As a result, two diff\nsets (to be deleted from and to be inserted into the matview) are\ncalculated, and the results are stored into temporary tables respectively.\n\nThe matview is updated by merging these change sets. Instead of simply executing\nDELETE or INSERT, the value of the __ivm_count__ column in the matview\nis decreased or increased. When the value becomes zero, the corresponding\ntuple is deleted from the matview.\n\n3. Access to matview\n\nWhen SELECT is issued for IVM matviews defined with DISTINCT, all columns\nexcept __ivm_count__ of each tuple in the matview are returned. This is \ncorrect because duplicate tuples are eliminated by GROUP BY.\n\nWhen DISTINCT is not used, SELECT for IVM matviews returns each tuple\n__ivm_count__ times. Currently, this is implemented by rewriting the SELECT\nquery to replace the matview RTE with a subquery which joins the matview\nand the generate_series function as below. 
\n\n SELECT mv.* FROM mv, generate_series(1, mv.__ivm_count__);\n\nThe __ivm_count__ column is invisible to users when \"SELECT * FROM ...\" is\nissued, but users can see the value by specifying it in the target list explicitly.\n\n====== Example 3 ======\npostgres=# \\d+ m1\n Materialized view \"public.m1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n---------------+--------+-----------+----------+---------+----------+--------------+-------------\n t | text | | | | extended | | \n __ivm_count__ | bigint | | | | plain | | \nView definition:\n SELECT t1.t\n FROM t1;\nAccess method: heap\n\npostgres=# \\d+ m2\n Materialized view \"public.m2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n---------------+--------+-----------+----------+---------+----------+--------------+-------------\n t | text | | | | extended | | \n __ivm_count__ | bigint | | | | plain | | \nView definition:\n SELECT DISTINCT t1.t\n FROM t1;\nAccess method: heap\n\npostgres=# SELECT *, __ivm_count__ FROM m1;\n t | __ivm_count__ \n---+---------------\n B | 2\n B | 2\n A | 1\n(3 rows)\n\npostgres=# SELECT *, __ivm_count__ FROM m2;\n t | __ivm_count__ \n---+---------------\n B | 2\n A | 1\n(2 rows)\n\npostgres=# EXPLAIN SELECT * FROM m1;\n QUERY PLAN \n------------------------------------------------------------------------------\n Nested Loop (cost=0.00..61.03 rows=3000 width=2)\n -> Seq Scan on m1 mv (cost=0.00..1.03 rows=3 width=10)\n -> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=0)\n(3 rows)\n=============================\n\n* Simple Performance Evaluation\n\nI confirmed that \"incremental\" update of matviews is more effective\nthan the standard REFRESH, using a simple example. 
I used tables\nof pgbench (SF=100) here.\n\nCreate two matviews: one without and one with IVM.\n\ntest=# CREATE MATERIALIZED VIEW bench1 AS\n SELECT aid, bid, abalance, bbalance\n FROM pgbench_accounts JOIN pgbench_branches USING (bid)\n WHERE abalance > 0 OR bbalance > 0;\nSELECT 5001054\ntest=# CREATE INCREMENTAL MATERIALIZED VIEW bench2 AS\n SELECT aid, bid, abalance, bbalance\n FROM pgbench_accounts JOIN pgbench_branches USING (bid)\n WHERE abalance > 0 OR bbalance > 0;\nSELECT 5001054\n\nThe standard REFRESH of bench1 took more than 10 seconds.\n\ntest=# \\timing \nTiming is on.\ntest=# REFRESH MATERIALIZED VIEW bench1 ;\nREFRESH MATERIALIZED VIEW\nTime: 11210.563 ms (00:11.211)\n\nCreate an index on the IVM matview (bench2).\n\ntest=# CREATE INDEX on bench2(aid,bid);\nCREATE INDEX\n\nUpdating a tuple in pgbench_accounts took 18ms. After this, bench2\nwas updated automatically and correctly.\n\ntest=# SELECT * FROM bench2 WHERE aid = 1;\n aid | bid | abalance | bbalance \n-----+-----+----------+----------\n 1 | 1 | 10 | 10\n(1 row)\n\nTime: 2.498 ms\ntest=# UPDATE pgbench_accounts SET abalance = 1000 WHERE aid = 1;\nUPDATE 1\nTime: 18.634 ms\ntest=# SELECT * FROM bench2 WHERE aid = 1;\n aid | bid | abalance | bbalance \n-----+-----+----------+----------\n 1 | 1 | 1000 | 10\n(1 row)\n\nHowever, without the index on bench2, the update took about 4 seconds, so\nappropriate indexes are needed on IVM matviews.\n\ntest=# DROP INDEX bench2_aid_bid_idx ;\nDROP INDEX\nTime: 10.613 ms\ntest=# UPDATE pgbench_accounts SET abalance = 2000 WHERE aid = 1;\nUPDATE 1\nTime: 3931.274 ms (00:03.931)\n\n* Restrictions on view definition\n\nThis patch is still a work in progress and there are many restrictions\non the view definition query of matviews.\n\nThe current implementation supports views including selection, projection,\nand inner joins, with or without DISTINCT. Aggregation and GROUP BY are not\nsupported yet, but I plan to deal with these by the first release. 
\nSelf-joins, subqueries, OUTER JOIN, CTEs, and window functions are not\nwell considered, either. I need more investigation on these types of views,\nalthough I found some papers explaining how to handle subqueries and\nouter joins. \n\nThese unsupported views should be checked when a matview is created, but\nthis is not implemented yet. Hoshiai-san is working on this.\n\n* Timing of view maintenance\n\nThis patch implements a kind of Immediate Maintenance, that is, a matview\nis updated immediately when a base table is modified. On the other hand, in\n\"Deferred Maintenance\", matviews are updated after the transaction, for\nexample, by a user command like REFRESH. \n\nTo implement \"deferred\", we need a mechanism to maintain\nlogs recording changes of base tables and an algorithm to compute the\ndelta to be applied to matviews. \n \nIn addition, there could be another implementation of Immediate Maintenance\nin which the matview is updated at the end of a transaction that modified a base\ntable, rather than in an AFTER trigger. Oracle supports this type of IVM. To\nimplement this, we will need a mechanism to maintain change logs on base\ntables, as for Deferred Maintenance.\n\n* Counting algorithm implementation\n\nThere will also be discussions on the counting-algorithm implementation.\nFirstly, the current patch treats \"__ivm_count__\" as a special column name\nin a somewhat ad hoc way. This is used when maintaining and accessing matviews,\nand when \"SELECT * FROM ...\" is issued, the __ivm_count__ column is invisible to\nusers. Maybe this name has to be prohibited in user tables. Is it acceptable\nto use such columns for IVM, and if not, is there a better way?\n\nSecondly, a matview with duplicate tuples is replaced with a subquery which\nuses the generate_series function. It does not have to be generate_series; we\ncould make a new set-returning function for this. Anyway, this internal behaviour\nis visible in EXPLAIN results as shown in Example 3. 
Also, there is a\nperformance impact because the estimated row count is wrong, and what is worse,\nthe cost of the join is not small when the matview is large. Therefore, we\nmight have to add a new plan node for selecting from matviews rather than using\nsuch a special set-returning function.\n\n\nRegards,\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Tue, 14 May 2019 15:46:48 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi hackers,\n\nThank you for your many questions and feedbacks at PGCon 2019.\nAttached is the patch rebased for the current master branch.\n\nRegards,\nYugo Nagata\n\nOn Tue, 14 May 2019 15:46:48 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 1 Apr 2019 12:11:22 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > On Thu, 27 Dec 2018 21:57:26 +0900\n> > Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > \n> > > Hi,\n> > > \n> > > I would like to implement Incremental View Maintenance (IVM) on PostgreSQL. \n> > \n> > I am now working on an initial patch for implementing IVM on PostgreSQL.\n> > This enables materialized views to be updated incrementally after one\n> > of their base tables is modified.\n> \n> Attached is a WIP patch of Incremental View Maintenance (IVM).\n> Major part is written by me, and changes in syntax and pg_class\n> are Hoshiai-san's work.\n> \n> Although this is sill a draft patch in work-in-progress, any\n> suggestions or thoughts would be appreciated.\n> \n> * What it is\n> \n> This allows a kind of Immediate Maintenance of materialized views. if a \n> materialized view is created by CRATE INCREMENTAL MATERIALIZED VIEW command,\n> the contents of the mateview is updated automatically and incrementally\n> after base tables are updated. Noted this syntax is just tentative, so it\n> may be changed.\n> \n> ====== Example 1 ======\n> postgres=# CREATE INCREMENTAL MATERIALIZED VIEW m AS SELECT * FROM t0;\n> SELECT 3\n> postgres=# SELECT * FROM m;\n> i \n> ---\n> 3\n> 2\n> 1\n> (3 rows)\n> \n> postgres=# INSERT INTO t0 VALUES (4);\n> INSERT 0 1\n> postgres=# SELECt * FROM m; -- automatically updated\n> i \n> ---\n> 3\n> 2\n> 1\n> 4\n> (4 rows)\n> =============================\n> \n> This implementation also supports matviews including duplicate tuples or\n> DISTINCT clause in its view definition query. 
For example, even if a matview\n> is defined with DISTINCT to remove duplication of tuples in a base table, this\n> can perform incremental update of the matview properly.\n> \n> [snip -- the rest of the quoted message is identical to the post of 14 May]\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Thu, 20 Jun 2019 16:44:10 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Yugo,\n\n I'd like to compare the performance of your MV refresh algorithm versus\nan approach that logs changes into an mv log table, and can then apply the\nchanges at some later point in time. I'd like to handle the materialized\njoin view (mjv) case first, specifically a 2-way left outer join, with a UDF\nin the SELECT list of the mjv.\n\n Does your refresh algorithm handle mjv's with connected join graphs that\nconsist entirely of inner and left outer joins?\n\n If so, I'd like to measure the overhead of your refresh algorithm on\npgbench, modified to include an mjv, versus a (hand coded) incremental\nmaintenance algorithm that uses mv log tables populated by ordinary\ntriggers. We may also want to look at capturing the deltas using logical\nreplication, which ought to be faster than a trigger-based solution. \n\n I have someone available to do the performance testing for another 2\nmonths, so if you can connect with me off-list to coordinate, we can set up\nthe performance experiments and run them on our AWS clusters.\n\nbest regards,\n\n /Jim F\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 21 Jun 2019 08:41:11 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Jim,\n\nOn Fri, 21 Jun 2019 08:41:11 -0700 (MST)\nJim Finnerty <jfinnert@amazon.com> wrote:\n\n> Hi Yugo,\n> \n> I'd like to compare the performance of your MV refresh algorithm versus\n> an approach that logs changes into an mv log table, and can then apply the\n> changes at some later point in time. I'd like to handle the materialized\n> join view (mjv) case first, specifically a 2-way left outer join, with a UDF\n> in the SELECT list of the mjv.\n\nDo you mean you have your own implementation of IVM that uses log tables?\nI'm very interested in this, and I would appreciate it if you could explain the \ndetails.\n \n> Does your refresh algorithm handle mjv's with connected join graphs that\n> consist entirely of inner and left outer joins?\n\n> If so, I'd like to measure the overhead of your refresh algorithm on\n> pgbench, modified to include an mjv, versus a (hand coded) incremental\n> maintenance algorithm that uses mv log tables populated by ordinary\n> triggers. We may also want to look at capturing the deltas using logical\n> replication, which ought to be faster than a trigger-based solution. \n\nIn our current implementation, outer joins are not yet supported, though\nwe plan to handle them in the future. So, we would not be able to compare these \ndirectly on the same workload at present.\n\nHowever, our current implementation only supports updating \nmaterialized views in a trigger, and the performance of modifying base tables \nwill be lower than with an approach which uses log tables. This is because queries \nto update materialized views are issued in the trigger. This is not only an \noverhead in itself, but it also takes a lock on the materialized view, which has an \nimpact on concurrent execution performance. \n\nIn our previous PoC, we implemented IVM using log tables, in which logs were \ncaptured by triggers and materialized views were updated incrementally by a user \ncommand[1]. 
However, to implement the log-table approach, we need an infrastructure \nto maintain these logs: for example, deciding which logs are necessary and which \ncan be discarded, etc. We thought this was not trivial work, so we decided to \nstart from the current approach, which doesn't use log tables. We are now \npreparing to implement this as the next step, because it is also needed to \nsupport deferred maintenance of views.\n\n[1] https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n\nI agree that capturing the deltas using logical decoding would be faster than \nusing a trigger, although we haven't considered this well yet.\n\nBest regards,\nYugo Nagata\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 28 Jun 2019 19:01:43 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is a WIP patch of IVM which supports some aggregate functions.\n\nCurrently, only count and sum are supported; avg, min, and max are not supported, \nalthough I think supporting them would not be so hard. \n\nAs a restriction, expressions specified in GROUP BY must appear in the target \nlist of views, because tuples to be updated in the MV are identified by these \ngroup keys.\n\n\nIn the case of views without aggregate functions, only the number of tuple \nduplicates (__ivm_count__) is updated at incremental maintenance. On the other \nhand, in the case of views with aggregates, the aggregated values are also \nupdated. The way of updating depends on the kind of aggregate function.\n\nIn the case of sum (or aggregate functions other than count), NULL input values are \nignored, and the function returns NULL when no rows are selected. To support \nthis specification, the number of non-NULL input values is counted and stored \nin the MV as a hidden column whose name is like __ivm_count_sum__, for example.\n\nIn the case of count, it returns zero when no rows are selected, and count(*) \ndoesn't ignore NULL input. These specifications are also supported.\n\nTuples to be updated in the MV are identified by the keys specified in the GROUP BY \nclause. However, in the case of aggregation without GROUP BY, there is only one \ntuple in the view, so keys are not used to identify tuples.\n\n\nIn addition, a race condition which occurred in the previous version is \nprevented in this patch. In the previous version, when two transactions \nchanged a base table concurrently, an anomalous update of the MV was possible, because \na change in one transaction was not visible to the other transaction even at the \nREAD COMMITTED level. \n\nTo prevent this, I fixed it to take a lock at an early stage of view maintenance, \nwaiting until concurrent transactions which are updating the same MV have ended. 
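(A brief aside back to the aggregate bookkeeping described above: the NULL handling for sum can be sketched with a toy model. Python is used purely for illustration, the names are invented, and the patch keeps the equivalent non-NULL counter in the hidden __ivm_count_sum__ column.)

```python
# Toy model (illustration only) of incremental sum() maintenance that
# respects SQL NULL semantics via a non-NULL input counter, analogous
# to the hidden __ivm_count_sum__ column described above.

def apply_sum_delta(state, inserted, deleted):
    """state = (running_sum, nonnull_count); Python None models SQL NULL."""
    s, n = state
    for v in inserted:
        if v is not None:        # sum() ignores NULL inputs
            s, n = s + v, n + 1
    for v in deleted:
        if v is not None:
            s, n = s - v, n - 1
    return (s, n)

def current_sum(state):
    s, n = state
    return s if n > 0 else None  # sum() is NULL when no non-NULL inputs remain

state = apply_sum_delta((0, 0), inserted=[10, None, 20], deleted=[])
print(current_sum(state))        # 30
state = apply_sum_delta(state, inserted=[], deleted=[10, 20])
print(current_sum(state))        # None  (not 0, matching SQL sum())
```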
Also, \nwe have to get the latest snapshot before computing the delta tables, because any \nchanges which occur in other transactions while waiting for the lock are not visible \neven at the READ COMMITTED level.\n\nAt the REPEATABLE READ or SERIALIZABLE level, we don't wait for the lock, but raise an error \nimmediately to prevent an anomalous update. These solutions might be ugly, but \nsomething to prevent anomalous updates is necessary anyway. There may be a better \nway.\n\n\nMoreover, some regression tests are added for the aggregate function support.\nThis is Hoshiai-san's work.\n\nAlthough the code is not refined yet and will need a good deal of refactoring\nand reorganizing, I am submitting this to share the current status.\n\n\n* Example (from regression tests)\n\n=======================================================================\n(1) Creating tables\n\nCREATE TABLE mv_base_a (i int, j int);\nINSERT INTO mv_base_a VALUES\n (1,10),\n (2,20),\n (3,30),\n (4,40),\n (5,50);\n\n\n(2) Views with SUM() and COUNT() aggregate functions\n\nBEGIN;\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_agg AS SELECT i, SUM(j), COUNT(i) FROM mv_base_a GROUP BY i;\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 20 | 1\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nINSERT INTO mv_base_a VALUES(2,100);\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 120 | 2\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nUPDATE mv_base_a SET j = 200 WHERE (i,j) = (2,100);\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 220 | 2\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nDELETE FROM mv_base_a WHERE (i,j) = (2,200);\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 20 | 1\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nROLLBACK;\n\n\n(3) Views with the COUNT(*) aggregate function\n\nBEGIN;\nCREATE INCREMENTAL MATERIALIZED VIEW 
mv_ivm_agg AS SELECT i, SUM(j),COUNT(*) FROM mv_base_a GROUP BY i;\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 20 | 1\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nINSERT INTO mv_base_a VALUES(2,100);\nSELECT * FROM mv_ivm_agg ORDER BY 1,2,3;\n i | sum | count\n---+-----+-------\n 1 | 10 | 1\n 2 | 120 | 2\n 3 | 30 | 1\n 4 | 40 | 1\n 5 | 50 | 1\n(5 rows)\n\nROLLBACK;\n\n(4) Views with aggregation function without GROUP clause\n\nBEGIN;\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_group AS SELECT SUM(j)FROM mv_base_a;\nSELECT * FROM mv_ivm_group ORDER BY 1;\n sum\n-----\n 150\n(1 row)\n\nINSERT INTO mv_base_a VALUES(6,20);\nSELECT * FROM mv_ivm_group ORDER BY 1;\n sum\n-----\n 170\n(1 row)\n=======================================================================\n\n\n\nOn Thu, 20 Jun 2019 16:44:10 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi hackers,\n> \n> Thank you for your many questions and feedbacks at PGCon 2019.\n> Attached is the patch rebased for the current master branch.\n> \n> Regards,\n> Yugo Nagata\n> \n> On Tue, 14 May 2019 15:46:48 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > On Mon, 1 Apr 2019 12:11:22 +0900\n> > Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > \n> > > On Thu, 27 Dec 2018 21:57:26 +0900\n> > > Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > > \n> > > > Hi,\n> > > > \n> > > > I would like to implement Incremental View Maintenance (IVM) on PostgreSQL. 
\n> > > \n> > > I am now working on an initial patch for implementing IVM on PostgreSQL.\n> > > This enables materialized views to be updated incrementally after one\n> > > of their base tables is modified.\n> > \n> > Attached is a WIP patch of Incremental View Maintenance (IVM).\n> > Major part is written by me, and changes in syntax and pg_class\n> > are Hoshiai-san's work.\n> > \n> > Although this is sill a draft patch in work-in-progress, any\n> > suggestions or thoughts would be appreciated.\n> > \n> > * What it is\n> > \n> > This allows a kind of Immediate Maintenance of materialized views. if a \n> > materialized view is created by CRATE INCREMENTAL MATERIALIZED VIEW command,\n> > the contents of the mateview is updated automatically and incrementally\n> > after base tables are updated. Noted this syntax is just tentative, so it\n> > may be changed.\n> > \n> > ====== Example 1 ======\n> > postgres=# CREATE INCREMENTAL MATERIALIZED VIEW m AS SELECT * FROM t0;\n> > SELECT 3\n> > postgres=# SELECT * FROM m;\n> > i \n> > ---\n> > 3\n> > 2\n> > 1\n> > (3 rows)\n> > \n> > postgres=# INSERT INTO t0 VALUES (4);\n> > INSERT 0 1\n> > postgres=# SELECt * FROM m; -- automatically updated\n> > i \n> > ---\n> > 3\n> > 2\n> > 1\n> > 4\n> > (4 rows)\n> > =============================\n> > \n> > This implementation also supports matviews including duplicate tuples or\n> > DISTINCT clause in its view definition query. For example, even if a matview\n> > is defined with DISTINCT to remove duplication of tuples in a base table, this\n> > can perform incremental update of the matview properly. 
That is, the contents\n> > of the matview doesn't change when exiting tuples are inserted into the base\n> > tables, and a tuple in the matview is deleted only when duplicity of the\n> > corresponding tuple in the base table becomes zero.\n> > \n> > This is due to \"colunting alogorithm\" in which the number of each tuple is\n> > stored in matviews as a special column value.\n> > \n> > ====== Example 2 ======\n> > postgres=# SELECT * FROM t1;\n> > id | t \n> > ----+---\n> > 1 | A\n> > 2 | B\n> > 3 | C\n> > 4 | A\n> > (4 rows)\n> > \n> > postgres=# CREATE INCREMENTAL MATERIALIZED VIEW m1 AS SELECT t FROM t1;\n> > SELECT 3\n> > postgres=# CREATE INCREMENTAL MATERIALIZED VIEW m2 AS SELECT DISTINCT t FROM t1;\n> > SELECT 3\n> > postgres=# SELECT * FROM m1; -- with duplicity\n> > t \n> > ---\n> > A\n> > A\n> > C\n> > B\n> > (4 rows)\n> > \n> > postgres=# SELECT * FROM m2;\n> > t \n> > ---\n> > A\n> > B\n> > C\n> > (3 rows)\n> > \n> > postgres=# INSERT INTO t1 VALUES (5, 'B');\n> > INSERT 0 1\n> > postgres=# DELETE FROM t1 WHERE id IN (1,3); -- delete (1,A),(3,C)\n> > DELETE 2\n> > postgres=# SELECT * FROM m1; -- one A left and one more B\n> > t \n> > ---\n> > B\n> > B\n> > A\n> > (3 rows)\n> > \n> > postgres=# SELECT * FROM m2; -- only C is removed\n> > t \n> > ---\n> > B\n> > A\n> > (2 rows)\n> > =============================\n> > \n> > * How it works\n> > \n> > 1. Creating matview\n> > \n> > When a matview is created, AFTER triggers are internally created\n> > on its base tables. When the base tables is modified (INSERT, DELETE,\n> > UPDATE), the matview is updated incrementally in the trigger function.\n> > \n> > When populating the matview, GROUP BY and count(*) are added to the\n> > view definition query before this is executed for counting duplicity\n> > of tuples in the matview. The result of count is stored in the matview\n> > as a special column named \"__ivm_count__\". \n> > \n> > 2. 
Maintenance of matview\n> > \n> > When base tables are modified, the change set of the table can be\n> > referred to as Ephemeral Named Relations (ENRs) thanks to Transition Tables\n> > (a trigger feature implemented since PG10). We can calculate the diff\n> > set of the matview by replacing the base table in the view definition\n> > query with the ENR (at least if it is a Selection-Projection-Join view). \n> > As at view definition time, GROUP BY and count(*) are added in order\n> > to count the duplicity of tuples in the diff set. As a result, two diff\n> > sets (to be deleted from and to be inserted into the matview) are\n> > calculated, and the results are stored into temporary tables respectively.\n> > \n> > The matview is updated by merging these change sets. Instead of executing\n> > DELETE or INSERT simply, the value of the __ivm_count__ column in the matview\n> > is decreased or increased. When the value becomes zero, the corresponding\n> > tuple is deleted from the matview.\n> > \n> > 3. Access to matview\n> > \n> > When SELECT is issued for IVM matviews defined with DISTINCT, all columns\n> > except __ivm_count__ of each tuple in the matview are returned. This is \n> > correct because duplicity of tuples is eliminated by GROUP BY.\n> > \n> > When DISTINCT is not used, SELECT for the IVM matviews returns each tuple\n> > __ivm_count__ times. Currently, this is implemented by rewriting the SELECT\n> > query to replace the matview RTE with a subquery which joins the matview\n> > and the generate_series function as below. 
\n> > \n> > SELECT mv.* FROM mv, generate_series(1, mv.__ivm_count__);\n> > \n> > The __ivm_count__ column is invisible to users when \"SELECT * FROM ...\" is\n> > issued, but users can see the value by specifying it in the target list explicitly.\n> > \n> > ====== Example 3 ======\n> > postgres=# \\d+ m1\n> > Materialized view \"public.m1\"\n> > Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> > ---------------+--------+-----------+----------+---------+----------+--------------+-------------\n> > t | text | | | | extended | | \n> > __ivm_count__ | bigint | | | | plain | | \n> > View definition:\n> > SELECT t1.t\n> > FROM t1;\n> > Access method: heap\n> > \n> > postgres=# \\d+ m2\n> > Materialized view \"public.m2\"\n> > Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> > ---------------+--------+-----------+----------+---------+----------+--------------+-------------\n> > t | text | | | | extended | | \n> > __ivm_count__ | bigint | | | | plain | | \n> > View definition:\n> > SELECT DISTINCT t1.t\n> > FROM t1;\n> > Access method: heap\n> > \n> > postgres=# SELECT *, __ivm_count__ FROM m1;\n> > t | __ivm_count__ \n> > ---+---------------\n> > B | 2\n> > B | 2\n> > A | 1\n> > (3 rows)\n> > \n> > postgres=# SELECT *, __ivm_count__ FROM m2;\n> > t | __ivm_count__ \n> > ---+---------------\n> > B | 2\n> > A | 1\n> > (2 rows)\n> > \n> > postgres=# EXPLAIN SELECT * FROM m1;\n> > QUERY PLAN \n> > ------------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..61.03 rows=3000 width=2)\n> > -> Seq Scan on m1 mv (cost=0.00..1.03 rows=3 width=10)\n> > -> Function Scan on generate_series (cost=0.00..10.00 rows=1000 width=0)\n> > (3 rows)\n> > =============================\n> > \n> > * Simple Performance Evaluation\n> > \n> > I confirmed that \"incremental\" update of matviews is more effective\n> > than the standard REFRESH by using a simple example. 
I used tables\n> > of pgbench (SF=100) here.\n> > \n> > Create two matviews, that is, without and with IVM.\n> > \n> > test=# CREATE MATERIALIZED VIEW bench1 AS\n> > SELECT aid, bid, abalance, bbalance\n> > FROM pgbench_accounts JOIN pgbench_branches USING (bid)\n> > WHERE abalance > 0 OR bbalance > 0;\n> > SELECT 5001054\n> > test=# CREATE INCREMENTAL MATERIALIZED VIEW bench2 AS\n> > SELECT aid, bid, abalance, bbalance\n> > FROM pgbench_accounts JOIN pgbench_branches USING (bid)\n> > WHERE abalance > 0 OR bbalance > 0;\n> > SELECT 5001054\n> > \n> > The standard REFRESH of bench1 took more than 10 seconds.\n> > \n> > test=# \\timing \n> > Timing is on.\n> > test=# REFRESH MATERIALIZED VIEW bench1 ;\n> > REFRESH MATERIALIZED VIEW\n> > Time: 11210.563 ms (00:11.211)\n> > \n> > Create an index on the IVM matview (bench2).\n> > \n> > test=# CREATE INDEX on bench2(aid,bid);\n> > CREATE INDEX\n> > \n> > Updating a tuple in pgbench_accounts took 18ms. After this, bench2\n> > was updated automatically and correctly.\n> > \n> > test=# SELECT * FROM bench2 WHERE aid = 1;\n> > aid | bid | abalance | bbalance \n> > -----+-----+----------+----------\n> > 1 | 1 | 10 | 10\n> > (1 row)\n> > \n> > Time: 2.498 ms\n> > test=# UPDATE pgbench_accounts SET abalance = 1000 WHERE aid = 1;\n> > UPDATE 1\n> > Time: 18.634 ms\n> > test=# SELECT * FROM bench2 WHERE aid = 1;\n> > aid | bid | abalance | bbalance \n> > -----+-----+----------+----------\n> > 1 | 1 | 1000 | 10\n> > (1 row)\n> > \n> > However, if there is no index on bench2, it took 4 sec, so\n> > appropriate indexes are needed on IVM matviews.\n> > \n> > test=# DROP INDEX bench2_aid_bid_idx ;\n> > DROP INDEX\n> > Time: 10.613 ms\n> > test=# UPDATE pgbench_accounts SET abalance = 2000 WHERE aid = 1;\n> > UPDATE 1\n> > Time: 3931.274 ms (00:03.931)\n> > \n> > * Restrictions on view definition\n> > \n> > This patch is still in Work-in-Progress and there are many restrictions\n> > on the view definition query of matviews.\n> > 
\n> > The current implementation supports views including selection, projection,\n> > and inner join with or without DISTINCT. Aggregation and GROUP BY are not\n> > supported yet, but I plan to deal with these by the first release. \n> > Self-join, subqueries, OUTER JOIN, CTE, and window functions are not\n> > considered well, either. I need more investigation on these types of views,\n> > although I found some papers explaining how to handle sub-queries and\n> > outer-joins. \n> > \n> > These unsupported views should be checked when a matview is created, but\n> > this is not implemented yet. Hoshiai-san is working on this.\n> > \n> > * Timing of view maintenance\n> > \n> > This patch implements a kind of Immediate Maintenance, that is, a matview\n> > is updated immediately when a base table is modified. On the other hand, in\n> > \"Deferred Maintenance\", matviews are updated after the transaction, for\n> > example, by a user command like REFRESH. \n> > \n> > To implement \"deferred\" maintenance, we need to implement a mechanism to maintain\n> > logs for recording changes of base tables and an algorithm to compute the\n> > delta to be applied to matviews. \n> > \n> > In addition, there could be another implementation of Immediate Maintenance\n> > in which the matview is updated at the end of a transaction that modified base\n> > tables, rather than in an AFTER trigger. Oracle supports this type of IVM. To\n> > implement this, we will need a mechanism to maintain change logs on base\n> > tables, as for Deferred Maintenance.\n> > \n> > * Counting algorithm implementation\n> > \n> > There will also be discussions on the counting-algorithm implementation.\n> > Firstly, the current patch treats \"__ivm_count__\" as a special column name\n> > in a somewhat ad hoc way. This is used when maintaining and accessing matviews,\n> > and when \"SELECT * FROM ...\" is issued, the __ivm_count__ column is invisible to\n> > users. Maybe this name has to be prohibited in user tables. 
Is it acceptable\n> > to use such columns for IVM, and is there a better way, if not?\n> > \n> > Secondly, a matview with duplicate tuples is replaced with a subquery which\n> > uses the generate_series function. It does not have to be generate_series, and we\n> > can make a new set returning function for this. Anyway, this internal behaviour\n> > is visible in EXPLAIN results as shown in Example 3. Also, there is a\n> > performance impact because the estimated row number is wrong, and what is worse,\n> > the cost of the join is not small when the size of the matview is large. Therefore, we\n> > might have to add a new plan node for selecting from matviews rather than using\n> > such a special set returning function.\n> > \n> > \n> > Regards,\n> > -- \n> > Yugo Nagata <nagata@sraoss.co.jp>\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Fri, 28 Jun 2019 19:56:20 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
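The counting algorithm described in the message above — a hidden __ivm_count__ column holding each tuple's multiplicity, diff sets merged by incrementing or decrementing that count (dropping a tuple when it reaches zero), and rows expanded count times on read — can be modeled compactly outside the database. The following Python sketch is illustrative only; the `IvmView` name and its API are invented here, and the actual patch does this in C and SQL via AFTER triggers:

```python
from collections import Counter

class IvmView:
    """Toy model of the counting algorithm: the view maps each
    projected tuple to its multiplicity, like __ivm_count__."""

    def __init__(self, project, rows=()):
        self.project = project            # view definition: row -> projected value
        self.counts = Counter(project(r) for r in rows)

    def apply_delta(self, inserted=(), deleted=()):
        # Diff sets are counted first (the GROUP BY + count(*) step),
        # then merged: increment for inserts, decrement for deletes,
        # and drop a tuple once its multiplicity reaches zero.
        for t, n in Counter(self.project(r) for r in inserted).items():
            self.counts[t] += n
        for t, n in Counter(self.project(r) for r in deleted).items():
            self.counts[t] -= n
            if self.counts[t] <= 0:
                del self.counts[t]

    def select(self, distinct=False):
        # Reading with duplicates expands each tuple count times,
        # mirroring the join with generate_series(1, __ivm_count__).
        if distinct:
            return sorted(self.counts)
        return sorted(t for t, n in self.counts.items() for _ in range(n))
```

Replaying Example 2 — inserting (5,'B') then deleting (1,'A') and (3,'C') — leaves one A and two Bs in the non-DISTINCT view, matching the session output quoted above.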
{
"msg_contents": "Hi Greg,\n\nOn Wed, 3 Apr 2019 17:41:36 -0400\nGreg Stark <stark@mit.edu> wrote:\n\n> On Sun, 31 Mar 2019 at 23:22, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> >\n> > Firstly, this will handle simple definition views which includes only\n> > selection, projection, and join. Standard aggregations (count, sum, avg,\n> > min, max) are not planned to be implemented in the first patch, but these\n> > are commonly used in materialized views, so I'll implement them later on.\n> \n> It's fine to not have all the features from day 1 of course. But I\n> just picked up this comment and the followup talking about splitting\n> AVG into SUM and COUNT and I had a comment. When you do look at\n> tackling aggregates I don't think you should restrict yourself to\n> these specific standard aggregations. We have all the necessary\n> abstractions to handle all aggregations that are feasible, see\n> https://www.postgresql.org/docs/devel/xaggr.html#XAGGR-MOVING-AGGREGATES\n> \n> What you need to do -- I think -- is store the \"moving aggregate\n> state\" before the final function. Then whenever a row is inserted or\n> deleted or updated (or whenever another column is updated which causes\n> the value to row to enter or leave the aggregation) apply either\n> aggtransfn or aggminvtransfn to the state. I'm not sure if you want to\n> apply the final function on every update or only lazily either may be\n> better in some usage.\n\nThank you for your suggestion! I submitted the latest patch just now supporting \nsome aggregate functions, but it supports only sum and count and lacks \ngeneralization. However, I would like to refine this to support more \ngeneral aggregate functions. I think your suggestions are helpful for me to do \nthis. Thank you!\n\nBest regards,\nYugo Nagata\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 28 Jun 2019 20:03:54 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
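Greg Stark's suggestion quoted above — store the aggregate's transition state in the matview, apply the forward transition function when a row enters the aggregation and the inverse transition function when it leaves, and apply the final function lazily — can be sketched as follows. This is an illustrative Python model, not PostgreSQL's catalog machinery; the `MovingAgg` and `make_avg` names are invented, and the `transfn`/`invtransfn`/`finalfn` triple only mimics the roles of aggtransfn, aggminvtransfn, and finalfn:

```python
class MovingAgg:
    """Toy moving-aggregate state: forward transition on insert,
    inverse transition on delete, final function applied lazily."""

    def __init__(self, init, transfn, invtransfn, finalfn=lambda s: s):
        self.state = init
        self.transfn = transfn
        self.invtransfn = invtransfn
        self.finalfn = finalfn

    def insert(self, v):
        # A row entered the aggregation: forward transition.
        self.state = self.transfn(self.state, v)

    def delete(self, v):
        # A row left the aggregation: inverse transition.
        self.state = self.invtransfn(self.state, v)

    def result(self):
        # Finalize on demand rather than on every update.
        return self.finalfn(self.state)

def make_avg():
    # avg maintained as the (sum, count) split mentioned in the thread.
    return MovingAgg(
        init=(0, 0),
        transfn=lambda s, v: (s[0] + v, s[1] + 1),
        invtransfn=lambda s, v: (s[0] - v, s[1] - 1),
        finalfn=lambda s: s[0] / s[1] if s[1] else None,
    )
```

The same shape covers sum, count, and any aggregate with an inverse transition; aggregates without one (e.g. min/max) would need recomputation instead.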
{
"msg_contents": "On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> Attached is a WIP patch of IVM which supports some aggregate functions.\n\nHi Nagata-san and Hoshiai-san,\n\nThank you for working on this. I enjoyed your talk at PGCon. I've\nadded Kevin Grittner just in case he missed this thread; he has talked\noften about implementing the counting algorithm, and he wrote the\n\"trigger transition tables\" feature to support exactly this. While\nintegrating trigger transition tables with the new partition features,\nwe had to make a number of decisions about how that should work, and\nwe tried to come up with answers that would work for IVM, and I hope\nwe made the right choices!\n\nI am quite interested to learn how IVM interacts with SERIALIZABLE.\n\nA couple of superficial review comments:\n\n+ const char *aggname = get_func_name(aggref->aggfnoid);\n...\n+ else if (!strcmp(aggname, \"sum\"))\n\nI guess you need a more robust way to detect the supported aggregates\nthan their name, or I guess some way for aggregates themselves to\nspecify that they support this and somehow supply the extra logic.\nPerhaps I just said what Greg Stark already said, except not as well.\n\n+ elog(ERROR, \"Aggrege function %s is not\nsupported\", aggname);\n\ns/Aggrege/aggregate/\n\nOf course it is not helpful to comment on typos at this early stage,\nit's just that this one appears many times in the test output :-)\n\n+static bool\n+isIvmColumn(const char *s)\n+{\n+ char pre[7];\n+\n+ strlcpy(pre, s, sizeof(pre));\n+ return (strcmp(pre, \"__ivm_\") == 0);\n+}\n\nWhat about strncmp(s, \"__ivm_\", 6) == 0)? As for the question of how\nto reserve a namespace for system columns that won't clash with user\ncolumns, according to our manual the SQL standard doesn't allow $ in\nidentifier names, and according to my copy SQL92 \"intermediate SQL\"\ndoesn't allow identifiers that end in an underscore. 
I don't know\nwhat the best answer is, but we should probably decide on something\nbased on the standard.\n\nAs for how to make internal columns invisible to SELECT *, previously\nthere have been discussions about doing that using a new flag in\npg_attribute:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n\n+ \"WITH t AS (\"\n+ \" SELECT diff.__ivm_count__,\n(diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n+ \", %s\"\n+ \" FROM %s AS mv, %s AS diff WHERE (%s) = (%s)\"\n+ \"), updt AS (\"\n+ \" UPDATE %s AS mv SET __ivm_count__ =\nmv.__ivm_count__ - t.__ivm_count__\"\n+ \", %s \"\n+ \" FROM t WHERE mv.ctid = t.ctid AND NOT for_dlt\"\n+ \") DELETE FROM %s AS mv USING t WHERE\nmv.ctid = t.ctid AND for_dlt;\",\n\nI fully understand that this is POC code, but I am curious about one\nthing. These queries that are executed by apply_delta() would need to\nbe converted to C, or at least use reusable plans, right? Hmm,\ncreating and dropping temporary tables every time is a clue that the\nultimate form of this should be tuplestores and C code, I think,\nright?\n\n> Moreover, some regression test are added for aggregate functions support.\n> This is Hoshiai-san's work.\n\nGreat. Next time you post a WIP patch, could you please fix this\nsmall compiler warning?\n\ndescribe.c: In function ‘describeOneTableDetails’:\ndescribe.c:3270:55: error: ‘*((void *)&tableinfo+48)’ may be used\nuninitialized in this function [-Werror=maybe-uninitialized]\nif (verbose && tableinfo.relkind == RELKIND_MATVIEW && tableinfo.isivm)\n^\ndescribe.c:1495:4: note: ‘*((void *)&tableinfo+48)’ was declared here\n} tableinfo;\n^\n\nThen our unofficial automatic CI system[1] will run these tests every\nday, which sometimes finds problems.\n\n[1] cfbot.cputube.org\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 18:31:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
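The apply_delta() query quoted in the review above splits the deletion diff into two cases: decrement __ivm_count__ where some multiplicity remains, and delete the matview row outright where the whole count is consumed (the for_dlt flag). A minimal in-memory model of that merge step, assuming the matview is represented as a mapping from key tuple to count (`apply_delete_diff` is an invented name for this sketch, not part of the patch):

```python
def apply_delete_diff(matview, diff):
    """matview: dict mapping key tuple -> count (__ivm_count__);
    diff: dict mapping key tuple -> multiplicity to remove.
    Mirrors the UPDATE-or-DELETE split in the quoted CTE."""
    for key, dcount in diff.items():
        cur = matview.get(key)
        if cur is None:
            continue                      # no matching row in the matview
        if dcount >= cur:
            del matview[key]              # for_dlt: whole count consumed
        else:
            matview[key] = cur - dcount   # decrement __ivm_count__
    return matview
```

In the patch this is a single statement over temporary diff tables joined by ctid; the review's point is that such per-modification SQL would eventually become C code over tuplestores with reusable plans.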
{
"msg_contents": "> As for how to make internal columns invisible to SELECT *, previously\n> there have been discussions about doing that using a new flag in\n> pg_attribute:\n> \n> https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n\nNow that I realize that there are several use cases for invisible\ncolumns, I think this is the way we should go.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 08 Jul 2019 17:04:38 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Thomas,\n\n2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n\n> On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > Attached is a WIP patch of IVM which supports some aggregate functions.\n>\n> Hi Nagata-san and Hoshiai-san,\n>\n> Thank you for working on this. I enjoyed your talk at PGCon. I've\n> added Kevin Grittner just in case he missed this thread; he has talked\n> often about implementing the counting algorithm, and he wrote the\n> \"trigger transition tables\" feature to support exactly this. While\n> integrating trigger transition tables with the new partition features,\n> we had to make a number of decisions about how that should work, and\n> we tried to come up with answers that would work for IMV, and I hope\n> we made the right choices!\n>\n> I am quite interested to learn how IVM interacts with SERIALIZABLE.\n>\n\n Nagata-san has been studying this. Nagata-san, any comment?\n\nA couple of superficial review comments:\n>\n\nThank you for your review comments.\nPlease find attached patches. 
Some of your review comments are reflected in the patch\ntoo.\n\nWe manage and update IVM on the following GitHub repository:\nhttps://github.com/sraoss/pgsql-ivm\nYou can also find the latest WIP patch there.\n\n\n> + const char *aggname = get_func_name(aggref->aggfnoid);\n> ...\n> + else if (!strcmp(aggname, \"sum\"))\n>\n> I guess you need a more robust way to detect the supported aggregates\n> than their name, or I guess some way for aggregates themselves to\n> specify that they support this and somehow supply the extra logic.\n> Perhaps I just waid what Greg Stark already said, except not as well.\n>\n\nWe have recognized the issue and welcome any input.\n\n+ elog(ERROR, \"Aggrege function %s is not\n> supported\", aggname);\n>\n> s/Aggrege/aggregate/\n>\n\nI fixed this typo.\n\nOf course it is not helpful to comment on typos at this early stage,\n> it's just that this one appears many times in the test output :-)\n>\n> +static bool\n> +isIvmColumn(const char *s)\n> +{\n> + char pre[7];\n> +\n> + strlcpy(pre, s, sizeof(pre));\n> + return (strcmp(pre, \"__ivm_\") == 0);\n> +}\n>\n> What about strncmp(s, \"__ivm_\", 6) == 0)?\n\n\nI agree with you, I fixed it.\n\nAs for the question of how\n> to reserve a namespace for system columns that won't clash with user\n> columns, according to our manual the SQL standard doesn't allow $ in\n> identifier names, and according to my copy SQL92 \"intermediate SQL\"\n> doesn't allow identifiers that end in an underscore. 
I don't know\n> what the best answer is but we should probably decide on a something\n> based the standard.\n>\n> As for how to make internal columns invisible to SELECT *, previously\n> there have been discussions about doing that using a new flag in\n> pg_attribute:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n>\n> + \"WITH t AS (\"\n> + \" SELECT diff.__ivm_count__,\n> (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> + \", %s\"\n> + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n> (%s)\"\n> + \"), updt AS (\"\n> + \" UPDATE %s AS mv SET __ivm_count__ =\n> mv.__ivm_count__ - t.__ivm_count__\"\n> + \", %s \"\n> + \" FROM t WHERE mv.ctid = t.ctid AND NOT\n> for_dlt\"\n> + \") DELETE FROM %s AS mv USING t WHERE\n> mv.ctid = t.ctid AND for_dlt;\",\n>\n> I fully understand that this is POC code, but I am curious about one\n> thing. These queries that are executed by apply_delta() would need to\n> be converted to C, or at least used reusable plans, right? Hmm,\n> creating and dropping temporary tables every time is a clue that the\n> ultimate form of this should be tuplestores and C code, I think,\n> right?\n>\n\nNagata-san is investigating the issue.\n\n\n> > Moreover, some regression test are added for aggregate functions support.\n> > This is Hoshiai-san's work.\n>\n> Great. 
Next time you post a WIP patch, could you please fix this\n> small compiler warning?\n>\n> describe.c: In function ‘describeOneTableDetails’:\n> describe.c:3270:55: error: ‘*((void *)&tableinfo+48)’ may be used\n> uninitialized in this function [-Werror=maybe-uninitialized]\n> if (verbose && tableinfo.relkind == RELKIND_MATVIEW && tableinfo.isivm)\n> ^\n> describe.c:1495:4: note: ‘*((void *)&tableinfo+48)’ was declared here\n> } tableinfo;\n> ^\n>\n\nIt is fixed too.\n\nThen our unofficial automatic CI system[1] will run these tests every\n> day, which sometimes finds problems.\n>\n> [1] cfbot.cputube.org\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n>\nBest regards,\n\nTakuma Hoshiai",
"msg_date": "Wed, 10 Jul 2019 11:07:15 +0900",
"msg_from": "Takuma Hoshiai <takuma.hoshiai@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 10 Jul 2019 11:07:15 +0900\nTakuma Hoshiai <takuma.hoshiai@gmail.com> wrote:\n\n> Hi Thomas,\n> \n> 2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n> \n> > On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > > Attached is a WIP patch of IVM which supports some aggregate functions.\n> >\n> > Hi Nagata-san and Hoshiai-san,\n> >\n> > Thank you for working on this. I enjoyed your talk at PGCon. I've\n> > added Kevin Grittner just in case he missed this thread; he has talked\n> > often about implementing the counting algorithm, and he wrote the\n> > \"trigger transition tables\" feature to support exactly this. While\n> > integrating trigger transition tables with the new partition features,\n> > we had to make a number of decisions about how that should work, and\n> > we tried to come up with answers that would work for IMV, and I hope\n> > we made the right choices!\n> >\n> > I am quite interested to learn how IVM interacts with SERIALIZABLE.\n> >\n> \n> Nagata-san has been studying this. Nagata-san, any comment?\n> \n> > A couple of superficial review comments:\n> \n> Thank you for your review comments.\n> Please find attached patches. The some of your review is reflected in patch\n> too.\n\nSorry, I forgot to attach the patch.\nIn addition, the avg() function is newly supported. We found an issue\nwhen using avg() with IVM, and added its reproduction case to the\nregression test. 
We are going to fix it now.\n\n> We manage and update IVM on following github repository.\n> https://github.com/sraoss/pgsql-ivm\n> you also can found latest WIP patch here.\n> \n> \n> > + const char *aggname = get_func_name(aggref->aggfnoid);\n> > ...\n> > + else if (!strcmp(aggname, \"sum\"))\n> >\n> > I guess you need a more robust way to detect the supported aggregates\n> > than their name, or I guess some way for aggregates themselves to\n> > specify that they support this and somehow supply the extra logic.\n> > Perhaps I just waid what Greg Stark already said, except not as well.\n> >\n> \n> We have recognized the issue and are welcome any input.\n> \n> > + elog(ERROR, \"Aggrege function %s is not\n> > supported\", aggname);\n> >\n> > s/Aggrege/aggregate/\n> >\n> \n> I fixed this typo.\n> \n> > Of course it is not helpful to comment on typos at this early stage,\n> > it's just that this one appears many times in the test output :-)\n> >\n> > +static bool\n> > +isIvmColumn(const char *s)\n> > +{\n> > + char pre[7];\n> > +\n> > + strlcpy(pre, s, sizeof(pre));\n> > + return (strcmp(pre, \"__ivm_\") == 0);\n> > +}\n> >\n> > What about strncmp(s, \"__ivm_\", 6) == 0)?\n> \n> \n> I agree with you, I fixed it.\n> \n> > As for the question of how\n> > to reserve a namespace for system columns that won't clash with user\n> > columns, according to our manual the SQL standard doesn't allow $ in\n> > identifier names, and according to my copy SQL92 \"intermediate SQL\"\n> > doesn't allow identifiers that end in an underscore. 
I don't know\n> > what the best answer is but we should probably decide on a something\n> > based the standard.\n> >\n> > As for how to make internal columns invisible to SELECT *, previously\n> > there have been discussions about doing that using a new flag in\n> > pg_attribute:\n> >\n> >\n> > https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> >\n> > + \"WITH t AS (\"\n> > + \" SELECT diff.__ivm_count__,\n> > (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> > + \", %s\"\n> > + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n> > (%s)\"\n> > + \"), updt AS (\"\n> > + \" UPDATE %s AS mv SET __ivm_count__ =\n> > mv.__ivm_count__ - t.__ivm_count__\"\n> > + \", %s \"\n> > + \" FROM t WHERE mv.ctid = t.ctid AND NOT\n> > for_dlt\"\n> > + \") DELETE FROM %s AS mv USING t WHERE\n> > mv.ctid = t.ctid AND for_dlt;\",\n> >\n> > I fully understand that this is POC code, but I am curious about one\n> > thing. These queries that are executed by apply_delta() would need to\n> > be converted to C, or at least used reusable plans, right? Hmm,\n> > creating and dropping temporary tables every time is a clue that the\n> > ultimate form of this should be tuplestores and C code, I think,\n> > right?\n> >\n> \n> Nagata-san is investing the issue.\n> \n> \n> > > Moreover, some regression test are added for aggregate functions support.\n> > > This is Hoshiai-san's work.\n> >\n> > Great. 
Next time you post a WIP patch, could you please fix this\n> > small compiler warning?\n> >\n> > describe.c: In function ‘describeOneTableDetails’:\n> > describe.c:3270:55: error: ‘*((void *)&tableinfo+48)’ may be used\n> > uninitialized in this function [-Werror=maybe-uninitialized]\n> > if (verbose && tableinfo.relkind == RELKIND_MATVIEW && tableinfo.isivm)\n> > ^\n> > describe.c:1495:4: note: ‘*((void *)&tableinfo+48)’ was declared here\n> > } tableinfo;\n> > ^\n> >\n> \n> It is fixed too.\n> \n> > Then our unofficial automatic CI system[1] will run these tests every\n> > day, which sometimes finds problems.\n> >\n> > [1] cfbot.cputube.org\n> >\n> > --\n> > Thomas Munro\n> > https://enterprisedb.com\n> >\n> >\n> Best regards,\n> \n> Takuma Hoshiai\n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_date": "Wed, 10 Jul 2019 11:29:38 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> I am quite interested to learn how IVM interacts with SERIALIZABLE.\n\nJust for fun, I have added:\nSET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n\nright after every BEGIN; in incremental_matview.sql in the regression test,\nand it seems it works.\n\n> A couple of superficial review comments:\n> \n> + const char *aggname = get_func_name(aggref->aggfnoid);\n> ...\n> + else if (!strcmp(aggname, \"sum\"))\n> \n> I guess you need a more robust way to detect the supported aggregates\n> than their name, or I guess some way for aggregates themselves to\n> specify that they support this and somehow supply the extra logic.\n> Perhaps I just waid what Greg Stark already said, except not as well.\n\nI guess we could use moving aggregate (or partial aggregate?)\nfunctions for this purpose, but then we need to run the executor directly\nrather than using SPI. It needs more code...\n\n> + \"WITH t AS (\"\n> + \" SELECT diff.__ivm_count__,\n> (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> + \", %s\"\n> + \" FROM %s AS mv, %s AS diff WHERE (%s) = (%s)\"\n> + \"), updt AS (\"\n> + \" UPDATE %s AS mv SET __ivm_count__ =\n> mv.__ivm_count__ - t.__ivm_count__\"\n> + \", %s \"\n> + \" FROM t WHERE mv.ctid = t.ctid AND NOT for_dlt\"\n> + \") DELETE FROM %s AS mv USING t WHERE\n> mv.ctid = t.ctid AND for_dlt;\",\n> \n> I fully understand that this is POC code, but I am curious about one\n> thing. These queries that are executed by apply_delta() would need to\n> be converted to C, or at least used reusable plans, right? Hmm,\n> creating and dropping temporary tables every time is a clue that the\n> ultimate form of this should be tuplestores and C code, I think,\n> right?\n\nYes, we could reuse the temp tables and plans.\n\n> Then our unofficial automatic CI system[1] will run these tests every\n> day, which sometimes finds problems.\n> \n> [1] cfbot.cputube.org\n\nI appreciate that you provide the system.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 11 Jul 2019 11:39:36 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Thomas,\n\nThank you for your review and discussion on this patch!\n\n> > 2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n> > \n> > > On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > > > Attached is a WIP patch of IVM which supports some aggregate functions.\n> > >\n> > > Hi Nagata-san and Hoshiai-san,\n> > >\n> > > Thank you for working on this. I enjoyed your talk at PGCon. I've\n> > > added Kevin Grittner just in case he missed this thread; he has talked\n> > > often about implementing the counting algorithm, and he wrote the\n> > > \"trigger transition tables\" feature to support exactly this. While\n> > > integrating trigger transition tables with the new partition features,\n> > > we had to make a number of decisions about how that should work, and\n> > > we tried to come up with answers that would work for IMV, and I hope\n> > > we made the right choices!\n\nTransition tables is a great feature. I am now using this in my implementation\nof IVM, but the first time I used this feature was when I implemented a PoC\nfor extending view updatability of PostgreSQL[1]. At that time, I didn't know\nthat this feature is made originally aiming to support IVM. \n\n[1] https://www.pgcon.org/2017/schedule/events/1074.en.html\n\nI think transition tables is a good choice to implement a statement level\nimmediate view maintenance where materialized views are refreshed in a statement\nlevel after trigger. However, when implementing a transaction level immediate\nview maintenance where views are refreshed per transaction, or deferred view\nmaintenance, we can't update views in a after trigger, and we will need an\ninfrastructure to manage change logs of base tables. 
Transition tables can be\nused to collect these logs, but using logical decoding of WAL is another candidate.\nIn any case, if these logs can be collected in a tuplestore, we might be able to\nconvert this to an \"ephemeral named relation (ENR)\" and use this to calculate diff\nsets for views.\n\n> > >\n> > > I am quite interested to learn how IVM interacts with SERIALIZABLE.\n> > >\n> > \n> > Nagata-san has been studying this. Nagata-san, any comment?\n\nAt the SERIALIZABLE or REPEATABLE READ level, table changes that occurred in other \ntransactions are not visible, so views cannot be maintained correctly in AFTER\ntriggers. If a view is defined on two tables and each table is modified in\ndifferent concurrent transactions respectively, the result of view maintenance\ndone in trigger functions can be incorrect due to the race condition. This is the\nreason why such transactions are aborted immediately in that case in my current\nimplementation.\n\nOne idea to resolve this is performing view maintenance in two phases. Firstly, \nviews are updated using only the table changes visible in this transaction. Then, \njust after this transaction is committed, views have to be updated additionally \nusing changes that happened in other transactions to keep consistency. This is just an \nidea, but to implement it, I think we will need to keep and \nmaintain change logs.\n\n> > > A couple of superficial review comments:\n\n\n \n> > > + const char *aggname = get_func_name(aggref->aggfnoid);\n> > > ...\n> > > + else if (!strcmp(aggname, \"sum\"))\n> > >\n> > > I guess you need a more robust way to detect the supported aggregates\n> > > than their name, or I guess some way for aggregates themselves to\n> > > specify that they support this and somehow supply the extra logic.\n> > > Perhaps I just waid what Greg Stark already said, except not as well.\n\nYes. Using names is not robust because users can make aggregates with the same name, like \nsum(text) (although I am not sure this makes sense). 
We can use oids instead \nof names, but it would be nice to extend pg_aggregate and add new attributes \nto indicate that an aggregate supports IVM and to provide functions for the IVM logic.\n\n> > > As for the question of how\n> > > to reserve a namespace for system columns that won't clash with user\n> > > columns, according to our manual the SQL standard doesn't allow $ in\n> > > identifier names, and according to my copy SQL92 \"intermediate SQL\"\n> > > doesn't allow identifiers that end in an underscore. I don't know\n> > > what the best answer is but we should probably decide on a something\n> > > based the standard.\n\nOk, so we should use \"__ivm_count__\" since this ends in \"_\" at least.\n\nAnother idea is that users specify the name of this special column when \ndefining materialized views with IVM support. This way the conflict can be avoided, \nbecause users will specify a name which does not appear in the target list.\n\nAs for aggregate support, it may also be possible to make it a restriction \nthat count(expr) must be in the target list explicitly when sum(expr) or \navg(expr) is included, instead of making a hidden column like __ivm_count_sum__,\nas Oracle does.\n\n> > >\n> > > As for how to make internal columns invisible to SELECT *, previously\n> > > there have been discussions about doing that using a new flag in\n> > > pg_attribute:\n> > >\n> > >\n> > > https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n\nI agree with implementing this feature in PostgreSQL since there are at least a few\nuse cases: IVM and temporal databases.\n\n> > >\n> > > + \"WITH t AS (\"\n> > > + \" SELECT diff.__ivm_count__,\n> > > (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> > > + \", %s\"\n> > > + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n> > > (%s)\"\n> > > + \"), updt AS (\"\n> > > + \" UPDATE %s AS mv SET __ivm_count__ =\n> > > mv.__ivm_count__ - t.__ivm_count__\"\n> > > + \", %s \"\n> > > + 
\" FROM t WHERE mv.ctid = t.ctid AND NOT\n> > > for_dlt\"\n> > > + \") DELETE FROM %s AS mv USING t WHERE\n> > > mv.ctid = t.ctid AND for_dlt;\",\n> > >\n> > > I fully understand that this is POC code, but I am curious about one\n> > > thing. These queries that are executed by apply_delta() would need to\n> > > be converted to C, or at least used reusable plans, right? Hmm,\n> > > creating and dropping temporary tables every time is a clue that the\n> > > ultimate form of this should be tuplestores and C code, I think,\n> > > right?\n\nI used SPI just because REFRESH CONCURRENTLY uses this, but, as you said,\nit is inefficient to create/drop temp tables and perform parse/plan every times.\nIt seems to be enough to perform this once when creating materialized views or \nat the first maintenance time.\n\n\nBest regards,\nYugo Nagata\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 11 Jul 2019 13:28:04 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
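The counting algorithm discussed in the messages above keeps a hidden multiplicity column (`__ivm_count__`) per view row and applies deltas with an UPDATE-or-DELETE, as in the quoted `WITH t AS (...)` query. The following Python sketch illustrates the same idea with an in-memory bag; the function and variable names are illustrative only and do not come from the patch itself.

```python
# Hypothetical sketch of the counting algorithm for incremental view
# maintenance: each distinct view row carries a multiplicity, as the
# patch's hidden __ivm_count__ column does.
from collections import Counter

def apply_delta(view: Counter, inserted=None, deleted=None):
    """Apply a base-table delta to a materialized view kept as a bag.

    Mirrors the quoted WITH ... UPDATE ... DELETE query: a deleted row
    whose multiplicity drops to zero is removed (the for_dlt case),
    otherwise its count is decremented; inserted rows gain a copy.
    """
    for row in (deleted or []):
        view[row] -= 1
        if view[row] <= 0:
            del view[row]          # for_dlt case: count reached zero
    for row in (inserted or []):
        view[row] += 1             # new or existing row gains a copy
    return view

# Example: a one-column view with duplicate rows.
view = Counter({("a",): 2, ("b",): 1})
apply_delta(view, deleted=[("a",), ("b",)], inserted=[("c",)])
# view is now {("a",): 1, ("c",): 1}: one "a" survives, "b" is gone.
```

The real patch of course does this in SQL over the matview and a diff relation; the bag here just makes the count bookkeeping visible.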
{
"msg_contents": "Hi,\n\nI've updated the wiki page of Incremental View Maintenance.\n\nhttps://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\nOn Thu, 11 Jul 2019 13:28:04 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi Thomas,\n> \n> Thank you for your review and discussion on this patch!\n> \n> > > 2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n> > > \n> > > > On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > > > > Attached is a WIP patch of IVM which supports some aggregate functions.\n> > > >\n> > > > Hi Nagata-san and Hoshiai-san,\n> > > >\n> > > > Thank you for working on this. I enjoyed your talk at PGCon. I've\n> > > > added Kevin Grittner just in case he missed this thread; he has talked\n> > > > often about implementing the counting algorithm, and he wrote the\n> > > > \"trigger transition tables\" feature to support exactly this. While\n> > > > integrating trigger transition tables with the new partition features,\n> > > > we had to make a number of decisions about how that should work, and\n> > > > we tried to come up with answers that would work for IMV, and I hope\n> > > > we made the right choices!\n> \n> Transition tables is a great feature. I am now using this in my implementation\n> of IVM, but the first time I used this feature was when I implemented a PoC\n> for extending view updatability of PostgreSQL[1]. At that time, I didn't know\n> that this feature is made originally aiming to support IVM. \n> \n> [1] https://www.pgcon.org/2017/schedule/events/1074.en.html\n> \n> I think transition tables is a good choice to implement a statement level\n> immediate view maintenance where materialized views are refreshed in a statement\n> level after trigger. 
However, when implementing a transaction level immediate\n> view maintenance where views are refreshed per transaction, or deferred view\n> maintenance, we can't update views in a after trigger, and we will need an\n> infrastructure to manage change logs of base tables. Transition tables can be\n> used to collect these logs, but using logical decoding of WAL is another candidate.\n> In any way, if these logs can be collected in a tuplestore, we might able to\n> convert this to \"ephemeral named relation (ENR)\" and use this to calculate diff\n> sets for views.\n> \n> > > >\n> > > > I am quite interested to learn how IVM interacts with SERIALIZABLE.\n> > > >\n> > > \n> > > Nagata-san has been studying this. Nagata-san, any comment?\n> \n> In SERIALIZABLE or REPEATABLE READ level, table changes occurred in other \n> ransactions are not visible, so views can not be maintained correctly in AFTER\n> triggers. If a view is defined on two tables and each table is modified in\n> different concurrent transactions respectively, the result of view maintenance\n> done in trigger functions can be incorrect due to the race condition. This is the\n> reason why such transactions are aborted immediately in that case in my current\n> implementation.\n> \n> One idea to resolve this is performing view maintenance in two phases. Firstly, \n> views are updated using only table changes visible in this transaction. Then, \n> just after this transaction is committed, views have to be updated additionally \n> using changes happened in other transactions to keep consistency. 
This is a just \n> idea, but to implement this idea, I think, we will need keep to keep and \n> maintain change logs.\n> \n> > > > A couple of superficial review comments:\n> \n> \n> \n> > > > + const char *aggname = get_func_name(aggref->aggfnoid);\n> > > > ...\n> > > > + else if (!strcmp(aggname, \"sum\"))\n> > > >\n> > > > I guess you need a more robust way to detect the supported aggregates\n> > > > than their name, or I guess some way for aggregates themselves to\n> > > > specify that they support this and somehow supply the extra logic.\n> > > > Perhaps I just waid what Greg Stark already said, except not as well.\n> \n> Yes. Using name is not robust because users can make same name aggregates like \n> sum(text) (although I am not sure this makes sense). We can use oids instead \n> of names, but it would be nice to extend pg_aggregate and add new attributes \n> for informing that this supports IVM and for providing functions for IVM logic.\n> \n> > > > As for the question of how\n> > > > to reserve a namespace for system columns that won't clash with user\n> > > > columns, according to our manual the SQL standard doesn't allow $ in\n> > > > identifier names, and according to my copy SQL92 \"intermediate SQL\"\n> > > > doesn't allow identifiers that end in an underscore. I don't know\n> > > > what the best answer is but we should probably decide on a something\n> > > > based the standard.\n> \n> Ok, so we should use \"__ivm_count__\" since this ends in \"_\" at least.\n> \n> Another idea is that users specify the name of this special column when \n> defining materialized views with IVM support. 
This way can avoid the conflict \n> because users will specify a name which does not appear in the target list.\n> \n> As for aggregates supports, it may be also possible to make it a restriction \n> that count(expr) must be in the target list explicitly when sum(expr) or \n> avg(expr) is included, instead of making hidden column like __ivm_count_sum__,\n> like Oracle does.\n> \n> > > >\n> > > > As for how to make internal columns invisible to SELECT *, previously\n> > > > there have been discussions about doing that using a new flag in\n> > > > pg_attribute:\n> > > >\n> > > >\n> > > > https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> \n> I agree implementing this feature in PostgreSQL since there are at least a few\n> use cases, IVM and temporal database.\n> \n> > > >\n> > > > + \"WITH t AS (\"\n> > > > + \" SELECT diff.__ivm_count__,\n> > > > (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> > > > + \", %s\"\n> > > > + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n> > > > (%s)\"\n> > > > + \"), updt AS (\"\n> > > > + \" UPDATE %s AS mv SET __ivm_count__ =\n> > > > mv.__ivm_count__ - t.__ivm_count__\"\n> > > > + \", %s \"\n> > > > + \" FROM t WHERE mv.ctid = t.ctid AND NOT\n> > > > for_dlt\"\n> > > > + \") DELETE FROM %s AS mv USING t WHERE\n> > > > mv.ctid = t.ctid AND for_dlt;\",\n> > > >\n> > > > I fully understand that this is POC code, but I am curious about one\n> > > > thing. These queries that are executed by apply_delta() would need to\n> > > > be converted to C, or at least used reusable plans, right? 
Hmm,\n> > > > creating and dropping temporary tables every time is a clue that the\n> > > > ultimate form of this should be tuplestores and C code, I think,\n> > > > right?\n> \n> I used SPI just because REFRESH CONCURRENTLY uses this, but, as you said,\n> it is inefficient to create/drop temp tables and perform parse/plan every times.\n> It seems to be enough to perform this once when creating materialized views or \n> at the first maintenance time.\n> \n> \n> Best regards,\n> Yugo Nagata\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 26 Jul 2019 11:35:53 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the latest patch for supporting min and max aggregate functions.\n\nWhen new tuples are inserted into base tables, if new values are small\n(for min) or large (for max), matview just have to be updated with these\nnew values. Otherwise, old values just remains.\n \nHowever, in the case of deletion, this is more complicated. If deleted\nvalues exists in matview as current min or max, we have to recomputate\nnew min or max values from base tables for affected groups, and matview\nshould be updated with these recomputated values. \n\nAlso, regression tests for min/max are also added.\n\nIn addition, incremental update algorithm of avg aggregate values is a bit\nimproved. If an avg result in materialized views is updated incrementally\ny using the old avg value, numerical errors in avg values are accumulated\nand the values get wrong eventually. To prevent this, both of sum and count\nvalues are contained in views as hidden columns and use them to calculate\nnew avg value instead of using old avg values.\n\nRegards,\n\nOn Fri, 26 Jul 2019 11:35:53 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> I've updated the wiki page of Incremental View Maintenance.\n> \n> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n> \n> On Thu, 11 Jul 2019 13:28:04 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > Hi Thomas,\n> > \n> > Thank you for your review and discussion on this patch!\n> > \n> > > > 2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n> > > > \n> > > > > On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > > > > > Attached is a WIP patch of IVM which supports some aggregate functions.\n> > > > >\n> > > > > Hi Nagata-san and Hoshiai-san,\n> > > > >\n> > > > > Thank you for working on this. I enjoyed your talk at PGCon. 
I've\n> > > > > added Kevin Grittner just in case he missed this thread; he has talked\n> > > > > often about implementing the counting algorithm, and he wrote the\n> > > > > \"trigger transition tables\" feature to support exactly this. While\n> > > > > integrating trigger transition tables with the new partition features,\n> > > > > we had to make a number of decisions about how that should work, and\n> > > > > we tried to come up with answers that would work for IMV, and I hope\n> > > > > we made the right choices!\n> > \n> > Transition tables is a great feature. I am now using this in my implementation\n> > of IVM, but the first time I used this feature was when I implemented a PoC\n> > for extending view updatability of PostgreSQL[1]. At that time, I didn't know\n> > that this feature is made originally aiming to support IVM. \n> > \n> > [1] https://www.pgcon.org/2017/schedule/events/1074.en.html\n> > \n> > I think transition tables is a good choice to implement a statement level\n> > immediate view maintenance where materialized views are refreshed in a statement\n> > level after trigger. However, when implementing a transaction level immediate\n> > view maintenance where views are refreshed per transaction, or deferred view\n> > maintenance, we can't update views in a after trigger, and we will need an\n> > infrastructure to manage change logs of base tables. Transition tables can be\n> > used to collect these logs, but using logical decoding of WAL is another candidate.\n> > In any way, if these logs can be collected in a tuplestore, we might able to\n> > convert this to \"ephemeral named relation (ENR)\" and use this to calculate diff\n> > sets for views.\n> > \n> > > > >\n> > > > > I am quite interested to learn how IVM interacts with SERIALIZABLE.\n> > > > >\n> > > > \n> > > > Nagata-san has been studying this. 
Nagata-san, any comment?\n> > \n> > In SERIALIZABLE or REPEATABLE READ level, table changes occurred in other \n> > ransactions are not visible, so views can not be maintained correctly in AFTER\n> > triggers. If a view is defined on two tables and each table is modified in\n> > different concurrent transactions respectively, the result of view maintenance\n> > done in trigger functions can be incorrect due to the race condition. This is the\n> > reason why such transactions are aborted immediately in that case in my current\n> > implementation.\n> > \n> > One idea to resolve this is performing view maintenance in two phases. Firstly, \n> > views are updated using only table changes visible in this transaction. Then, \n> > just after this transaction is committed, views have to be updated additionally \n> > using changes happened in other transactions to keep consistency. This is a just \n> > idea, but to implement this idea, I think, we will need keep to keep and \n> > maintain change logs.\n> > \n> > > > > A couple of superficial review comments:\n> > \n> > \n> > \n> > > > > + const char *aggname = get_func_name(aggref->aggfnoid);\n> > > > > ...\n> > > > > + else if (!strcmp(aggname, \"sum\"))\n> > > > >\n> > > > > I guess you need a more robust way to detect the supported aggregates\n> > > > > than their name, or I guess some way for aggregates themselves to\n> > > > > specify that they support this and somehow supply the extra logic.\n> > > > > Perhaps I just waid what Greg Stark already said, except not as well.\n> > \n> > Yes. Using name is not robust because users can make same name aggregates like \n> > sum(text) (although I am not sure this makes sense). 
We can use oids instead \n> > of names, but it would be nice to extend pg_aggregate and add new attributes \n> > for informing that this supports IVM and for providing functions for IVM logic.\n> > \n> > > > > As for the question of how\n> > > > > to reserve a namespace for system columns that won't clash with user\n> > > > > columns, according to our manual the SQL standard doesn't allow $ in\n> > > > > identifier names, and according to my copy SQL92 \"intermediate SQL\"\n> > > > > doesn't allow identifiers that end in an underscore. I don't know\n> > > > > what the best answer is but we should probably decide on a something\n> > > > > based the standard.\n> > \n> > Ok, so we should use \"__ivm_count__\" since this ends in \"_\" at least.\n> > \n> > Another idea is that users specify the name of this special column when \n> > defining materialized views with IVM support. This way can avoid the conflict \n> > because users will specify a name which does not appear in the target list.\n> > \n> > As for aggregates supports, it may be also possible to make it a restriction \n> > that count(expr) must be in the target list explicitly when sum(expr) or \n> > avg(expr) is included, instead of making hidden column like __ivm_count_sum__,\n> > like Oracle does.\n> > \n> > > > >\n> > > > > As for how to make internal columns invisible to SELECT *, previously\n> > > > > there have been discussions about doing that using a new flag in\n> > > > > pg_attribute:\n> > > > >\n> > > > >\n> > > > > https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > \n> > I agree implementing this feature in PostgreSQL since there are at least a few\n> > use cases, IVM and temporal database.\n> > \n> > > > >\n> > > > > + \"WITH t AS (\"\n> > > > > + \" SELECT diff.__ivm_count__,\n> > > > > (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n> > > > > + \", %s\"\n> > > > > + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n> > 
> > > (%s)\"\n> > > > > + \"), updt AS (\"\n> > > > > + \" UPDATE %s AS mv SET __ivm_count__ =\n> > > > > mv.__ivm_count__ - t.__ivm_count__\"\n> > > > > + \", %s \"\n> > > > > + \" FROM t WHERE mv.ctid = t.ctid AND NOT\n> > > > > for_dlt\"\n> > > > > + \") DELETE FROM %s AS mv USING t WHERE\n> > > > > mv.ctid = t.ctid AND for_dlt;\",\n> > > > >\n> > > > > I fully understand that this is POC code, but I am curious about one\n> > > > > thing. These queries that are executed by apply_delta() would need to\n> > > > > be converted to C, or at least used reusable plans, right? Hmm,\n> > > > > creating and dropping temporary tables every time is a clue that the\n> > > > > ultimate form of this should be tuplestores and C code, I think,\n> > > > > right?\n> > \n> > I used SPI just because REFRESH CONCURRENTLY uses this, but, as you said,\n> > it is inefficient to create/drop temp tables and perform parse/plan every times.\n> > It seems to be enough to perform this once when creating materialized views or \n> > at the first maintenance time.\n> > \n> > \n> > Best regards,\n> > Yugo Nagata\n> > \n> > \n> > -- \n> > Yugo Nagata <nagata@sraoss.co.jp>\n> > \n> > \n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Wed, 31 Jul 2019 18:08:51 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
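The min/max maintenance rule described in the message above is asymmetric: an insert only needs a comparison against the stored extremum, but deleting the current min (or max) forces a recomputation from the base tables for the affected group. A minimal Python sketch of the min case, with illustrative names not taken from the patch:

```python
# Hypothetical sketch of per-group min maintenance, as described above:
# inserts are cheap (compare with the stored min), but deleting the
# current min forces a recomputation from the base table for that group.

def on_insert(view_min, group, value):
    """Update the stored per-group min after an insert."""
    if group not in view_min or value < view_min[group]:
        view_min[group] = value

def on_delete(view_min, group, value, base_rows):
    """Update the stored per-group min after a delete.

    base_rows is the base table *after* the delete; it is only
    consulted when the deleted value was the current min.
    """
    if view_min.get(group) == value:
        remaining = [v for g, v in base_rows if g == group]
        if remaining:
            view_min[group] = min(remaining)   # recompute for the group
        else:
            del view_min[group]                # group disappeared

base = [("g1", 3), ("g1", 5), ("g1", 1)]
mins = {}
for g, v in base:
    on_insert(mins, g, v)
# mins == {"g1": 1}; deleting the current min triggers recomputation:
base.remove(("g1", 1))
on_delete(mins, "g1", 1, base)
# mins == {"g1": 3}
```

The max case is symmetric (flip the comparison and use max()); the patch performs the recomputation step with SQL against the base tables rather than an in-memory scan.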
{
"msg_contents": "It's not mentioned below but some bugs including seg fault when\n--enable-casser is enabled was also fixed in this patch.\n\nBTW, I found a bug with min/max support in this patch and I believe\nYugo is working on it. Details:\nhttps://github.com/sraoss/pgsql-ivm/issues/20\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\nFrom: Yugo Nagata <nagata@sraoss.co.jp>\nSubject: Re: Implementing Incremental View Maintenance\nDate: Wed, 31 Jul 2019 18:08:51 +0900\nMessage-ID: <20190731180851.73856441d8abb494bf5e68e7@sraoss.co.jp>\n\n> Hi,\n> \n> Attached is the latest patch for supporting min and max aggregate functions.\n> \n> When new tuples are inserted into base tables, if new values are small\n> (for min) or large (for max), matview just have to be updated with these\n> new values. Otherwise, old values just remains.\n> \n> However, in the case of deletion, this is more complicated. If deleted\n> values exists in matview as current min or max, we have to recomputate\n> new min or max values from base tables for affected groups, and matview\n> should be updated with these recomputated values. \n> \n> Also, regression tests for min/max are also added.\n> \n> In addition, incremental update algorithm of avg aggregate values is a bit\n> improved. If an avg result in materialized views is updated incrementally\n> y using the old avg value, numerical errors in avg values are accumulated\n> and the values get wrong eventually. 
To prevent this, both of sum and count\n> values are contained in views as hidden columns and use them to calculate\n> new avg value instead of using old avg values.\n> \n> Regards,\n> \n> On Fri, 26 Jul 2019 11:35:53 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n>> Hi,\n>> \n>> I've updated the wiki page of Incremental View Maintenance.\n>> \n>> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n>> \n>> On Thu, 11 Jul 2019 13:28:04 +0900\n>> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>> \n>> > Hi Thomas,\n>> > \n>> > Thank you for your review and discussion on this patch!\n>> > \n>> > > > 2019年7月8日(月) 15:32 Thomas Munro <thomas.munro@gmail.com>:\n>> > > > \n>> > > > > On Fri, Jun 28, 2019 at 10:56 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>> > > > > > Attached is a WIP patch of IVM which supports some aggregate functions.\n>> > > > >\n>> > > > > Hi Nagata-san and Hoshiai-san,\n>> > > > >\n>> > > > > Thank you for working on this. I enjoyed your talk at PGCon. I've\n>> > > > > added Kevin Grittner just in case he missed this thread; he has talked\n>> > > > > often about implementing the counting algorithm, and he wrote the\n>> > > > > \"trigger transition tables\" feature to support exactly this. While\n>> > > > > integrating trigger transition tables with the new partition features,\n>> > > > > we had to make a number of decisions about how that should work, and\n>> > > > > we tried to come up with answers that would work for IMV, and I hope\n>> > > > > we made the right choices!\n>> > \n>> > Transition tables is a great feature. I am now using this in my implementation\n>> > of IVM, but the first time I used this feature was when I implemented a PoC\n>> > for extending view updatability of PostgreSQL[1]. At that time, I didn't know\n>> > that this feature is made originally aiming to support IVM. 
\n>> > \n>> > [1] https://www.pgcon.org/2017/schedule/events/1074.en.html\n>> > \n>> > I think transition tables is a good choice to implement a statement level\n>> > immediate view maintenance where materialized views are refreshed in a statement\n>> > level after trigger. However, when implementing a transaction level immediate\n>> > view maintenance where views are refreshed per transaction, or deferred view\n>> > maintenance, we can't update views in a after trigger, and we will need an\n>> > infrastructure to manage change logs of base tables. Transition tables can be\n>> > used to collect these logs, but using logical decoding of WAL is another candidate.\n>> > In any way, if these logs can be collected in a tuplestore, we might able to\n>> > convert this to \"ephemeral named relation (ENR)\" and use this to calculate diff\n>> > sets for views.\n>> > \n>> > > > >\n>> > > > > I am quite interested to learn how IVM interacts with SERIALIZABLE.\n>> > > > >\n>> > > > \n>> > > > Nagata-san has been studying this. Nagata-san, any comment?\n>> > \n>> > In SERIALIZABLE or REPEATABLE READ level, table changes occurred in other \n>> > ransactions are not visible, so views can not be maintained correctly in AFTER\n>> > triggers. If a view is defined on two tables and each table is modified in\n>> > different concurrent transactions respectively, the result of view maintenance\n>> > done in trigger functions can be incorrect due to the race condition. This is the\n>> > reason why such transactions are aborted immediately in that case in my current\n>> > implementation.\n>> > \n>> > One idea to resolve this is performing view maintenance in two phases. Firstly, \n>> > views are updated using only table changes visible in this transaction. Then, \n>> > just after this transaction is committed, views have to be updated additionally \n>> > using changes happened in other transactions to keep consistency. 
This is a just \n>> > idea, but to implement this idea, I think, we will need keep to keep and \n>> > maintain change logs.\n>> > \n>> > > > > A couple of superficial review comments:\n>> > \n>> > \n>> > \n>> > > > > + const char *aggname = get_func_name(aggref->aggfnoid);\n>> > > > > ...\n>> > > > > + else if (!strcmp(aggname, \"sum\"))\n>> > > > >\n>> > > > > I guess you need a more robust way to detect the supported aggregates\n>> > > > > than their name, or I guess some way for aggregates themselves to\n>> > > > > specify that they support this and somehow supply the extra logic.\n>> > > > > Perhaps I just waid what Greg Stark already said, except not as well.\n>> > \n>> > Yes. Using name is not robust because users can make same name aggregates like \n>> > sum(text) (although I am not sure this makes sense). We can use oids instead \n>> > of names, but it would be nice to extend pg_aggregate and add new attributes \n>> > for informing that this supports IVM and for providing functions for IVM logic.\n>> > \n>> > > > > As for the question of how\n>> > > > > to reserve a namespace for system columns that won't clash with user\n>> > > > > columns, according to our manual the SQL standard doesn't allow $ in\n>> > > > > identifier names, and according to my copy SQL92 \"intermediate SQL\"\n>> > > > > doesn't allow identifiers that end in an underscore. I don't know\n>> > > > > what the best answer is but we should probably decide on a something\n>> > > > > based the standard.\n>> > \n>> > Ok, so we should use \"__ivm_count__\" since this ends in \"_\" at least.\n>> > \n>> > Another idea is that users specify the name of this special column when \n>> > defining materialized views with IVM support. 
This way can avoid the conflict \n>> > because users will specify a name which does not appear in the target list.\n>> > \n>> > As for aggregates supports, it may be also possible to make it a restriction \n>> > that count(expr) must be in the target list explicitly when sum(expr) or \n>> > avg(expr) is included, instead of making hidden column like __ivm_count_sum__,\n>> > like Oracle does.\n>> > \n>> > > > >\n>> > > > > As for how to make internal columns invisible to SELECT *, previously\n>> > > > > there have been discussions about doing that using a new flag in\n>> > > > > pg_attribute:\n>> > > > >\n>> > > > >\n>> > > > > https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n>> > \n>> > I agree implementing this feature in PostgreSQL since there are at least a few\n>> > use cases, IVM and temporal database.\n>> > \n>> > > > >\n>> > > > > + \"WITH t AS (\"\n>> > > > > + \" SELECT diff.__ivm_count__,\n>> > > > > (diff.__ivm_count__ = mv.__ivm_count__) AS for_dlt, mv.ctid\"\n>> > > > > + \", %s\"\n>> > > > > + \" FROM %s AS mv, %s AS diff WHERE (%s) =\n>> > > > > (%s)\"\n>> > > > > + \"), updt AS (\"\n>> > > > > + \" UPDATE %s AS mv SET __ivm_count__ =\n>> > > > > mv.__ivm_count__ - t.__ivm_count__\"\n>> > > > > + \", %s \"\n>> > > > > + \" FROM t WHERE mv.ctid = t.ctid AND NOT\n>> > > > > for_dlt\"\n>> > > > > + \") DELETE FROM %s AS mv USING t WHERE\n>> > > > > mv.ctid = t.ctid AND for_dlt;\",\n>> > > > >\n>> > > > > I fully understand that this is POC code, but I am curious about one\n>> > > > > thing. These queries that are executed by apply_delta() would need to\n>> > > > > be converted to C, or at least used reusable plans, right? 
Hmm,\n>> > > > > creating and dropping temporary tables every time is a clue that the\n>> > > > > ultimate form of this should be tuplestores and C code, I think,\n>> > > > > right?\n>> > \n>> > I used SPI just because REFRESH CONCURRENTLY uses this, but, as you said,\n>> > it is inefficient to create/drop temp tables and perform parse/plan every times.\n>> > It seems to be enough to perform this once when creating materialized views or \n>> > at the first maintenance time.\n>> > \n>> > \n>> > Best regards,\n>> > Yugo Nagata\n>> > \n>> > \n>> > -- \n>> > Yugo Nagata <nagata@sraoss.co.jp>\n>> > \n>> > \n>> \n>> \n>> -- \n>> Yugo Nagata <nagata@sraoss.co.jp>\n>> \n>> \n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 06 Aug 2019 09:25:02 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
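The counting algorithm discussed above (a hidden __ivm_count__ column that is decremented by the quoted UPDATE branch and whose rows are removed by the DELETE branch when the count would reach zero) can be modeled in a few lines of Python. This is only an illustrative sketch of the semantics, not the patch's actual SPI/C implementation, and all names here are invented:

```python
from collections import Counter

def apply_delete_delta(view, delta):
    """Model of the quoted maintenance query: for each deleted tuple,
    subtract its multiplicity from __ivm_count__ (the UPDATE branch)
    and remove rows whose count would drop to zero (the DELETE branch)."""
    for row, n in delta.items():
        remaining = view.get(row, 0) - n
        if remaining > 0:
            view[row] = remaining   # UPDATE mv SET __ivm_count__ = count - n
        else:
            view.pop(row, None)     # DELETE FROM mv ... WHERE for_dlt
    return view

# A view with two rows: one with multiplicity 2, one with multiplicity 1.
view = Counter({("pref_a",): 2, ("pref_b",): 1})
apply_delete_delta(view, Counter({("pref_a",): 1, ("pref_b",): 1}))
print(dict(view))  # → {('pref_a',): 1}
```

The point of the counter is exactly what the thread describes: a duplicate row in the view must survive deletion of one of its source tuples, so plain DELETE is not enough.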
{
"msg_contents": "On 2019-Aug-06, Tatsuo Ishii wrote:\n\n> It's not mentioned below but some bugs including seg fault when\n> --enable-casser is enabled was also fixed in this patch.\n> \n> BTW, I found a bug with min/max support in this patch and I believe\n> Yugo is working on it. Details:\n> https://github.com/sraoss/pgsql-ivm/issues/20\n\nSo is he posting an updated patch soon?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 12:19:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> On 2019-Aug-06, Tatsuo Ishii wrote:\n> \n>> It's not mentioned below but some bugs including seg fault when\n>> --enable-casser is enabled was also fixed in this patch.\n>> \n>> BTW, I found a bug with min/max support in this patch and I believe\n>> Yugo is working on it. Details:\n>> https://github.com/sraoss/pgsql-ivm/issues/20\n> \n> So is he posting an updated patch soon?\n\nI think he is going to post an updated patch by the end of this month.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 17 Sep 2019 11:49:13 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the latest patch for supporting self-join views. This also\nincluding the following fix mentioned by Tatsuo Ishii.\n\n> > On 2019-Aug-06, Tatsuo Ishii wrote:\n> > \n> >> It's not mentioned below but some bugs including seg fault when\n> >> --enable-casser is enabled was also fixed in this patch.\n> >> \n> >> BTW, I found a bug with min/max support in this patch and I believe\n> >> Yugo is working on it. Details:\n> >> https://github.com/sraoss/pgsql-ivm/issues/20\n\nThis patch allows to support self-join views, simultaneous updates of more\nthan one base tables, and also multiple updates of the same base table. \nI first tried to support just self-join, but I found that this is essentially\nsame as to support simultaneous table updates, so I decided to support them in \nthe same commit. I think this will be a base for implementing\nDeferred-maintenance in future.\n\n\n\nIn the new implementation, AFTER triggers are used to collecting tuplestores \ncontaining transition table contents. When multiple tables are changed, \nmultiple AFTER triggers are invoked, then the final AFTER trigger performs \nactual update of the matview. In addition AFTER trigger, also BEFORE trigger\nis used to handle global information for view maintenance. \n\nFor example, suppose that we have a view V joining table R,S, and new tuples are\ninserted to each table, dR,dS, and dT respectively.\n\n V = R*S*T\n R_new = R + dR\n S_new = S + dS\n T_new = T + dT\n\nIn this situation, we can calculate the new view state as bellow.\n\nV_new \n= R_new * S_new * T_new\n= (R + dR) * (S + dS) * (T + dT)\n= R*S*T + dR*(S + dS)*(T + dT) + R*dS*(T + dT) + R*S*dT\n= V + dR*(S + dS)*(T + dT) + R*dS*(T + dT) + R*S*dT\n= V + (dR *S_new*T_new) + (R*dS*T_new) + (R*S*dT)\n\nTo calculate view deltas, we need both pre-state (R,S, and T) and post-state \n(R_new, S_new, and T_new) of base tables. 
\n\nPost-update states are available in the AFTER trigger, and we calculate pre-update\nstates by filtering inserted tuples using cmin/xmin system columns, and appending\ndeleted tuples which are contained in an old transition table.\n\nIn the original core implementation, tuplestores of transition tables were \nfreed for each query depth. However, we want to prolong their lifetime because\nwe have to preserve these for a whole query assuming some base tables are changed\nin other trigger functions, so I added a hack to trigger.c.\n\nRegression tests are also added for self-join views, multiple changes on the same\ntable, simultaneous changes to two tables, and foreign reference constraints.\n\nHere are behavior examples:\n\n1. Table definition\n- t: for self-join\n- r,s: for 2-way join\n\nCREATE TABLE r (i int, v int);\nCREATE TABLE\nCREATE TABLE s (i int, v int);\nCREATE TABLE\nCREATE TABLE t (i int, v int);\nCREATE TABLE\n\n2. Initial data\n\nINSERT INTO r VALUES (1, 10), (2, 20), (3, 30);\nINSERT 0 3\nINSERT INTO s VALUES (1, 100), (2, 200), (3, 300);\nINSERT 0 3\nINSERT INTO t VALUES (1, 10), (2, 20), (3, 30);\nINSERT 0 3\n\n3. View definition\n\n3.1. self-join (mv_self, v_self)\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_self(v1, v2) AS\n SELECT t1.v, t2.v FROM t t1 JOIN t t2 ON t1.i = t2.i;\nSELECT 3\nCREATE VIEW v_self(v1, v2) AS\n SELECT t1.v, t2.v FROM t t1 JOIN t t2 ON t1.i = t2.i;\nCREATE VIEW\n\n3.2. 2-way join (mv, v)\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv(v1, v2) AS\n SELECT r.v, s.v FROM r JOIN s USING(i);\nSELECT 3\nCREATE VIEW v(v1, v2) AS\n SELECT r.v, s.v FROM r JOIN s USING(i);\nCREATE VIEW\n\n3.3 Initial contents\n\nSELECT * FROM mv_self ORDER BY v1;\n v1 | v2 \n----+----\n 10 | 10\n 20 | 20\n 30 | 30\n(3 rows)\n\nSELECT * FROM mv ORDER BY v1;\n v1 | v2 \n----+-----\n 10 | 100\n 20 | 200\n 30 | 300\n(3 rows)\n\n4. 
Update a base table for the self-join view\n\nINSERT INTO t VALUES (4,40);\nINSERT 0 1\nDELETE FROM t WHERE i = 1;\nDELETE 1\nUPDATE t SET v = v*10 WHERE i=2;\nUPDATE 1\n\n4.1. Results\n- Comparison with the normal view\n\nSELECT * FROM mv_self ORDER BY v1;\n v1 | v2 \n-----+-----\n 30 | 30\n 40 | 40\n 200 | 200\n(3 rows)\n\nSELECT * FROM v_self ORDER BY v1;\n v1 | v2 \n-----+-----\n 30 | 30\n 40 | 40\n 200 | 200\n(3 rows)\n\n5. Update a base table for the 2-way join view\n\nWITH\n ins_r AS (INSERT INTO r VALUES (1,11) RETURNING 1),\n ins_r2 AS (INSERT INTO r VALUES (3,33) RETURNING 1),\n ins_s AS (INSERT INTO s VALUES (2,222) RETURNING 1),\n upd_r AS (UPDATE r SET v = v + 1000 WHERE i = 2 RETURNING 1),\n dlt_s AS (DELETE FROM s WHERE i = 3 RETURNING 1)\nSELECT NULL;\n ?column? \n----------\n \n(1 row)\n\n5.1. Results\n- Comparison with the normal view\n\nSELECT * FROM mv ORDER BY v1;\n v1 | v2 \n------+-----\n 10 | 100\n 11 | 100\n 1020 | 200\n 1020 | 222\n(4 rows)\n\nSELECT * FROM v ORDER BY v1;\n v1 | v2 \n------+-----\n 10 | 100\n 11 | 100\n 1020 | 200\n 1020 | 222\n(4 rows)\n\n========\n\nBest Regards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Mon, 30 Sep 2019 22:34:14 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
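The algebraic identity in the message above (V_new = V + dR*S_new*T_new + R*dS*T_new + R*S*dT) can be checked mechanically. The following Python sketch is illustrative only — multisets are modeled as lists of (key, value) pairs and the join is a plain 3-way equijoin — but it confirms that the view recomputed from scratch equals the old view plus the three per-table deltas:

```python
def join3(r, s, t):
    """Multiset 3-way equijoin on the first attribute (the join key)."""
    return sorted((rv, sv, tv)
                  for rk, rv in r for sk, sv in s for tk, tv in t
                  if rk == sk == tk)

R, dR = [(1, 10)], [(2, 20)]
S, dS = [(1, 100)], [(2, 200)]
T, dT = [(1, 1000)], [(2, 2000)]
R_new, S_new, T_new = R + dR, S + dS, T + dT

# V_new computed from scratch ...
full = join3(R_new, S_new, T_new)

# ... equals V plus the three deltas from the derivation above.
incremental = sorted(join3(R, S, T)
                     + join3(dR, S_new, T_new)   # dR * S_new * T_new
                     + join3(R, dS, T_new)       # R  * dS * T_new
                     + join3(R, S, dT))          # R  * S  * dT
print(full == incremental)  # → True
```

Note how each successive delta term uses the post-update state of the tables to its right and the pre-update state of the tables to its left, so no joined tuple is counted twice.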
{
"msg_contents": "Attached is the latest patch to add support for Incremental\nMaterialized View Maintenance (IVM). IVM allows to reflect\nmodifications made on base tables immediately to the target\nmaterialized views.\n\nUp to now, IVM supports materialized views using:\n\n- Inner joins\n- Some aggregate functions (count, sum, min, max, avg)\n- GROUP BY\n- Self joins\n\nWith the latest patch now IVM supports subqueries in addition to\nabove.\n\nKnown limitations are listed here:\n\nhttps://github.com/sraoss/pgsql-ivm/issues\n\nSee more details at:\nhttps://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\nAbout subquery support:\n\nThe patch supports simple subqueries using EXISTS:\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_exists_subquery AS SELECT\na.i, a.j FROM mv_base_a a WHERE EXISTS(SELECT 1 FROM mv_base_b b WHERE\na.i = b.i);\n\nand subqueries in the FROM clause:\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_subquery AS SELECT a.i,a.j\nFROM mv_base_a a,( SELECT * FROM mv_base_b) b WHERE a.i = b.i;\n\nOther form of subqueries such as below are not supported:\n\n-- WHERE IN .. 
(subquery) is not supported\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm03 AS SELECT i,j FROM\nmv_base_a WHERE i IN (SELECT i FROM mv_base_b WHERE k < 103 );\n\n-- subqueries in the target list are not supported\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm05 AS SELECT i,j, (SELECT k\nFROM mv_base_b b WHERE a.i = b.i) FROM mv_base_a a;\n\n-- nested EXISTS subqueries are not supported\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm11 AS SELECT a.i,a.j FROM\nmv_base_a a WHERE EXISTS(SELECT 1 FROM mv_base_b b WHERE EXISTS(SELECT\n1 FROM mv_base_b c WHERE b.i = c.i));\n\n-- EXISTS subquery with an aggregate function is not supported\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_exists AS SELECT COUNT(*)\nFROM mv_base_a a WHERE EXISTS(SELECT 1 FROM mv_base_b b WHERE a.i =\nb.i) OR a.i > 5;\n\n-- EXISTS subquery with conditions other than AND is not supported.\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm10 AS SELECT a.i,a.j FROM\nmv_base_a a WHERE EXISTS(SELECT 1 FROM mv_base_b b WHERE a.i = b.i) OR\na.i > 5;\n\nThis work has been done by Yugo Nagata (nagata@sraoss.co.jp) and Takuma\nHoshiai (hoshiai@sraoss.co.jp). Adding support for the EXISTS clause was\ndone by Takuma.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 22 Nov 2019 15:29:45 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the latest patch (v8) to add support for Incremental View\nMaintenance (IVM). This patch adds OUTER join support in addition\nto the patch (v7) submitted last week in the following post.\n\nOn Fri, 22 Nov 2019 15:29:45 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> Up to now, IVM supports materialized views using:\n> \n> - Inner joins\n> - Some aggregate functions (count, sum, min, max, avg)\n> - GROUP BY\n> - Self joins\n> \n> With the latest patch now IVM supports subqueries in addition to\n> above.\n> \n> Known limitations are listed here:\n> \n> https://github.com/sraoss/pgsql-ivm/issues\n> \n> See more details at:\n> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\n* About outer join support:\n\nIn case of outer-join, when a table is modified, in addition to deltas\nwhich occur in inner-join case, we also need to deletion or insertion of\ndangling tuples, that is, null-extended tuples generated when a join\ncondition isn't met.\n\n[Example]\n---------------------------------------------\n-- Create base tables and an outer join view\nCREATE TABLE r(i int);\nCREATE TABLE s(j int);\nINSERT INTO r VALUES (1);\nCREATE INCREMENTAL MATERIALIZED VIEW mv \n AS SELECT * FROM r LEFT JOIN s ON r.i=s.j;\nSELECT * FROM mv;\n i | j \n---+---\n(1 row)\n\n-- After an insertion to a base table ...\nINSERT INTO s VALUES (1);\n -- (1,1) is inserted and (1,null) is deleted from the view.\nSELECT * FROM mv;\n i | j \n---+---\n 1 | 1\n(1 row)\n---------------------------------------------\n\nOur implementation is basically based on the algorithm of Larson & Zhou\n(2007) [1]. Before view maintenances, the view definition query's jointree\nis analysed to make \"view maintenance graph\". 
This graph represents\nwhich tuples in the views are affected when a base table is modified.\nSpecifically, tuples which are not null-extended on the modified table\n(that is, tuples generated by joins with the modified table) are directly\naffected. The delta of such effects is calculated similarly to inner joins.\n\nOn the other hand, dangling tuples generated by anti-joins with directly\naffected tuples can be indirectly affected. This means that we may need to\ndelete dangling tuples when any tuples are inserted into a table, as well as\nto insert dangling tuples when tuples are deleted from a table.\n\n[1] Efficient Maintenance of Materialized Outer-Join Views (Larson & Zhou, 2007)\nhttps://ieeexplore.ieee.org/document/4221654\n\nAlthough the original paper assumes that every base table and view have a\nunique key and tuple duplicates are disallowed, we allow them. If a view has\ntuple duplicates, we have to determine the number of each dangling tuple to\nbe inserted into the view when tuples in a table are deleted. For this purpose,\nwe count the number of each tuple which constitutes a deleted tuple. These\ncounts are stored as a JSONB object in the delta table, and we use this\ninformation to maintain outer-join views. Also, we support outer self-joins,\nwhich are not assumed in the original paper.\n\n* Restrictions\n\nCurrently, we have the following restrictions:\n\n- outer join view's targetlist must contain attributes used in join conditions\n- outer join view's targetlist cannot contain non-strict functions\n- outer join supports only simple equijoin\n- outer join view's WHERE clause cannot contain non null-rejecting predicates\n- aggregate is not supported with outer join\n- subquery (including EXISTS) is not supported with outer join\n\n\nRegression tests for all patterns of 3-way outer joins are added. \n\nMoreover, I reordered IVM related functions in matview.c so that related ones\nare located close together. 
Also, I added more\ncomments.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Tue, 26 Nov 2019 16:02:25 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
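The dangling-tuple behavior described in the message above — an insert into s both adds the newly joined tuples and deletes the null-extended tuples they now match — can be modeled with a small sketch. This is an illustrative Python model over single-column relations, not the patch's implementation; NULL handling, duplicate counting, and the maintenance-graph analysis are all simplified away:

```python
NULL = None

def left_join(r, s):
    """LEFT JOIN r ON r.i = s.j over single-column relations."""
    out = []
    for i in r:
        matches = [(i, j) for j in s if i == j]
        out.extend(matches if matches else [(i, NULL)])  # keep dangling tuple
    return out

r, s = [1], []
view = left_join(r, s)
assert view == [(1, NULL)]        # the dangling tuple (1, null)

# INSERT INTO s VALUES (1): add the joined delta (the direct effect), and
# delete the dangling tuples that are now matched (the indirect effect).
ds = [1]
delta_ins = [(i, j) for i in r for j in ds if i == j]
delta_del = [(i, NULL) for i in r if any(i == j for j in ds)]
view = [t for t in view if t not in delta_del] + delta_ins
print(view == left_join(r, s + ds))  # → True: view is now [(1, 1)]
```

The indirect effect is exactly why the patch needs the view maintenance graph: an insertion into one table can force deletions elsewhere in the view, something that never happens with inner joins.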
{
"msg_contents": "Note that this is the last patch in the series of IVM patches: now we\nwould like focus on blushing up the patches, rather than adding new\nSQL support to IVM, so that the patch is merged into PostgreSQL 13\n(hopefully). We are very welcome reviews, comments on the patch.\n\nBTW, the SGML docs in the patch is very poor at this point. I am going\nto add more descriptions to the doc.\n\n> Hi,\n> \n> Attached is the latest patch (v8) to add support for Incremental View\n> Maintenance (IVM). This patch adds OUTER join support in addition\n> to the patch (v7) submitted last week in the following post.\n> \n> On Fri, 22 Nov 2019 15:29:45 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> Up to now, IVM supports materialized views using:\n>> \n>> - Inner joins\n>> - Some aggregate functions (count, sum, min, max, avg)\n>> - GROUP BY\n>> - Self joins\n>> \n>> With the latest patch now IVM supports subqueries in addition to\n>> above.\n>> \n>> Known limitations are listed here:\n>> \n>> https://github.com/sraoss/pgsql-ivm/issues\n>> \n>> See more details at:\n>> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n> \n> * About outer join support:\n> \n> In case of outer-join, when a table is modified, in addition to deltas\n> which occur in inner-join case, we also need to deletion or insertion of\n> dangling tuples, that is, null-extended tuples generated when a join\n> condition isn't met.\n> \n> [Example]\n> ---------------------------------------------\n> -- Create base tables and an outer join view\n> CREATE TABLE r(i int);\n> CREATE TABLE s(j int);\n> INSERT INTO r VALUES (1);\n> CREATE INCREMENTAL MATERIALIZED VIEW mv \n> AS SELECT * FROM r LEFT JOIN s ON r.i=s.j;\n> SELECT * FROM mv;\n> i | j \n> ---+---\n> (1 row)\n> \n> -- After an insertion to a base table ...\n> INSERT INTO s VALUES (1);\n> -- (1,1) is inserted and (1,null) is deleted from the view.\n> SELECT * FROM mv;\n> i | j \n> ---+---\n> 1 | 1\n> (1 row)\n> 
---------------------------------------------\n> \n> Our implementation is basically based on the algorithm of Larson & Zhou\n> (2007) [1]. Before view maintenances, the view definition query's jointree\n> is analysed to make \"view maintenance graph\". This graph represents\n> which tuples in the views are affected when a base table is modified.\n> Specifically, tuples which are not null-extended on the modified table\n> (that is, tuples generated by joins with the modiifed table) are directly\n> affected. The delta of such effects are calculated similarly to inner-joins.\n> \n> On the other hand, dangling tuples generated by anti-joins with directly\n> affected tuples can be indirectly affected. This means that we may need to\n> delete dangling tuples when any tuples are inserted to a table, as well as\n> to insert dangling tuples when tuples are deleted from a table.\n> \n> [1] Efficient Maintenance of Materialized Outer-Join Views (Larson & Zhou, 2007)\n> https://ieeexplore.ieee.org/document/4221654\n> \n> Although the original paper assumes that every base table and view have a\n> unique key and tuple duplicates is disallowed, we allow this. If a view has\n> tuple duplicates, we have to determine the number of each dangling tuple to\n> be inserted into the view when tuples in a table are deleted. For this purpose,\n> we count the number of each tuples which constitute a deleted tuple. These\n> counts are stored as JSONB object in the delta table, and we use this\n> information to maintain outer-join views. 
Also, we support outer self-joins\n> that is not assumed in the original paper.\n> \n> * Restrictions\n> \n> Currently, we have following restrictions:\n> \n> - outer join view's targetlist must contain attributes used in join conditions\n> - outer join view's targetlist cannot contain non-strict functions\n> - outer join supports only simple equijoin\n> - outer join view's WHERE clause cannot contain non null-rejecting predicates\n> - aggregate is not supported with outer join\n> - subquery (including EXSITS) is not supported with outer join\n> \n> \n> Regression tests for all patterns of 3-way outer join and are added. \n> \n> Moreover, I reordered IVM related functions in matview.c so that ones\n> which have relationship will be located closely. Moreover, I added more\n> comments.\n> \n> Regards,\n> Yugo Nagata\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 26 Nov 2019 16:14:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Note that this is the last patch in the series of IVM patches: now we\n> would like focus on blushing up the patches, rather than adding new\n> SQL support to IVM, so that the patch is merged into PostgreSQL 13\n> (hopefully). We are very welcome reviews, comments on the patch.\n> \n> BTW, the SGML docs in the patch is very poor at this point. I am going\n> to add more descriptions to the doc.\n\nAs promised, I have created the doc (CREATE MATERIALIZED VIEW manual)\npatch.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Thu, 28 Nov 2019 11:26:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 28 Nov 2019 11:26:40 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Note that this is the last patch in the series of IVM patches: now we\n> > would like focus on blushing up the patches, rather than adding new\n> > SQL support to IVM, so that the patch is merged into PostgreSQL 13\n> > (hopefully). We are very welcome reviews, comments on the patch.\n> > \n> > BTW, the SGML docs in the patch is very poor at this point. I am going\n> > to add more descriptions to the doc.\n> \n> As promised, I have created the doc (CREATE MATERIALIZED VIEW manual)\n> patch.\n\n- because the triggers will be invoked.\n+ because the triggers will be invoked. We call this form of materialized\n+ view as \"Incremantal materialized View Maintenance\" (IVM).\n\nThis part seems incorrect to me. Incremental (materialized) View\nMaintenance (IVM) is a way to maintain materialized views and is not a\nword to refer views to be maintained.\n\nHowever, it would be useful if there is a term referring views which\ncan be maintained using IVM. Off the top of my head, we can call this\nIncrementally Maintainable Views (= IMVs), but this might cofusable with\nIVM, so I'll think about that a little more....\n\nRegards,\nYugo Nagata\n\n> \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 28 Nov 2019 17:10:52 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "One thing pending in this development line is how to catalogue aggregate\nfunctions that can be used in incrementally-maintainable views.\nI saw a brief mention somewhere that the devels knew it needed to be\ndone, but I don't see in the thread that they got around to doing it.\nDid you guys have any thoughts on how it can be represented in catalogs?\nIt seems sine-qua-non ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Nov 2019 11:03:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> As promised, I have created the doc (CREATE MATERIALIZED VIEW manual)\n>> patch.\n> \n> - because the triggers will be invoked.\n> + because the triggers will be invoked. We call this form of materialized\n> + view as \"Incremantal materialized View Maintenance\" (IVM).\n> \n> This part seems incorrect to me. Incremental (materialized) View\n> Maintenance (IVM) is a way to maintain materialized views and is not a\n> word to refer views to be maintained.\n> \n> However, it would be useful if there is a term referring views which\n> can be maintained using IVM. Off the top of my head, we can call this\n> Incrementally Maintainable Views (= IMVs), but this might cofusable with\n> IVM, so I'll think about that a little more....\n\nBut if we introduce IMV, IVM would be used in much less places in the\ndoc and source code, so less confusion would happen, I guess.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 29 Nov 2019 07:19:44 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Hi,\r\n> \r\n> Attached is the latest patch (v8) to add support for Incremental View\r\n> Maintenance (IVM). This patch adds OUTER join support in addition\r\n> to the patch (v7) submitted last week in the following post.\r\n\r\nThere's a compiler warning:\r\n\r\nmatview.c: In function ‘getRteListCell’:\r\nmatview.c:2685:9: warning: ‘rte_lc’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n return rte_lc;\r\n ^~~~~~\r\n\r\nBest regards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS, Inc. Japan\r\nEnglish: http://www.sraoss.co.jp/index_en.php\r\nJapanese:http://www.sraoss.co.jp\r\n",
"msg_date": "Fri, 29 Nov 2019 09:50:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 29 Nov 2019 09:50:49 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Hi,\n> > \n> > Attached is the latest patch (v8) to add support for Incremental View\n> > Maintenance (IVM). This patch adds OUTER join support in addition\n> > to the patch (v7) submitted last week in the following post.\n> \n> There's a compiler warning:\n> \n> matview.c: In function ‘getRteListCell’:\n> matview.c:2685:9: warning: ‘rte_lc’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> return rte_lc;\n> ^~~~~~\n\nThanks! I'll fix this.\n\n> \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 09:56:15 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 29 Nov 2019 07:19:44 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> As promised, I have created the doc (CREATE MATERIALIZED VIEW manual)\n> >> patch.\n> > \n> > - because the triggers will be invoked.\n> > + because the triggers will be invoked. We call this form of materialized\n> > + view as \"Incremantal materialized View Maintenance\" (IVM).\n> > \n> > This part seems incorrect to me. Incremental (materialized) View\n> > Maintenance (IVM) is a way to maintain materialized views and is not a\n> > word to refer views to be maintained.\n> > \n> > However, it would be useful if there is a term referring views which\n> > can be maintained using IVM. Off the top of my head, we can call this\n> > Incrementally Maintainable Views (= IMVs), but this might cofusable with\n> > IVM, so I'll think about that a little more....\n> \n> But if we introduce IMV, IVM would be used in much less places in the\n> doc and source code, so less confusion would happen, I guess.\n\nMake senses. However, we came to think that \"Incrementally Maintainable\nMaterialized Views\" (IMMs) would be good. So, how about using this for now?\nWhen other better opinions are raised, let's discuss again\n\nRegards,\nYugo Nagata\n\n> \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 13:00:53 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> But if we introduce IMV, IVM would be used in much less places in the\n>> doc and source code, so less confusion would happen, I guess.\n> \n> Make senses. However, we came to think that \"Incrementally Maintainable\n> Materialized Views\" (IMMs) would be good. So, how about using this for now?\n> When other better opinions are raised, let's discuss again\n\nSounds good to me.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 29 Nov 2019 13:46:05 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\n\nThanks a lot for working on this. It's a great (and big!) feature and\nI can see that a lot of work has been put into writing this patch. I\nstarted looking at the patch (v8), but as it's quite big:\n\n 34 files changed, 5444 insertions(+), 69 deletions(-)\n\nI'm having a bit of trouble reading through, which I suspect others\nmay be too. Perhaps, it can be easier for you, as authors, to know\neverything that's being changed (added, removed, existing code\nrewritten), but certainly not for a reviewer, so I think it would be a\ngood idea to try to think dividing this into parts. I still don't\nhave my head wrapped around the topic of materialized view\nmaintenance, but roughly it looks to me like there are really *two*\nfeatures that are being added:\n\n1. Add a new method to refresh an MV incrementally; IIUC, there's\nalready one method that's used by REFRESH MATERIALIZED VIEW\nCONCURRENTLY, correct?\n\n2. Make the refresh automatic (using triggers on the component tables)\n\nMaybe, there are even:\n\n0. Infrastructure additions\n\nAs you can tell, having the patch broken down like this would allow us\nto focus on the finer aspects of each of the problem being solved and\nsolution being adopted, for example:\n\n* It would be easier for someone having an expert opinion on how to\nimplement incremental refresh to have to only look at the patch for\n(1). If the new method handles more query types than currently, which\nobviously means more code is needed, which in turn entails possibility\nof bugs, despite the best efforts. It would be better to get more\neyeballs at this portion of the patch and having it isolated seems\nlike a good way to attract more eyeballs.\n\n* Someone well versed in trigger infrastructure can help fine tune the\npatch for (2)\n\nand so on.\n\nSo, please consider giving some thought to this.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 29 Nov 2019 15:34:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "The following review on our patch was posted on another thread,\nso I quote here. The tab completion is Hoshiai-san's work, so\nhe will handle this issue.\n\nRegards,\nYugo Nagata.\n\nOn Thu, 28 Nov 2019 13:00:05 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Hi.\n> \n> I'm using the \"Incremental Materialized View Maintenance\" patch and have\n> reported the following issues.\n> (https://commitfest.postgresql.org/25/2138/)\n> \n> To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but the syntax\n> error when you run.\n> (\"DROP MATERIALIZED VIEW\" command can drop Incremental Materialozed view\n> normally.)\n> \n> \n> ramendb=# CREATE INCREMENTAL MATERIALIZED VIEW pref_count AS SELECT pref,\n> COUNT(pref) FROM shops GROUP BY pref;\n> SELECT 48\n> ramendb=# \\d pref_count\n> Materialized view \"public.pref_count\"\n> Column | Type | Collation | Nullable | Default\n> ---------------+--------+-----------+----------+---------\n> pref | text | | |\n> count | bigint | | |\n> __ivm_count__ | bigint | | |\n> \n> ramendb=# DROP IN\n> INCREMENTAL MATERIALIZED VIEW INDEX\n> ramendb=# DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n> 2019-11-27 11:51:03.916 UTC [9759] ERROR: syntax error at or near\n> \"INCREMENTAL\" at character 6\n> 2019-11-27 11:51:03.916 UTC [9759] STATEMENT: DROP INCREMENTAL\n> MATERIALIZED VIEW pref_count;\n> ERROR: syntax error at or near \"INCREMENTAL\"\n> LINE 1: DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n> ^\n> ramendb=# DROP MATERIALIZED VIEW pref_count ;\n> DROP MATERIALIZED VIEW\n> ramendb=#\n> \n> \n> Regard.\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 15:45:13 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 29 Nov 2019 15:45:13 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> The following review on our patch was posted on another thread,\n> so I quote here. The tab completion is Hoshiai-san's work, so\n> he will handle this issue.\n> \n> Regards,\n> Yugo Nagata.\n> \n> On Thu, 28 Nov 2019 13:00:05 +0900\n> nuko yokohama <nuko.yokohama@gmail.com> wrote:\n> \n> > Hi.\n> > \n> > I'm using the \"Incremental Materialized View Maintenance\" patch and have\n> > reported the following issues.\n> > (https://commitfest.postgresql.org/25/2138/)\n> > \n> > To Suggest a \"DROP INCREMENTAL MATERIALIZED VIEW\" in psql, but the syntax\n> > error when you run.\n> > (\"DROP MATERIALIZED VIEW\" command can drop Incremental Materialozed view\n> > normally.)\n\nThank you for your review. This psql's suggestion is mistake, \n\"INCREMENTAL MATERIALIZED\" phrase is only used for CREATE statement.\n\nI will fix it as the following:\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex 2051bc3..8c4b211 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -1001,7 +1001,7 @@ static const pgsql_thing_t words_after_create[] = {\n \t{\"FOREIGN TABLE\", NULL, NULL, NULL},\n \t{\"FUNCTION\", NULL, NULL, Query_for_list_of_functions},\n \t{\"GROUP\", Query_for_list_of_roles},\n-\t{\"INCREMENTAL MATERIALIZED VIEW\", NULL, NULL, &Query_for_list_of_matviews},\n+\t{\"INCREMENTAL MATERIALIZED VIEW\", NULL, NULL, &Query_for_list_of_matviews, THING_NO_DROP | THING_NO_ALTER},\n \t{\"INDEX\", NULL, NULL, &Query_for_list_of_indexes},\n \t{\"LANGUAGE\", Query_for_list_of_languages},\n \t{\"LARGE OBJECT\", NULL, NULL, NULL, THING_NO_CREATE | THING_NO_DROP},\n\n\nBest Regards,\nTakuma Hoshiai\n\n> > ramendb=# CREATE INCREMENTAL MATERIALIZED VIEW pref_count AS SELECT pref,\n> > COUNT(pref) FROM shops GROUP BY pref;\n> > SELECT 48\n> > ramendb=# \\d pref_count\n> > Materialized view \"public.pref_count\"\n> > Column | Type | 
Collation | Nullable | Default\n> > ---------------+--------+-----------+----------+---------\n> > pref | text | | |\n> > count | bigint | | |\n> > __ivm_count__ | bigint | | |\n> > \n> > ramendb=# DROP IN\n> > INCREMENTAL MATERIALIZED VIEW INDEX\n> > ramendb=# DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n> > 2019-11-27 11:51:03.916 UTC [9759] ERROR: syntax error at or near\n> > \"INCREMENTAL\" at character 6\n> > 2019-11-27 11:51:03.916 UTC [9759] STATEMENT: DROP INCREMENTAL\n> > MATERIALIZED VIEW pref_count;\n> > ERROR: syntax error at or near \"INCREMENTAL\"\n> > LINE 1: DROP INCREMENTAL MATERIALIZED VIEW pref_count;\n> > ^\n> > ramendb=# DROP MATERIALIZED VIEW pref_count ;\n> > DROP MATERIALIZED VIEW\n> > ramendb=#\n> > \n> > \n> > Regard.\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n",
"msg_date": "Fri, 29 Nov 2019 16:10:40 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 28 Nov 2019 11:03:33 -0300\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> One thing pending in this development line is how to catalogue aggregate\n> functions that can be used in incrementally-maintainable views.\n> I saw a brief mention somewhere that the devels knew it needed to be\n> done, but I don't see in the thread that they got around to doing it.\n> Did you guys have any thoughts on how it can be represented in catalogs?\n> It seems sine-qua-non ...\n\nYes, this is a pending issue. Currently, supported aggregate functions are\nidentified by their name, that is, we support aggregate functions named \"count\",\n\"sum\", \"avg\", \"min\", or \"max\". As mentioned before, this is not robust\nbecause there might be user-defined aggregates with these names, although all\nbuilt-in aggregates can be used in IVM.\n\nIn our implementation, the new aggregate values are calculated using \"+\" and\n\"-\" operations for sum and count, \"/\" for avg, and \">=\" / \"<=\" for min/max. \nTherefore, if there is a user-defined aggregate on a user-defined type which\ndoesn't support these operators, errors will be raised. Obviously, this is a\nproblem. Even if these operators are defined, the semantics of user-defined\naggregate functions might not match the way of maintaining views, and\nthe result might be incorrect.\n\nI think there are at least three options to prevent these problems.\n\nIn the first option, we support only built-in aggregates which we know can be\nhandled correctly. Supported aggregates can be identified using their OIDs.\nUser-defined aggregates are not supported. I think this is the simplest and\neasiest way.\n\nSecond, supported aggregates can be identified by name, like the current\nimplementation, but it is also checked whether the required operators are defined. In\nthis case, user-defined aggregates are allowed to some extent and we can\nprevent errors during IVM, although aggregate values in the view might be\nincorrect if the semantics don't match. \n\nThird, we can add a new attribute to pg_aggregate which shows whether each\naggregate can be used in IVM. We don't need lists of names or OIDs of\nsupported aggregates, although we need to modify the system catalogue.\n\nRegarding pg_aggregate, we now have the aggcombinefn attribute for supporting\npartial aggregation. Maybe we could use combine functions to calculate new\naggregate values in IVM when tuples are inserted into a table. However, in\nthe context of IVM, we also need another function used when tuples are deleted\nfrom a table, so we cannot use partial aggregation for IVM in the current\nimplementation. It might be another option to implement an \"inverse combine\nfunction\"(?) for IVM, but I am not sure it is worth it.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 17:33:28 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 29 Nov 2019 15:34:52 +0900\nAmit Langote <amitlangote09@gmail.com> wrote:\n\n> Thanks a lot for working on this. It's a great (and big!) feature and\n> I can see that a lot of work has been put into writing this patch. I\n> started looking at the patch (v8), but as it's quite big:\n> \n> 34 files changed, 5444 insertions(+), 69 deletions(-)\n\nThank you for your reviewing the patch! Yes, this is a big patch\nathough \n\n> I'm having a bit of trouble reading through, which I suspect others\n> may be too. Perhaps, it can be easier for you, as authors, to know\n> everything that's being changed (added, removed, existing code\n> rewritten), but certainly not for a reviewer, so I think it would be a\n> good idea to try to think dividing this into parts. I still don't\n\nI agree with you. We also see the need to split the patch and we are\nconsidering how to do it.\n \n> have my head wrapped around the topic of materialized view\n> maintenance, but roughly it looks to me like there are really *two*\n> features that are being added:\n> \n> 1. Add a new method to refresh an MV incrementally; IIUC, there's\n> already one method that's used by REFRESH MATERIALIZED VIEW\n> CONCURRENTLY, correct?\n\nNo, REFRESH MATERIALIZED VIEW CONCURRENTLY is not an incremental refresh\nmethod. It just acquires weaker locks on the view so as not to block\nSELECT, and it still computes the content of the view completely\nfrom scratch. There is no method to incrementally refresh materialized\nviews in current PostgreSQL.\n\nAlso, we didn't implement incremental refresh for the REFRESH command in\nthis patch; it supports only automatic refresh using triggers.\nHowever, we used the code for REFRESH in our IVM implementation, so\nI think splitting the patch according to this point of view can make\nsense.\n\n> 2. Make the refresh automatic (using triggers on the component tables)\n> \n> Maybe, there are even:\n> \n> 0. Infrastructure additions\n\nYes, we have some modifications to the infrastructure, for example,\ntrigger.c.\n\n> As you can tell, having the patch broken down like this would allow us\n> to focus on the finer aspects of each of the problem being solved and\n> solution being adopted, for example:\n> \n> * It would be easier for someone having an expert opinion on how to\n> implement incremental refresh to have to only look at the patch for\n> (1). If the new method handles more query types than currently, which\n> obviously means more code is needed, which in turn entails possibility\n> of bugs, despite the best efforts. It would be better to get more\n> eyeballs at this portion of the patch and having it isolated seems\n> like a good way to attract more eyeballs.\n> \n> * Someone well versed in trigger infrastructure can help fine tune the\n> patch for (2)\n> \n> and so on.\n> \n> So, please consider giving some thought to this.\n\nAgreed. Although I am not sure we will do it exactly in the above way, we\nwill consider splitting the patch anyway. Thanks. \n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 18:16:00 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 29 Nov 2019 18:16:00 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> On Fri, 29 Nov 2019 15:34:52 +0900\n> Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> > Thanks a lot for working on this. It's a great (and big!) feature and\n> > I can see that a lot of work has been put into writing this patch. I\n> > started looking at the patch (v8), but as it's quite big:\n> > \n> > 34 files changed, 5444 insertions(+), 69 deletions(-)\n> \n> Thank you for your reviewing the patch! Yes, this is a big patch\n> athough \n\nSorry, an unfinished line was left... Please ignore this.\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 29 Nov 2019 18:19:54 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 06:19:54PM +0900, Yugo Nagata wrote:\n> Sorry, an unfinished line was left... Please ignore this.\n\nA rebase looks to be necessary, Mr Robot complains that the patch does\nnot apply cleanly. As the thread is active recently, I have moved the\npatch to next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:55:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Michael,\n\n> A rebase looks to be necessary, Mr Robot complains that the patch does\n> not apply cleanly. As the thread is active recently, I have moved the\n> patch to next CF, waiting on author.\n\nThank you for taking care of this patch. Hoshiai-san, can you please\nrebase the patch?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:01:18 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> One thing pending in this development line is how to catalogue aggregate\n>> functions that can be used in incrementally-maintainable views.\n>> I saw a brief mention somewhere that the devels knew it needed to be\n>> done, but I don't see in the thread that they got around to doing it.\n>> Did you guys have any thoughts on how it can be represented in catalogs?\n>> It seems sine-qua-non ...\n> \n> Yes, this is a pending issue. Currently, supported aggregate functions are\n> identified their name, that is, we support aggregate functions named \"count\",\n> \"sum\", \"avg\", \"min\", or \"max\". As mentioned before, this is not robust\n> because there might be user-defined aggregates with these names although all\n> built-in aggregates can be used in IVM.\n> \n> In our implementation, the new aggregate values are calculated using \"+\" and\n> \"-\" operations for sum and count, \"/\" for agv, and \">=\" / \"<=\" for min/max. \n> Therefore, if there is a user-defined aggregate on a user-defined type which\n> doesn't support these operators, errors will raise. Obviously, this is a\n> problem. Even if these operators are defined, the semantics of user-defined\n> aggregate functions might not match with the way of maintaining views, and\n> resultant might be incorrect.\n> \n> I think there are at least three options to prevent these problems.\n> \n> In the first option, we support only built-in aggregates which we know able\n> to handle correctly. Supported aggregates can be identified using their OIDs.\n> User-defined aggregates are not supported. I think this is the simplest and\n> easiest way.\n\nI think this is enough for the first cut of IVM. So +1.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:36:36 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> On Fri, Nov 29, 2019 at 06:19:54PM +0900, Yugo Nagata wrote:\n>> Sorry, an unfinished line was left... Please ignore this.\n> \n> A rebase looks to be necessary, Mr Robot complains that the patch does\n> not apply cleanly.\n\nIs this because the patch has a \".gz\" suffix?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n\n",
"msg_date": "Mon, 02 Dec 2019 10:57:29 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 02 Dec 2019 10:01:18 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> Michael,\n> \n> > A rebase looks to be necessary, Mr Robot complains that the patch does\n> > not apply cleanly. As the thread is active recently, I have moved the\n> > patch to next CF, waiting on author.\n> \n> Thank you for taking care of this patch. Hoshiai-san, can you please\n> rebase the patch?\n\n\nSure,\nI re-created the patch. It contains 'IVM_8.patch' and 'create_materialized_view.patch', \nand I checked that it applies to the latest master.\n\n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n> \n\nBest Regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_date": "Mon, 2 Dec 2019 11:05:38 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 02 Dec 2019 10:36:36 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> One thing pending in this development line is how to catalogue aggregate\n> >> functions that can be used in incrementally-maintainable views.\n> >> I saw a brief mention somewhere that the devels knew it needed to be\n> >> done, but I don't see in the thread that they got around to doing it.\n> >> Did you guys have any thoughts on how it can be represented in catalogs?\n> >> It seems sine-qua-non ...\n> > \n> > Yes, this is a pending issue. Currently, supported aggregate functions are\n> > identified their name, that is, we support aggregate functions named \"count\",\n> > \"sum\", \"avg\", \"min\", or \"max\". As mentioned before, this is not robust\n> > because there might be user-defined aggregates with these names although all\n> > built-in aggregates can be used in IVM.\n> > \n> > In our implementation, the new aggregate values are calculated using \"+\" and\n> > \"-\" operations for sum and count, \"/\" for agv, and \">=\" / \"<=\" for min/max. \n> > Therefore, if there is a user-defined aggregate on a user-defined type which\n> > doesn't support these operators, errors will raise. Obviously, this is a\n> > problem. Even if these operators are defined, the semantics of user-defined\n> > aggregate functions might not match with the way of maintaining views, and\n> > resultant might be incorrect.\n> > \n> > I think there are at least three options to prevent these problems.\n> > \n> > In the first option, we support only built-in aggregates which we know able\n> > to handle correctly. Supported aggregates can be identified using their OIDs.\n> > User-defined aggregates are not supported. I think this is the simplest and\n> > easiest way.\n> \n> I think this is enough for the first cut of IVM. So +1.\n\nIf there is no objection, I will add the check of aggregate functions\nin this way. Thanks.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 2 Dec 2019 15:42:08 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On 2019-Dec-02, Yugo Nagata wrote:\n\n> On Mon, 02 Dec 2019 10:36:36 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n> > >> One thing pending in this development line is how to catalogue aggregate\n> > >> functions that can be used in incrementally-maintainable views.\n> > >> I saw a brief mention somewhere that the devels knew it needed to be\n> > >> done, but I don't see in the thread that they got around to doing it.\n> > >> Did you guys have any thoughts on how it can be represented in catalogs?\n> > >> It seems sine-qua-non ...\n\n> > > In the first option, we support only built-in aggregates which we know able\n> > > to handle correctly. Supported aggregates can be identified using their OIDs.\n> > > User-defined aggregates are not supported. I think this is the simplest and\n> > > easiest way.\n> > \n> > I think this is enough for the first cut of IVM. So +1.\n> \n> If there is no objection, I will add the check of aggregate functions\n> by this way. Thanks.\n\nThe way I imagine things is that there's (one or more) new column in\npg_aggregate that links to the operator(s) (or function(s)?) that\nsupport incremental update of the MV for that aggregate function. Is\nthat what you're proposing?\n\nAll that query-construction business in apply_delta() looks quite\nsuspicious.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Dec 2019 13:48:40 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 2 Dec 2019 13:48:40 -0300\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Dec-02, Yugo Nagata wrote:\n> \n> > On Mon, 02 Dec 2019 10:36:36 +0900 (JST)\n> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > \n> > > >> One thing pending in this development line is how to catalogue aggregate\n> > > >> functions that can be used in incrementally-maintainable views.\n> > > >> I saw a brief mention somewhere that the devels knew it needed to be\n> > > >> done, but I don't see in the thread that they got around to doing it.\n> > > >> Did you guys have any thoughts on how it can be represented in catalogs?\n> > > >> It seems sine-qua-non ...\n> \n> > > > In the first option, we support only built-in aggregates which we know able\n> > > > to handle correctly. Supported aggregates can be identified using their OIDs.\n> > > > User-defined aggregates are not supported. I think this is the simplest and\n> > > > easiest way.\n> > > \n> > > I think this is enough for the first cut of IVM. So +1.\n> > \n> > If there is no objection, I will add the check of aggregate functions\n> > by this way. Thanks.\n> \n> The way I imagine things is that there's (one or more) new column in\n> pg_aggregate that links to the operator(s) (or function(s)?) that\n> support incremental update of the MV for that aggregate function. Is\n> that what you're proposing?\n\nThe way I am proposing above is using OID to check if an aggregate can be\nused in IVM. This allows only a subset of the built-in aggregate functions.\n\nThe way you mentioned was proposed as one of the options, as follows.\n\nOn Fri, 29 Nov 2019 17:33:28 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n> Third, we can add a new attribute to pg_aggregate which shows if each\n> aggregate can be used in IVM. We don't need to use names or OIDs list of\n> supported aggregates although we need modification of the system catalogue.\n> \n> Regarding pg_aggregate, now we have aggcombinefn attribute for supporting\n> partial aggregation. Maybe we could use combine functions to calculate new\n> aggregate values in IVM when tuples are inserted into a table. However, in\n> the context of IVM, we also need other function used when tuples are deleted\n> from a table, so we can not use partial aggregation for IVM in the current\n> implementation. This might be another option to implement \"inverse combine\n> function\"(?) for IVM, but I am not sure it worth.\n\nIf we add an \"inverse combine function\" in pg_aggregate that takes two results\nof aggregating over tuples in a view and tuples in a delta, and produces a\nresult of aggregating over tuples in the view after tuples in the delta are\ndeleted from it, it would allow us to calculate new aggregate values in IVM\nusing aggcombinefn together when the aggregate function provides both\nfunctions.\n\nAnother idea is to use the support functions for moving-aggregate mode which are\nalready provided in pg_aggregate. However, in this case, we have to apply\ntuples in the delta to the view one by one instead of applying after\naggregating tuples in the delta.\n\nIn both cases, we cannot use these support functions in SQL via SPI because\nthe type of some aggregates is internal. We have to alter the current\napply_delta implementation if we adopt an approach using these support functions.\nInstead, we can also add support functions for IVM independent of partial\naggregation or moving-aggregate mode. Maybe this is also one of the options.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 3 Dec 2019 14:41:22 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi.\n\nI found the problem after running \"ALTER MATERIALIZED VIEW ... RENAME TO\".\nIf a view created with \"CREATE INCREMENT MATERIALIZED VIEW\" is renamed,\nsubsequent INSERT operations to the base table will fail.\n\nError message.\n```\nERROR: could not open relation with OID 0\n```\n\nExecution log.\n```\n[ec2-user@ip-10-0-1-10 ivm]$ psql -U postgres test -e -f\n~/test/ivm/alter_rename_bug.sql\nDROP TABLE IF EXISTS table_x CASCADE;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:1: NOTICE: drop cascades\nto materialized view group_imv\nDROP TABLE\nCREATE TABLE table_x AS\n SELECT generate_series(1, 10000) AS id,\n ROUND(random()::numeric * 100, 2) AS data,\n CASE (random() * 5)::integer\n WHEN 4 THEN 'group-a'\n WHEN 3 THEN 'group-b'\n ELSE 'group-c'\n END AS part_key\n;\nSELECT 10000\n Table \"public.table_x\"\n Column | Type | Collation | Nullable | Default\n----------+---------+-----------+----------+---------\n id | integer | | |\n data | numeric | | |\n part_key | text | | |\n\nDROP MATERIALIZED VIEW IF EXISTS group_imv;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:15: NOTICE: materialized\nview \"group_imv\" does not exist, skipping\nDROP MATERIALIZED VIEW\nCREATE INCREMENTAL MATERIALIZED VIEW group_imv AS\nSELECT part_key, COUNT(*), MAX(data), MIN(data), SUM(data), AVG(data)\nFROM table_x\nGROUP BY part_key;\nSELECT 3\n List of relations\n Schema | Name | Type | Owner\n--------+-----------+-------------------+----------\n public | group_imv | materialized view | postgres\n public | table_x | table | postgres\n(2 rows)\n\n Materialized view \"public.group_imv\"\n Column | Type | Collation | Nullable | Default\n-------------------+---------+-----------+----------+---------\n part_key | text | | |\n count | bigint | | |\n max | numeric | | |\n min | numeric | | |\n sum | numeric | | |\n avg | numeric | | |\n __ivm_count_max__ | bigint | | |\n __ivm_count_min__ | bigint | | |\n __ivm_count_sum__ | bigint | | |\n __ivm_count_avg__ 
| bigint | | |\n __ivm_sum_avg__ | numeric | | |\n __ivm_count__ | bigint | | |\n\nSELECT * FROM group_imv ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n(3 rows)\n\nALTER MATERIALIZED VIEW group_imv RENAME TO group_imv2;\nALTER MATERIALIZED VIEW\n List of relations\n Schema | Name | Type | Owner\n--------+------------+-------------------+----------\n public | group_imv2 | materialized view | postgres\n public | table_x | table | postgres\n(2 rows)\n\n Materialized view \"public.group_imv2\"\n Column | Type | Collation | Nullable | Default\n-------------------+---------+-----------+----------+---------\n part_key | text | | |\n count | bigint | | |\n max | numeric | | |\n min | numeric | | |\n sum | numeric | | |\n avg | numeric | | |\n __ivm_count_max__ | bigint | | |\n __ivm_count_min__ | bigint | | |\n __ivm_count_sum__ | bigint | | |\n __ivm_count_avg__ | bigint | | |\n __ivm_sum_avg__ | numeric | | |\n __ivm_count__ | bigint | | |\n\nSET client_min_messages = debug5;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:30: DEBUG:\n CommitTransaction(1) name: unnamed; blockState: STARTED; state:\nINPROGRESS, xid/subid/cid: 0/1/0\nSET\nINSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2),\n'gruop_d');\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG:\n StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS,\nxid/subid/cid: 0/1/0\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation\n\"public.group_imv\" does not exist\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation\n\"public.group_imv\" does not exist\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: ERROR: could not\nopen relation with OID 
0\nRESET client_min_messages;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:34: DEBUG:\n StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS,\nxid/subid/cid: 0/1/0\nRESET\nSELECT * FROM group_imv2 ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n(3 rows)\n\nALTER MATERIALIZED VIEW group_imv2 RENAME TO group_imv;\nALTER MATERIALIZED VIEW\nINSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2),\n'gruop_d');\nINSERT 0 1\nSELECT * FROM group_imv ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+-------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n gruop_d | 1 | 81.43 | 81.43 | 81.43 | 81.4300000000000000\n(4 rows)\n\n[ec2-user@ip-10-0-1-10 ivm]$\n```\n\nThis may be because IVM internal information is not modified when the view\nname is renamed.\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> 
identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. 
How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. 
In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>\n\nHi. \nI found the problem after running \"ALTER MATERIALIZED VIEW ... 
RENAME TO\".\nIf a view created with \"CREATE INCREMENTAL MATERIALIZED VIEW\" is renamed, subsequent INSERT operations on the base table will fail.\n\nError message:\n```\nERROR: could not open relation with OID 0\n```\n\nExecution log:\n```\n[ec2-user@ip-10-0-1-10 ivm]$ psql -U postgres test -e -f ~/test/ivm/alter_rename_bug.sql\nDROP TABLE IF EXISTS table_x CASCADE;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:1: NOTICE: drop cascades to materialized view group_imv\nDROP TABLE\nCREATE TABLE table_x AS\nSELECT generate_series(1, 10000) AS id,\n ROUND(random()::numeric * 100, 2) AS data,\n CASE (random() * 5)::integer\n WHEN 4 THEN 'group-a'\n WHEN 3 THEN 'group-b'\n ELSE 'group-c'\n END AS part_key;\nSELECT 10000\n Table \"public.table_x\"\n Column | Type | Collation | Nullable | Default\n----------+---------+-----------+----------+---------\n id | integer | | |\n data | numeric | | |\n part_key | text | | |\n\nDROP MATERIALIZED VIEW IF EXISTS group_imv;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:15: NOTICE: materialized view \"group_imv\" does not exist, skipping\nDROP MATERIALIZED VIEW\nCREATE INCREMENTAL MATERIALIZED VIEW group_imv AS\nSELECT part_key, COUNT(*), MAX(data), MIN(data), SUM(data), AVG(data)\nFROM table_x\nGROUP BY part_key;\nSELECT 3\n List of relations\n Schema | Name | Type | Owner\n--------+-----------+-------------------+----------\n public | group_imv | materialized view | postgres\n public | table_x | table | postgres\n(2 rows)\n\n Materialized view \"public.group_imv\"\n Column | Type | Collation | Nullable | Default\n-------------------+---------+-----------+----------+---------\n part_key | text | | |\n count | bigint | | |\n max | numeric | | |\n min | numeric | | |\n sum | numeric | | |\n avg | numeric | | |\n __ivm_count_max__ | bigint | | |\n __ivm_count_min__ | bigint | | |\n __ivm_count_sum__ | bigint | | |\n __ivm_count_avg__ | bigint | | |\n __ivm_sum_avg__ | numeric | | |\n __ivm_count__ | bigint | | |\n\nSELECT * FROM group_imv ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n(3 rows)\n\nALTER MATERIALIZED VIEW group_imv RENAME TO group_imv2;\nALTER MATERIALIZED VIEW\n List of relations\n Schema | Name | Type | Owner\n--------+------------+-------------------+----------\n public | group_imv2 | materialized view | postgres\n public | table_x | table | postgres\n(2 rows)\n\n Materialized view \"public.group_imv2\"\n Column | Type | Collation | Nullable | Default\n-------------------+---------+-----------+----------+---------\n part_key | text | | |\n count | bigint | | |\n max | numeric | | |\n min | numeric | | |\n sum | numeric | | |\n avg | numeric | | |\n __ivm_count_max__ | bigint | | |\n __ivm_count_min__ | bigint | | |\n __ivm_count_sum__ | bigint | | |\n __ivm_count_avg__ | bigint | | |\n __ivm_sum_avg__ | numeric | | |\n __ivm_count__ | bigint | | |\n\nSET client_min_messages = debug5;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:30: DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0\nSET\nINSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2), 'gruop_d');\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation \"public.group_imv\" does not exist\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation \"public.group_imv\" does not exist\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: ERROR: could not open relation with OID 0\nRESET client_min_messages;\npsql:/home/ec2-user/test/ivm/alter_rename_bug.sql:34: DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0\nRESET\nSELECT * FROM group_imv2 ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n(3 rows)\n\nALTER MATERIALIZED VIEW group_imv2 RENAME TO group_imv;\nALTER MATERIALIZED VIEW\nINSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2), 'gruop_d');\nINSERT 0 1\nSELECT * FROM group_imv ORDER BY part_key;\n part_key | count | max | min | sum | avg\n----------+-------+-------+-------+-----------+---------------------\n group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n gruop_d | 1 | 81.43 | 81.43 | 81.43 | 81.4300000000000000\n(4 rows)\n[ec2-user@ip-10-0-1-10 ivm]$\n```\n\nThis may be because the IVM internal information is not updated when the view is renamed.",
"msg_date": "Wed, 4 Dec 2019 21:18:02 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
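The counting algorithm referenced in the proposal above (Kevin Grittner's discussion and the count_t column) can be sketched in a few lines: each view row carries a multiplicity, so DISTINCT views and tuple duplicates are handled correctly. This is an illustration of the idea only, with invented helper names, not the patch's actual code.

```python
# Sketch of the counting algorithm: maintain a view as a multiset whose
# values play the role of the "count_t" column. A row disappears from the
# view only when its last duplicate in the base table is deleted.
from collections import Counter

def apply_delta(view, inserted=(), deleted=()):
    """Apply projected delta rows to a view maintained as a multiset."""
    for row in inserted:
        view[row] += 1                 # bump multiplicity
    for row in deleted:
        view[row] -= 1
        if view[row] <= 0:             # drop only when the last duplicate goes
            del view[row]
    return view

# View: SELECT DISTINCT x FROM base; the base table has two rows with x = 1.
view = Counter({1: 2, 2: 1})
apply_delta(view, deleted=[1])         # one duplicate deleted: 1 stays visible
assert set(view) == {1, 2}
apply_delta(view, deleted=[1])         # last duplicate deleted: 1 disappears
assert set(view) == {2}
```

Without the multiplicity, the first delete would wrongly remove the row 1 from the DISTINCT view even though a duplicate still exists in the base table.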
{
"msg_contents": "2019年12月3日(火) 14:42 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> On Mon, 2 Dec 2019 13:48:40 -0300\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > On 2019-Dec-02, Yugo Nagata wrote:\n> >\n> > > On Mon, 02 Dec 2019 10:36:36 +0900 (JST)\n> > > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > >\n> > > > >> One thing pending in this development line is how to catalogue\n> aggregate\n> > > > >> functions that can be used in incrementally-maintainable views.\n> > > > >> I saw a brief mention somewhere that the devels knew it needed to\n> be\n> > > > >> done, but I don't see in the thread that they got around to doing\n> it.\n> > > > >> Did you guys have any thoughts on how it can be represented in\n> catalogs?\n> > > > >> It seems sine-qua-non ...\n> >\n> > > > > In the first option, we support only built-in aggregates which we\n> know able\n> > > > > to handle correctly. Supported aggregates can be identified using\n> their OIDs.\n> > > > > User-defined aggregates are not supported. I think this is the\n> simplest and\n> > > > > easiest way.\n> > > >\n> > > > I think this is enough for the first cut of IVM. So +1.\n> > >\n> > > If there is no objection, I will add the check of aggregate functions\n> > > by this way. Thanks.\n> >\n> > The way I imagine things is that there's (one or more) new column in\n> > pg_aggregate that links to the operator(s) (or function(s)?) that\n> > support incremental update of the MV for that aggregate function. Is\n> > that what you're proposing?\n>\n> The way I am proposing above is using OID to check if a aggregate can be\n> used in IVM. This allows only a part of built-in aggreagete functions.\n>\n> This way you mentioned was proposed as one of options as following.\n>\n> On Fri, 29 Nov 2019 17:33:28 +0900\n> Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> > Third, we can add a new attribute to pg_aggregate which shows if each\n> > aggregate can be used in IVM. 
We don't need to use names or OIDs list of\n> > supported aggregates although we need modification of the system\n> catalogue.\n> >\n> > Regarding pg_aggregate, now we have aggcombinefn attribute for supporting\n> > partial aggregation. Maybe we could use combine functions to calculate\n> new\n> > aggregate values in IVM when tuples are inserted into a table. However,\n> in\n> > the context of IVM, we also need other function used when tuples are\n> deleted\n> > from a table, so we can not use partial aggregation for IVM in the\n> current\n> > implementation. This might be another option to implement \"inverse\n> combine\n> > function\"(?) for IVM, but I am not sure it worth.\n>\n> If we add \"inverse combine function\" in pg_aggregate that takes two results\n> of aggregating over tuples in a view and tuples in a delta, and produces a\n> result of aggregating over tuples in the view after tuples in the delta are\n> deleted from this, it would allow to calculate new aggregate values in IVM\n> using aggcombinefn together when the aggregate function provides both\n> functions.\n>\n> Another idea is to use support functions for moving-aggregate mode which\n> are\n> already provided in pg_aggregate. However, in this case, we have to apply\n> tuples in the delta to the view one by one instead of applying after\n> aggregating tuples in the delta.\n>\n> In both case, we can not use these support functions in SQL via SPI because\n> the type of some aggregates is internal. We have to alter the current\n> apply_delta implementation if we adopt a way using these support functions.\n> Instead, we also can add support functions for IVM independent to partial\n> aggregate or moving-aggregate. 
Maybe this is also one of options.\n>\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>\n>\n\n2019年12月3日(火) 14:42 Yugo Nagata <nagata@sraoss.co.jp>:On Mon, 2 Dec 2019 13:48:40 -0300\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Dec-02, Yugo Nagata wrote:\n> \n> > On Mon, 02 Dec 2019 10:36:36 +0900 (JST)\n> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > \n> > > >> One thing pending in this development line is how to catalogue aggregate\n> > > >> functions that can be used in incrementally-maintainable views.\n> > > >> I saw a brief mention somewhere that the devels knew it needed to be\n> > > >> done, but I don't see in the thread that they got around to doing it.\n> > > >> Did you guys have any thoughts on how it can be represented in catalogs?\n> > > >> It seems sine-qua-non ...\n> \n> > > > In the first option, we support only built-in aggregates which we know able\n> > > > to handle correctly. Supported aggregates can be identified using their OIDs.\n> > > > User-defined aggregates are not supported. I think this is the simplest and\n> > > > easiest way.\n> > > \n> > > I think this is enough for the first cut of IVM. So +1.\n> > \n> > If there is no objection, I will add the check of aggregate functions\n> > by this way. Thanks.\n> \n> The way I imagine things is that there's (one or more) new column in\n> pg_aggregate that links to the operator(s) (or function(s)?) that\n> support incremental update of the MV for that aggregate function. Is\n> that what you're proposing?\n\nThe way I am proposing above is using OID to check if a aggregate can be\nused in IVM. This allows only a part of built-in aggreagete functions.\n\nThis way you mentioned was proposed as one of options as following.\n\nOn Fri, 29 Nov 2019 17:33:28 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n> Third, we can add a new attribute to pg_aggregate which shows if each\n> aggregate can be used in IVM. 
We don't need to use names or OIDs list of\n> supported aggregates although we need modification of the system catalogue.\n> \n> Regarding pg_aggregate, now we have aggcombinefn attribute for supporting\n> partial aggregation. Maybe we could use combine functions to calculate new\n> aggregate values in IVM when tuples are inserted into a table. However, in\n> the context of IVM, we also need other function used when tuples are deleted\n> from a table, so we can not use partial aggregation for IVM in the current\n> implementation. This might be another option to implement \"inverse combine\n> function\"(?) for IVM, but I am not sure it worth.\n\nIf we add \"inverse combine function\" in pg_aggregate that takes two results\nof aggregating over tuples in a view and tuples in a delta, and produces a\nresult of aggregating over tuples in the view after tuples in the delta are\ndeleted from this, it would allow to calculate new aggregate values in IVM\nusing aggcombinefn together when the aggregate function provides both\nfunctions.\n\nAnother idea is to use support functions for moving-aggregate mode which are\nalready provided in pg_aggregate. However, in this case, we have to apply\ntuples in the delta to the view one by one instead of applying after\naggregating tuples in the delta.\n\nIn both case, we can not use these support functions in SQL via SPI because\nthe type of some aggregates is internal. We have to alter the current\napply_delta implementation if we adopt a way using these support functions.\nInstead, we also can add support functions for IVM independent to partial\naggregate or moving-aggregate. Maybe this is also one of options.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Wed, 4 Dec 2019 21:43:09 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
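The combine / "inverse combine" idea discussed above can be made concrete with a toy model for count, sum, and avg, whose transition states can be both merged (insert delta) and subtracted (delete delta). The function names here are invented for illustration; this is not the pg_aggregate API.

```python
# Toy model of the "inverse combine function" idea: a (count, sum) transition
# state supports combining an insert delta and inverse-combining a delete
# delta, so avg can be maintained without rescanning the base table.

def combine(state, delta):
    """Merge an insert delta into the aggregate state (like aggcombinefn)."""
    return (state[0] + delta[0], state[1] + delta[1])

def inverse_combine(state, delta):
    """Subtract a delete delta from the aggregate state (the proposed inverse)."""
    return (state[0] - delta[0], state[1] - delta[1])

def finalize_avg(state):
    count_, sum_ = state
    return sum_ / count_ if count_ else None

state = (4, 100.0)                         # 4 rows summing to 100 in the view
state = combine(state, (2, 30.0))          # 2 rows inserted, summing to 30
state = inverse_combine(state, (1, 10.0))  # 1 row with value 10 deleted
assert state == (5, 120.0)
assert finalize_avg(state) == 24.0
# Note: min/max have no such inverse, which is presumably why the patch keeps
# auxiliary columns like __ivm_count_max__ and cannot treat them the same way.
```

This also shows why deletes are the hard case: partial aggregation as it exists today only covers the `combine` direction.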
{
"msg_contents": "On Wed, 4 Dec 2019 21:18:02 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Hi.\n> \n> I found the problem after running \"ALTER MATERIALIZED VIEW ... RENAME TO\".\n> If a view created with \"CREATE INCREMENT MATERIALIZED VIEW\" is renamed,\n> subsequent INSERT operations to the base table will fail.\n> \n> Error message.\n> ```\n> ERROR: could not open relation with OID 0\n\nThank you for your pointing out this issue! This error occurs\nbecause the view's OID is retrieved using the view name.\nConsidering that the name can be changed, this is obviously\nwrong. We'll fix it.\n\nRegards,\nYugo Nagata\n\n> ```\n> \n> Execution log.\n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ psql -U postgres test -e -f\n> ~/test/ivm/alter_rename_bug.sql\n> DROP TABLE IF EXISTS table_x CASCADE;\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:1: NOTICE: drop cascades\n> to materialized view group_imv\n> DROP TABLE\n> CREATE TABLE table_x AS\n> SELECT generate_series(1, 10000) AS id,\n> ROUND(random()::numeric * 100, 2) AS data,\n> CASE (random() * 5)::integer\n> WHEN 4 THEN 'group-a'\n> WHEN 3 THEN 'group-b'\n> ELSE 'group-c'\n> END AS part_key\n> ;\n> SELECT 10000\n> Table \"public.table_x\"\n> Column | Type | Collation | Nullable | Default\n> ----------+---------+-----------+----------+---------\n> id | integer | | |\n> data | numeric | | |\n> part_key | text | | |\n> \n> DROP MATERIALIZED VIEW IF EXISTS group_imv;\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:15: NOTICE: materialized\n> view \"group_imv\" does not exist, skipping\n> DROP MATERIALIZED VIEW\n> CREATE INCREMENTAL MATERIALIZED VIEW group_imv AS\n> SELECT part_key, COUNT(*), MAX(data), MIN(data), SUM(data), AVG(data)\n> FROM table_x\n> GROUP BY part_key;\n> SELECT 3\n> List of relations\n> Schema | Name | Type | Owner\n> --------+-----------+-------------------+----------\n> public | group_imv | materialized view | postgres\n> public | table_x | table | postgres\n> (2 rows)\n> \n> 
Materialized view \"public.group_imv\"\n> Column | Type | Collation | Nullable | Default\n> -------------------+---------+-----------+----------+---------\n> part_key | text | | |\n> count | bigint | | |\n> max | numeric | | |\n> min | numeric | | |\n> sum | numeric | | |\n> avg | numeric | | |\n> __ivm_count_max__ | bigint | | |\n> __ivm_count_min__ | bigint | | |\n> __ivm_count_sum__ | bigint | | |\n> __ivm_count_avg__ | bigint | | |\n> __ivm_sum_avg__ | numeric | | |\n> __ivm_count__ | bigint | | |\n> \n> SELECT * FROM group_imv ORDER BY part_key;\n> part_key | count | max | min | sum | avg\n> ----------+-------+-------+------+-----------+---------------------\n> group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n> group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n> group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n> (3 rows)\n> \n> ALTER MATERIALIZED VIEW group_imv RENAME TO group_imv2;\n> ALTER MATERIALIZED VIEW\n> List of relations\n> Schema | Name | Type | Owner\n> --------+------------+-------------------+----------\n> public | group_imv2 | materialized view | postgres\n> public | table_x | table | postgres\n> (2 rows)\n> \n> Materialized view \"public.group_imv2\"\n> Column | Type | Collation | Nullable | Default\n> -------------------+---------+-----------+----------+---------\n> part_key | text | | |\n> count | bigint | | |\n> max | numeric | | |\n> min | numeric | | |\n> sum | numeric | | |\n> avg | numeric | | |\n> __ivm_count_max__ | bigint | | |\n> __ivm_count_min__ | bigint | | |\n> __ivm_count_sum__ | bigint | | |\n> __ivm_count_avg__ | bigint | | |\n> __ivm_sum_avg__ | numeric | | |\n> __ivm_count__ | bigint | | |\n> \n> SET client_min_messages = debug5;\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:30: DEBUG:\n> CommitTransaction(1) name: unnamed; blockState: STARTED; state:\n> INPROGRESS, xid/subid/cid: 0/1/0\n> SET\n> INSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2),\n> 
'gruop_d');\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG:\n> StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS,\n> xid/subid/cid: 0/1/0\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation\n> \"public.group_imv\" does not exist\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: DEBUG: relation\n> \"public.group_imv\" does not exist\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:33: ERROR: could not\n> open relation with OID 0\n> RESET client_min_messages;\n> psql:/home/ec2-user/test/ivm/alter_rename_bug.sql:34: DEBUG:\n> StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS,\n> xid/subid/cid: 0/1/0\n> RESET\n> SELECT * FROM group_imv2 ORDER BY part_key;\n> part_key | count | max | min | sum | avg\n> ----------+-------+-------+------+-----------+---------------------\n> group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n> group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n> group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n> (3 rows)\n> \n> ALTER MATERIALIZED VIEW group_imv2 RENAME TO group_imv;\n> ALTER MATERIALIZED VIEW\n> INSERT INTO table_x VALUES (10000001, ROUND(random()::numeric * 100, 2),\n> 'gruop_d');\n> INSERT 0 1\n> SELECT * FROM group_imv ORDER BY part_key;\n> part_key | count | max | min | sum | avg\n> ----------+-------+-------+-------+-----------+---------------------\n> group-a | 1966 | 99.85 | 0.05 | 98634.93 | 50.1703611393692777\n> group-b | 2021 | 99.99 | 0.17 | 102614.02 | 50.7738842157347848\n> group-c | 6013 | 99.99 | 0.02 | 300968.43 | 50.0529569266589057\n> gruop_d | 1 | 81.43 | 81.43 | 81.43 | 81.4300000000000000\n> (4 rows)\n> \n> [ec2-user@ip-10-0-1-10 ivm]$\n> ```\n> \n> This may be because IVM internal information is not modified when the view\n> name is renamed.\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance 
(IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. 
This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. 
How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. 
In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 5 Dec 2019 10:19:51 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
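The failure mode Yugo describes above (the view's OID being looked up by name) can be illustrated with a toy catalog: a name-based lookup captured at CREATE time returns InvalidOid after a rename, while a stored OID keeps resolving. This is a schematic illustration only, not PostgreSQL's actual catalog code.

```python
# Toy illustration of the rename bug: looking the matview up by the *name*
# captured at CREATE time fails after ALTER ... RENAME (yielding 0, which
# plays the role of InvalidOid, hence "could not open relation with OID 0").
# Storing the OID instead survives the rename.

catalog = {"group_imv": 16384}     # name -> OID, as in pg_class
oid_to_name = {16384: "group_imv"}

def lookup_by_name(name):
    return catalog.get(name, 0)    # 0 stands in for InvalidOid

def rename(old, new):
    oid = catalog.pop(old)
    catalog[new] = oid
    oid_to_name[oid] = new

stored_name = "group_imv"                  # what the buggy path remembers
stored_oid = lookup_by_name("group_imv")   # what the fixed path remembers

rename("group_imv", "group_imv2")

assert lookup_by_name(stored_name) == 0         # name-based lookup now fails
assert oid_to_name[stored_oid] == "group_imv2"  # OID-based lookup still works
```

The proposed fix, as stated in the reply, is to resolve the view by a stored OID rather than by its creation-time name.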
{
"msg_contents": "Hi,\n\nAttached is the latest patch (v10) to add support for Incremental\nMaterialized View Maintenance (IVM). \n\nIVM is a way to make materialized views up-to-date in which only\nincremental changes are computed and applied on views rather than\nrecomputing the contents from scratch as REFRESH MATERIALIZED VIEW\ndoes. IVM can update materialized views more efficiently\nthan recomputation when only small part of the view need updates.\n\nThere are two approaches with regard to timing of view maintenance:\nimmediate and deferred. In immediate maintenance, views are updated in\nthe same transaction where its base table is modified. In deferred\nmaintenance, views are updated after the transaction is committed,\nfor example, when the view is accessed, as a response to user command\nlike REFRESH, or periodically in background, and so on. \n\nThis patch implements a kind of immediate maintenance, in which\nmaterialized views are updated immediately in AFTER triggers when a\nbase table is modified.\n\nThis supports views using:\n - inner and outer joins including self-join\n - some built-in aggregate functions (count, sum, agv, min, max)\n - a part of subqueries\n -- simple subqueries in FROM clause\n -- EXISTS subqueries in WHERE clause\n - DISTINCT and views with tuple duplicates\n\n===\nHere are major changes we made after the previous submitted patch:\n\n* Aggregate functions are checked if they can be used in IVM \n using their OID. 
Per comments from Alvaro Herrera.\n\n For this purpose, Gen_fmgrtab.pl was modified so that the OIDs of\n aggregate functions are output to fmgroids.h.\n\n* Some bug fixes, including:\n\n - A mistake in psql tab-completion pointed out by nuko-san\n - A bug related to renaming of matviews pointed out by nuko-san\n - spelling errors\n - etc.\n\n* Add documentation for IVM\n\n* The patch is split into eleven parts to make review easier,\n as suggested by Amit Langote:\n\n - 0001: Add a new syntax:\n CREATE INCREMENTAL MATERIALIZED VIEW\n - 0002: Add a new column relisivm to pg_class\n - 0003: Change trigger.c to allow prolonging the life span of tuplestores\n containing Transition Tables generated via AFTER triggers\n - 0004: Add the basic IVM feature using the counting algorithm:\n This supports inner joins, DISTINCT, and tuple duplicates.\n - 0005: Change Gen_fmgrtab.pl to output aggregate functions' OIDs\n - 0006: Add aggregate support for IVM\n - 0007: Add subquery support for IVM\n - 0008: Add outer join support for IVM\n - 0009: Add IVM support to the psql command\n - 0010: Add regression tests for IVM\n - 0011: Add documentation for IVM\n\n===\nTodo:\n\nCurrently, REFRESH and pg_dump/pg_restore are not supported, but\nwe are working on them.\n\nAlso, TRUNCATE is not supported. When a TRUNCATE command is executed\non a base table, nothing happens to the materialized views. We are\nnow considering better options, such as:\n\n- Raise an error or warning when a base table is TRUNCATEd.\n- Make the view non-scannable (like WITH NO DATA)\n- Update the view in some way. This would be easy for inner joins\n or aggregate views, but there is some difficulty with outer joins.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Fri, 20 Dec 2019 14:02:32 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\n\n\nI'm starting to take a closer look at this feature. I've just finished reading the discussion, excluding other referenced materials.\n\nThe following IVM wiki page returns an error. Does anybody know what's wrong?\n\nhttps://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\n[screen]\n----------\nMediaWiki internal error.\n\n Exception caught inside exception handler.\n\n Set $wgShowExceptionDetails = true; at the bottom of LocalSettings.php to show detailed debugging information.\n----------\n\n\nCould you give some concrete use cases, so that I can have a clearer image of the target data? In the discussion, someone referred to master data with low update frequency, because the proposed IVM implementation adds triggers on source tables, which limits the applicability to update-heavy tables.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 07:12:23 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> I'm starting to take a closer look at this feature. I've just finished reading the discussion, excluding other referenced materials.\n\nThank you!\n\n> The following IVM wiki page returns an error. Does anybody know what's wrong?\n> \n> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\nI don't have any problem with the page. Maybe a temporary error?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 20 Dec 2019 17:10:58 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "SELECT statement that is not IMMUTABLE must not be specified when creating\na view.\n\nAn expression SELECT statement that is not IMMUTABLE must not be specified\nwhen creating a view.\n\nIn the current implementation, a SELECT statement containing an expression\nthat is not IMMUTABLE can be specified when creating a view.\nIf an incremental materialized view is created from a SELECT statement that\ncontains an expression that is not IMMUTABLE, applying the SELECT statement\nto the view returns incorrect results.\nTo prevent this, we propose that the same error occur when a non-IMMUTABLE\nexpression is specified in the \"CREATE INDEX\" statement.\n\nThe following is an inappropriate example.\n----\nCREATE TABLE base (id int primary key, data text, ts timestamp);\nCREATE TABLE\nCREATE VIEW base_v AS SELECT * FROM base\n WHERE ts >= (now() - '3 second'::interval);\nCREATE VIEW\nCREATE MATERIALIZED VIEW base_mv AS SELECT * FROM base\n WHERE ts >= (now() - '3 second'::interval);\nSELECT 0\nCREATE INCREMENTAL MATERIALIZED VIEW base_imv AS SELECT * FROM base\n WHERE ts >= (now() - '3 second'::interval);\nSELECT 0\n View \"public.base_v\"\n Column | Type | Collation | Nullable | Default |\nStorage | Description\n--------+-----------------------------+-----------+----------+---------+----------+-------------\n id | integer | | | |\nplain |\n data | text | | | |\nextended |\n ts | timestamp without time zone | | | |\nplain |\nView definition:\n SELECT base.id,\n base.data,\n base.ts\n FROM base\n WHERE base.ts >= (now() - '00:00:03'::interval);\n\n Materialized view \"public.base_mv\"\n Column | Type | Collation | Nullable | Default |\nStorage | Stats target | Description\n--------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | | |\nplain | |\n data | text | | | |\nextended | |\n ts | timestamp without time zone | | | |\nplain | |\nView definition:\n SELECT base.id,\n base.data,\n 
base.ts\n FROM base\n WHERE base.ts >= (now() - '00:00:03'::interval);\nAccess method: heap\n\n Materialized view \"public.base_imv\"\n Column | Type | Collation | Nullable |\nDefault | Storage | Stats target | Description\n---------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | |\n | plain | |\n data | text | | |\n | extended | |\n ts | timestamp without time zone | | |\n | plain | |\n __ivm_count__ | bigint | | |\n | plain | |\nView definition:\n SELECT base.id,\n base.data,\n base.ts\n FROM base\n WHERE base.ts >= (now() - '00:00:03'::interval);\nAccess method: heap\nIncremental view maintenance: yes\n\nINSERT INTO base VALUES (generate_series(1,3), 'dummy', clock_timestamp());\nINSERT 0 3\nSELECT * FROM base_v ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n(3 rows)\n\nSELECT * FROM base_mv ORDER BY id;\n id | data | ts\n----+------+----\n(0 rows)\n\nREFRESH MATERIALIZED VIEW base_mv;\nREFRESH MATERIALIZED VIEW\nSELECT * FROM base_mv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n(3 rows)\n\nSELECT * FROM base_imv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n(3 rows)\n\nSELECT pg_sleep(3);\n pg_sleep\n----------\n\n(1 row)\n\nINSERT INTO base VALUES (generate_series(4,6), 'dummy', clock_timestamp());\nINSERT 0 3\nSELECT * FROM base_v ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 4 | dummy | 2019-12-22 11:38:29.381414\n 5 | dummy | 2019-12-22 11:38:29.381441\n 6 | dummy | 2019-12-22 11:38:29.381444\n(3 
rows)\n\nSELECT * FROM base_mv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n(3 rows)\n\nREFRESH MATERIALIZED VIEW base_mv;\nREFRESH MATERIALIZED VIEW\nSELECT * FROM base_mv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 4 | dummy | 2019-12-22 11:38:29.381414\n 5 | dummy | 2019-12-22 11:38:29.381441\n 6 | dummy | 2019-12-22 11:38:29.381444\n(3 rows)\n\nSELECT * FROM base_imv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n 4 | dummy | 2019-12-22 11:38:29.381414\n 5 | dummy | 2019-12-22 11:38:29.381441\n 6 | dummy | 2019-12-22 11:38:29.381444\n(6 rows)\n\nREFRESH MATERIALIZED VIEW base_mv;\nREFRESH MATERIALIZED VIEW\nSELECT * FROM base_imv ORDER BY id;\n id | data | ts\n----+-------+----------------------------\n 1 | dummy | 2019-12-22 11:38:26.367481\n 2 | dummy | 2019-12-22 11:38:26.367599\n 3 | dummy | 2019-12-22 11:38:26.367606\n 4 | dummy | 2019-12-22 11:38:29.381414\n 5 | dummy | 2019-12-22 11:38:29.381441\n 6 | dummy | 2019-12-22 11:38:29.381444\n(6 rows)\n----\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we 
can\n> identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. 
How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. 
In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Sun, 22 Dec 2019 20:54:41 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\n\nFirst of all, many thanks for this great feature,\nreplacing so many triggers with such a simple syntax ;o)\n\nI was wondering about performance and had a look\nat pg_stat_statements (with track=all) with IVM_v9.patch.\n\nFor each insert into a base table there are 3 statements:\n- ANALYZE pg_temp_3.pg_temp_81976\n- WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n- DROP TABLE pg_temp_3.pg_temp_81976\n\nThis generates a lot of lines in pg_stat_statements with calls = 1.\nThose statements cannot be shared because the temp table is dropped each\ntime.\n\nIs there a plan to change this?\n\nMany thanks again\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 22 Dec 2019 13:22:50 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Tatsuo Ishii <ishii@sraoss.co.jp>\n> > The following IVM wiki page returns an error. Does anybody know what's\n> wrong?\n> >\n> > https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n> \n> I don't have any problem with the page. Maybe temporary error?\n\nYeah, I can see it now. I couldn't see it over the weekend. The page was not available for at least an hour or so when I asked about this. I suppose the pgsql-www team kindly solved the issue.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Sun, 22 Dec 2019 23:38:43 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Sun, 22 Dec 2019 20:54:41 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> SELECT statement that is not IMMUTABLE must not be specified when creating\n> a view.\n> \n> An expression SELECT statement that is not IMMUTABLE must not be specified\n> when creating a view.\n> \n> In the current implementation, a SELECT statement containing an expression\n> that is not IMMUTABLE can be specified when creating a view.\n> If an incremental materialized view is created from a SELECT statement that\n> contains an expression that is not IMMUTABLE, applying the SELECT statement\n> to the view returns incorrect results.\n> To prevent this, we propose that the same error occur when a non-IMMUTABLE\n> expression is specified in the \"CREATE INDEX\" statement.\n\nThank you for pointing this out. That makes sense. The check for non-IMMUTABLE\nexpressions is missing when creating an IMMV. We'll add this.\n\nThanks,\nYugo Nagata\n\n> \n> The following is an inappropriate example.\n> ----\n> CREATE TABLE base (id int primary key, data text, ts timestamp);\n> CREATE TABLE\n> CREATE VIEW base_v AS SELECT * FROM base\n> WHERE ts >= (now() - '3 second'::interval);\n> CREATE VIEW\n> CREATE MATERIALIZED VIEW base_mv AS SELECT * FROM base\n> WHERE ts >= (now() - '3 second'::interval);\n> SELECT 0\n> CREATE INCREMENTAL MATERIALIZED VIEW base_imv AS SELECT * FROM base\n> WHERE ts >= (now() - '3 second'::interval);\n> SELECT 0\n> View \"public.base_v\"\n> Column | Type | Collation | Nullable | Default |\n> Storage | Description\n> --------+-----------------------------+-----------+----------+---------+----------+-------------\n> id | integer | | | |\n> plain |\n> data | text | | | |\n> extended |\n> ts | timestamp without time zone | | | |\n> plain |\n> View definition:\n> SELECT base.id,\n> base.data,\n> base.ts\n> FROM base\n> WHERE base.ts >= (now() - '00:00:03'::interval);\n> \n> Materialized view \"public.base_mv\"\n> Column | Type | Collation | Nullable | Default 
|\n> Storage | Stats target | Description\n> --------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | | |\n> plain | |\n> data | text | | | |\n> extended | |\n> ts | timestamp without time zone | | | |\n> plain | |\n> View definition:\n> SELECT base.id,\n> base.data,\n> base.ts\n> FROM base\n> WHERE base.ts >= (now() - '00:00:03'::interval);\n> Access method: heap\n> \n> Materialized view \"public.base_imv\"\n> Column | Type | Collation | Nullable |\n> Default | Storage | Stats target | Description\n> ---------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | |\n> | plain | |\n> data | text | | |\n> | extended | |\n> ts | timestamp without time zone | | |\n> | plain | |\n> __ivm_count__ | bigint | | |\n> | plain | |\n> View definition:\n> SELECT base.id,\n> base.data,\n> base.ts\n> FROM base\n> WHERE base.ts >= (now() - '00:00:03'::interval);\n> Access method: heap\n> Incremental view maintenance: yes\n> \n> INSERT INTO base VALUES (generate_series(1,3), 'dummy', clock_timestamp());\n> INSERT 0 3\n> SELECT * FROM base_v ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 3 | dummy | 2019-12-22 11:38:26.367606\n> (3 rows)\n> \n> SELECT * FROM base_mv ORDER BY id;\n> id | data | ts\n> ----+------+----\n> (0 rows)\n> \n> REFRESH MATERIALIZED VIEW base_mv;\n> REFRESH MATERIALIZED VIEW\n> SELECT * FROM base_mv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 3 | dummy | 2019-12-22 11:38:26.367606\n> (3 rows)\n> \n> SELECT * FROM base_imv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 
3 | dummy | 2019-12-22 11:38:26.367606\n> (3 rows)\n> \n> SELECT pg_sleep(3);\n> pg_sleep\n> ----------\n> \n> (1 row)\n> \n> INSERT INTO base VALUES (generate_series(4,6), 'dummy', clock_timestamp());\n> INSERT 0 3\n> SELECT * FROM base_v ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 4 | dummy | 2019-12-22 11:38:29.381414\n> 5 | dummy | 2019-12-22 11:38:29.381441\n> 6 | dummy | 2019-12-22 11:38:29.381444\n> (3 rows)\n> \n> SELECT * FROM base_mv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 3 | dummy | 2019-12-22 11:38:26.367606\n> (3 rows)\n> \n> REFRESH MATERIALIZED VIEW base_mv;\n> REFRESH MATERIALIZED VIEW\n> SELECT * FROM base_mv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 4 | dummy | 2019-12-22 11:38:29.381414\n> 5 | dummy | 2019-12-22 11:38:29.381441\n> 6 | dummy | 2019-12-22 11:38:29.381444\n> (3 rows)\n> \n> SELECT * FROM base_imv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 3 | dummy | 2019-12-22 11:38:26.367606\n> 4 | dummy | 2019-12-22 11:38:29.381414\n> 5 | dummy | 2019-12-22 11:38:29.381441\n> 6 | dummy | 2019-12-22 11:38:29.381444\n> (6 rows)\n> \n> REFRESH MATERIALIZED VIEW base_mv;\n> REFRESH MATERIALIZED VIEW\n> SELECT * FROM base_imv ORDER BY id;\n> id | data | ts\n> ----+-------+----------------------------\n> 1 | dummy | 2019-12-22 11:38:26.367481\n> 2 | dummy | 2019-12-22 11:38:26.367599\n> 3 | dummy | 2019-12-22 11:38:26.367606\n> 4 | dummy | 2019-12-22 11:38:29.381414\n> 5 | dummy | 2019-12-22 11:38:29.381441\n> 6 | dummy | 2019-12-22 11:38:29.381444\n> (6 rows)\n> ----\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM 
is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputing the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. Masunaga [2], who is a member of our group, and inspired by the ID-based\n> > approach [3].\n> >\n> > In our implementation, the mapping between the row OIDs of the materialized view\n> > and the base tables is stored in an \"OID map\". When a base relation is\n> > modified,\n> > an AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The actual update of the matview is triggered\n> > by the REFRESH command with the INCREMENTALLY option.\n> >\n> > However, we realized problems with our implementation. First, WITH OIDS will\n> > be removed in PG12, so OIDs are no longer available. Besides this, it\n> > would\n> > be hard to implement this since it needs many changes to executor nodes to\n> > collect base tables' OIDs while executing a query. Also, the cost of\n> > maintaining\n> > the OID map would be high.\n> >\n> > For these reasons, we started to think about implementing IVM without relying on\n> > OIDs\n> > and did a bit more surveying.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed using the counting algorithm\n> > [5]\n> > to handle projection views (using DISTINCT) properly. 
This algorithm needs\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM has since stopped, so we would like to\n> > restart it and\n> > make progress.\n> >\n> >\n> > Through our PoC implementation and surveys, I think we need to consider at\n> > least\n> > the following points for implementing IVM.\n> >\n> > 1. How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition tables in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER triggers and transition tables,\n> > but using\n> > logical decoding might be better from the standpoint of base\n> > table\n> > modification performance.\n> >\n> > If we can represent an UPDATE on a base table in a query-like form\n> > rather than\n> > as OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. Theoretically, changes on\n> > base\n> > tables are represented as deltas on relations, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed from the base table deltas based on \"change\n> > propagation equations\". For the implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe it as an\n> > SQL\n> > query to compute the delta to be applied to the materialized view.\n> >\n> > There could be several operations in a view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensible, so we can start with\n> > simple\n> > view definitions and then support more complex views.\n> >\n> >\n> > 3. 
How to identify rows to be modified in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview corresponds to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is inefficient. If\n> > the materialized\n> > view has a unique index, we can use it. Maybe we have to force\n> > materialized views\n> > to include all primary key columns of their base tables. In our PoC\n> > implementation, we\n> > used OIDs to identify rows, but these will no longer be available, as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates for the timing of maintenance: immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is committed, for example, when the view is accessed, as a response\n> > to a user\n> > request, etc.\n> >\n> > In the previous discussion [4], it was planned to start with the \"eager\"\n> > approach. 
In our PoC\n> > implementation, we used the other approach, that is, using a REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a starting point, but I am beginning to think that\n> > the eager\n> > approach may be simpler since we don't have to maintain base table\n> > changes from other\n> > past transactions.\n> >\n> > In the eager maintenance approach, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously, as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 23 Dec 2019 10:07:16 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Could you give some concrete use cases, so that I can have a clearer image of the target data? In the discussion, someone referred to master data with low update frequency, because the proposed IVM implementation adds triggers on source tables, which limits the applicability to update-heavy tables.\n\nBut if you want to always get up-to-date data, you need to pay the cost of\nREFRESH MATERIALIZED VIEW. IVM gives a choice here.\n\npgbench -s 100\ncreate materialized view mv1 as select count(*) from pgbench_accounts;\ncreate incremental materialized view mv2 as select count(*) from pgbench_accounts;\n\nNow I delete one row from pgbench_accounts.\n\ntest=# delete from pgbench_accounts where aid = 10000000;\nDELETE 1\nTime: 12.387 ms\n\nOf course this makes mv1's data obsolete:\ntest=# select * from mv1;\n count \n----------\n 10000000\n(1 row)\n\nTo reflect on mv1 the fact that a row was deleted from\npgbench_accounts, you need to refresh mv1:\n\ntest=# refresh materialized view mv1;\nREFRESH MATERIALIZED VIEW\nTime: 788.757 ms\n\nwhich takes 788ms. With mv2 you don't need to pay this cost to get the\nlatest data.\n\nThis is the kind of ideal use case for IVM, and I do not claim that IVM\nalways wins over an ordinary materialized view (or a non-materialized\nview). IVM's benefit is that the materialized view is instantly\nupdated whenever the base tables are updated, at the cost of a longer update\ntime on the base tables.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 23 Dec 2019 10:50:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: legrand legrand <legrand_legrand@hotmail.com>\n> For each insert into a base table there are 3 statements:\n> - ANALYZE pg_temp_3.pg_temp_81976\n> - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n> - DROP TABLE pg_temp_3.pg_temp_81976\n\nDoes it also include CREATE TEMPORARY TABLE, because there's DROP?\n\nI remember that repeated CREATE and DROP of temporary tables should be avoided in PostgreSQL. Dropped temporary tables leave some unused memory in CacheMemoryContext. If creation and deletion of temporary tables are done per row in a single session, say loading of large amount of data, memory bloat could crash the OS. That actually happened at a user's environment.\n\nPlus, repeated create/drop may cause system catalog bloat as well even when they are performed in different sessions. In a fortunate case, the garbage records gather at the end of the system tables, and autovacuum will free those empty areas by truncating data files. However, if some valid entry persists after the long garbage area, the system tables would remain bloated.\n\nWhat kind of workload and data are you targeting with IVM?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Mon, 23 Dec 2019 02:26:09 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 23 Dec 2019 02:26:09 +0000\n\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n\n> From: legrand legrand <legrand_legrand@hotmail.com>\n> > For each insert into a base table there are 3 statements:\n> > - ANALYZE pg_temp_3.pg_temp_81976\n> > - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n> > - DROP TABLE pg_temp_3.pg_temp_81976\n> \n> Does it also include CREATE TEMPORARY TABLE, because there's DROP?\n\nCREATE TEMPORARY TABLE is not called because the temp tables are created\nby make_new_heap() rather than by queries via SPI.\n\n> I remember that repeated CREATE and DROP of temporary tables should be avoided in PostgreSQL. Dropped temporary tables leave some unused memory in CacheMemoryContext. If creation and deletion of temporary tables are done per row in a single session, say loading of large amount of data, memory bloat could crash the OS. That actually happened at a user's environment.\n\n> Plus, repeated create/drop may cause system catalog bloat as well even when they are performed in different sessions. In a fortunate case, the garbage records gather at the end of the system tables, and autovacuum will free those empty areas by truncating data files. However, if some valid entry persists after the long garbage area, the system tables would remain bloated.\n\nThank you for explaining the problem. I understood that creating and\ndropping temporary tables is more harmful than I had thought. Although\nthis is not a concrete plan, there are two ideas to reduce the creation of\ntemporary tables:\n\n1. Create a temporary table only once, at the first view maintenance in\nthe session. This is possible if we store the names or OIDs of the temporary\ntables used for each materialized view in memory. However, users could\naccess these temp tables at any time during the session.\n\n2. Use tuplestores instead of temporary tables. Tuplestores can be\nconverted to Ephemeral Named Relations (ENRs) and used in queries.\nThis doesn't require updating system catalogs, but indexes cannot be\nused for access.\n\n> \n> What kind of workload and data are you targeting with IVM?\n\nIVM (with the immediate maintenance approach) would be efficient\nin situations where modifications of base tables are not frequent.\nIn such situations, creating and dropping temp tables is not so\nfrequent either, but it would still be possible for the problem\nyou are concerned about to occur. So it seems worth considering ways to\nreduce the use of temp tables.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 23 Dec 2019 15:50:58 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 7:51 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>\n> On Mon, 23 Dec 2019 02:26:09 +0000\n> \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n>\n> > From: legrand legrand <legrand_legrand@hotmail.com>\n> > > For each insert into a base table there are 3 statements:\n> > > - ANALYZE pg_temp_3.pg_temp_81976\n> > > - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n> > > - DROP TABLE pg_temp_3.pg_temp_81976\n> >\n> > Does it also include CREATE TEMPORARY TABLE, because there's DROP?\n>\n> CREATE TEMPRARY TABLE is not called because temptables are created\n> by make_new_heap() instead of queries via SPI.\n>\n> > I remember that repeated CREATE and DROP of temporary tables should be avoided in PostgreSQL. Dropped temporary tables leave some unused memory in CacheMemoryContext. If creation and deletion of temporary tables are done per row in a single session, say loading of large amount of data, memory bloat could crash the OS. That actually happened at a user's environment.\n>\n> > Plus, repeated create/drop may cause system catalog bloat as well even when they are performed in different sessions. In a fortunate case, the garbage records gather at the end of the system tables, and autovacuum will free those empty areas by truncating data files. However, if some valid entry persists after the long garbage area, the system tables would remain bloated.\n>\n> Thank you for explaining the problem. I understood that creating and\n> dropping temprary tables is harmful more than I have thought. Although\n> this is not a concrete plan, there are two ideas to reduce creating\n> temporary tables:\n\nFrom the pg_stat_statements point of view, utility command support is\nalready quite poor: with many workloads it's practically impossible to\nactivate track_utility, as it would otherwise pollute the hashtable with an\ninfinity of queries executed only once (random prepared transaction\nnames, random cursor names...). 
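As a minimal sketch of that pollution (the cursor names and the queried table here are only illustrative, and track_utility = on is assumed):

```sql
-- Each DECLARE below differs only in its generated cursor name, yet with
-- track_utility = on, pg_stat_statements records a separate entry for each.
BEGIN;
DECLARE cur_8f3a CURSOR FOR SELECT aid FROM pgbench_accounts WHERE bid = 1;
CLOSE cur_8f3a;
DECLARE cur_91bc CURSOR FOR SELECT aid FROM pgbench_accounts WHERE bid = 1;
CLOSE cur_91bc;
COMMIT;

-- Today this returns one row per distinct cursor name; a normalized form
-- such as DECLARE ? AS CURSOR FOR ... would collapse them into one entry.
SELECT query, calls FROM pg_stat_statements WHERE query LIKE 'DECLARE%';
```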
I'm wondering whether we should\nnormalize utility statements by deparsing the utilityStmt, and also\nnormalize some identifiers (maybe optionally with a GUC), e.g.\n\"DECLARE ? AS CURSOR FOR normalized_query_here\". However, commands\nlike VACUUM or DROP would be better kept as-is.\n\n\n",
"msg_date": "Mon, 23 Dec 2019 08:22:20 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Tatsuo Ishii <ishii@sraoss.co.jp>\n> the target data? In the discussion, someone referred to master data with low\n> update frequency, because the proposed IVM implementation adds triggers on\n> source tables, which limits the applicability to update-heavy tables.\n> \n> But if you want to get always up-to-data you need to pay the cost for\n> REFRESH MATERIALIZED VIEW. IVM gives a choice here.\n\nThank you, that clarified things to some extent. What kind of data do you have in mind as an example?\n\nMaterialized views remind me of their use in a data warehouse. Oracle covers the topic in its Database Data Warehousing Guide, and Microsoft has just started to offer the materialized view feature in its Azure Synapse Analytics (formerly SQL Data Warehouse). AWS has also previewed Redshift's materialized view feature at re:Invent 2019. Are you targeting the data warehouse (analytics) workload?\n\nIIUC, to put it (overly) simply, the data warehouse has two kinds of tables:\n\n* Facts (transaction data): e.g. sales, user activity\nLarge amount. INSERT-only, on a regular basis (ETL/ELT) or continuously (streaming)\n\n* Dimensions (master/reference data): e.g. product, customer, time, country\nSmall amount. Infrequently INSERTed or UPDATEd.\n\n\nThe proposed trigger-based approach does not seem to be suitable for the facts, because the trigger overhead imposed on data loading may offset or exceed the time saved by incrementally refreshing the materialized views.\n\nThen, does the proposed feature fit the dimension tables? If the materialized view is only based on the dimension data, then a full REFRESH of the materialized view wouldn't take so long. The typical materialized view would join the fact and dimension tables. Then the fact table will have to have the triggers, causing the data-loading slowdown.\n\nI'm saying this because I'm concerned about the trigger-based overhead. 
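To make the shape concrete, the kind of fact/dimension materialized view in question might look like the following sketch (table and column names are hypothetical):

```sql
-- Hypothetical star-schema materialized view: it joins a large,
-- INSERT-heavy fact table with a small, stable dimension table, so
-- IVM triggers would sit on the fact table's bulk-load path.
CREATE MATERIALIZED VIEW sales_by_product AS
SELECT p.product_name,
       count(*)      AS n_sales,
       sum(s.amount) AS total_amount
FROM sales s                                   -- fact: large, bulk-loaded
JOIN product p ON p.product_id = s.product_id  -- dimension: small
GROUP BY p.product_name;
```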
As you know, Oracle uses materialized view logs to save changes and incrementally apply them later to the materialized views (REFRESH ON STATEMENT materialized views don't require the materialized view log, so they might use triggers). Does any commercial-grade database implement materialized views using triggers? I couldn't find relevant information regarding Azure Synapse and Redshift.\n\nIf our only handy option is a trigger, can we minimize the overhead by doing the view maintenance at transaction commit?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Mon, 23 Dec 2019 07:43:23 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Yugo Nagata <nagata@sraoss.co.jp>\n> 1. Create a temporary table only once at the first view maintenance in\n> this session. This is possible if we store names or oid of temporary\n> tables used for each materialized view in memory. However, users may\n> access to these temptables whenever during the session.\n> \n> 2. Use tuplestores instead of temprary tables. Tuplestores can be\n> converted to Ephemeral Name Relation (ENR) and used in queries.\n> It doesn't need updating system catalogs, but indexes can not be\n> used to access.\n\nHow about unlogged tables? I thought the point of using a temp table is to avoid WAL overhead.\n\nOne concern about the temp table is that it precludes the use of distributed transactions (PREPARE TRANSACTION fails if the transaction accessed a temp table). This could become a headache once FDW supports 2PC (which Sawada-san started and Horiguchi-san has taken over). In the near future, PostgreSQL may evolve into a shared-nothing database with distributed transactions like Postgres-XL.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Mon, 23 Dec 2019 08:08:53 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> But if you want to get always up-to-data you need to pay the cost for\n>> REFRESH MATERIALIZED VIEW. IVM gives a choice here.\n> \n> Thank you, that clarified to some extent. What kind of data do you think of as an example?\n> \n> Materialized view reminds me of the use in a data warehouse. Oracle handles the top in its Database Data Warehousing Guide, and Microsoft has just started to offer the materialized view feature in its Azure Synapse Analytics (formerly SQL Data Warehouse). AWS also has previewed Redshift's materialized view feature in re:Invent 2019. Are you targeting the data warehouse (analytics) workload?\n\nFirst of all, we do not think that the current approach is the final\none. Instead, we want to implement the IVM feature step by step: i.e. we\nstart with the \"immediate update\" approach, because it's simple and easier\nto implement. Then we will add the \"deferred update\" mode later on.\n\nIn fact, Oracle has both \"immediate update\" and \"deferred update\" modes\nof IVM (actually there are more modes in their implementation).\n\nI recommend looking into Oracle's materialized view feature\nclosely. For a fair evaluation, we should probably compare the IVM patch\nwith Oracle's \"immediate update\" (they call it \"on statement\") mode.\n\n> IIUC, to put (over) simply, the data warehouse has two kind of tables:\n\nProbably the deferred IVM mode is more suitable for DWH. However, as I said\nearlier, we hope to implement the immediate mode first and then add the\ndeferred mode. Let's start with the simple one and then add more features.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 23 Dec 2019 17:13:19 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 23 Dec 2019 08:08:53 +0000\n\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n\n> From: Yugo Nagata <nagata@sraoss.co.jp>\n> > 1. Create a temporary table only once at the first view maintenance in\n> > this session. This is possible if we store names or oid of temporary\n> > tables used for each materialized view in memory. However, users may\n> > access to these temptables whenever during the session.\n> > \n> > 2. Use tuplestores instead of temprary tables. Tuplestores can be\n> > converted to Ephemeral Name Relation (ENR) and used in queries.\n> > It doesn't need updating system catalogs, but indexes can not be\n> > used to access.\n> \n> How about unlogged tables ? I thought the point of using a temp table is to avoid WAL overhead.\n\nHmm... this might be another option. However, if we use unlogged tables,\nwe will need to create them in a special schema, similar to pg_toast,\nto separate them from user tables. Otherwise, we would need to create and drop\nunlogged tables repeatedly for each session.\n\n> \n> One concern about the temp table is that it precludes the use of distributed transactions (PREPARE TRANSACTION fails if the transaction accessed a temp table.) This could become a headache when FDW has supported 2PC (which Sawada-san started and Horicuchi-san has taken over.) In the near future, PostgreSQL may evolve into a shared nothing database with distributed transactions like Postgres-XL.\n\nThis makes sense: you mean that PREPARE TRANSACTION cannot be used\nif any base table of an incrementally maintainable materialized view is\nmodified in the transaction, at least with immediate maintenance. Maybe\nthis issue can be resolved once we implement the deferred maintenance planned\nfor the future, because that way materialized views can be updated in other\ntransactions.\n\n> \n> \n> Regards\n> Takayuki Tsunakawa\n> \n> \n> \n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 23 Dec 2019 18:50:47 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\nregarding my initial post:\n\n> For each insert into a base table there are 3 statements:\n> - ANALYZE pg_temp_3.pg_temp_81976\n> - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n> - DROP TABLE pg_temp_3.pg_temp_81976\n\nFor me there were 3 points to discuss:\n- create/drop tables may bloat dictionary tables\n- create/drop tables prevent \"WITH updt ...\" from being shared (with some\nplan caching)\n- it generates many lines in pg_stat_statements\n\nIn fact I like the idea of a table created per session, but I would even\nprefer a common \"table\" shared between all sessions, like a GLOBAL TEMPORARY\nTABLE (or something similar) as described here:\nhttps://www.postgresql.org/message-id/flat/157703426606.1198.2452090605041230054.pgcf%40coridan.postgresql.org#331e8344bbae904350af161fb43a0aa6\n\nThat would remove the drop/create issue, make it possible to reduce planning time for\n\"WITH updt ...\" statements\n(as done today in PL/pgSQL triggers), and would fix the pgss \"bloat\" issue.\n\nThat way, the \"cost\" of the immediate refresh approach would be easier to\naccept ;o)\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 23 Dec 2019 03:41:18 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 23 Dec 2019 03:41:18 -0700 (MST)\nlegrand legrand <legrand_legrand@hotmail.com> wrote:\n\n> Hello,\n> regarding my initial post:\n> \n> > For each insert into a base table there are 3 statements:\n> > - ANALYZE pg_temp_3.pg_temp_81976\n> > - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...\n> > - DROP TABLE pg_temp_3.pg_temp_81976\n> \n> For me there where 3 points to discuss:\n> - create/drop tables may bloat dictionnary tables \n> - create/drop tables prevents \"WITH updt ...\" from being shared (with some\n> plan caching)\n> - generates many lines in pg_stat_statements\n> \n> In fact I like the idea of a table created per session, but I would even\n> prefer a common \"table\" shared between all sessions like GLOBAL TEMPORARY\n> TABLE (or something similar) as described here:\n> https://www.postgresql.org/message-id/flat/157703426606.1198.2452090605041230054.pgcf%40coridan.postgresql.org#331e8344bbae904350af161fb43a0aa6\n\nAlthough I have not looked into this thread, this may help if it is\nimplemented. However, it would still be necessary to truncate the table\nbefore the view maintenance, because such tables always exist and can be\naccessed and modified by any user.\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 24 Dec 2019 11:01:48 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Tatsuo Ishii <ishii@sraoss.co.jp>\n> First of all, we do not think that current approach is the final\n> one. Instead we want to implement IVM feature one by one: i.e. we\n> start with \"immediate update\" approach, because it's simple and easier\n> to implement. Then we will add \"deferred update\" mode later on.\n\nI agree about incremental feature introduction. What I'm simply asking for is the concrete use case (workload and data), so that I can convince myself that this feature is useful and focus on reviewing and testing (because the patch seems big and difficult...)\n\n\n> In fact Oracle has both \"immediate update\" and \"deferred update\" mode\n> of IVM (actually there are more \"mode\" with their implementation).\n> \n> I recommend you to look into Oracle's materialized view feature\n> closely. For fair evaluation, probably we should compare the IVM patch\n> with Oracle's \"immediate update\" (they call it \"on statement\") mode.\n> \n> Probably deferred IVM mode is more suitable for DWH. However as I said\n> earlier, we hope to implement the immediate mode first then add the\n> deferred mode. Let's start with simple one then add more features.\n\nYes, I know Oracle's ON STATEMENT refresh mode (I attached references at the end for others).\n\nUnfortunately, it's not clear to me which of ON STATEMENT or ON COMMIT the user should choose. The benefit of ON STATEMENT is that the user does not have to create and maintain the materialized view log. But I'm not sure if and when the benefit outweighs the performance overhead on DML statements. It's not disclosed whether ON STATEMENT uses triggers.\n\nCould you give your opinion on the following, to better understand the proposed feature and/or Oracle's ON STATEMENT refresh mode?\n\n* What use case does the feature fit?\nIf the trigger makes it difficult to use in the data warehouse, does the feature target OLTP?\nWhat kind of data and query would benefit most from the feature (e.g. 
join of a large sales table and a small product table, where the data volume and frequency of data loading is ...)?\nIn other words, this is about what kind of example we can recommend as a typical use case of this feature.\n\n* Do you think the benefit of ON STATEMENT (i.e. not having to use a materialized view log) outweighs the drawback of ON STATEMENT (i.e. DML overhead)?\n\n* Do you think it's important to refresh the materialized view after every statement, or is the per-statement refresh not a requirement but simply a result of the implementation?\n\n\n[References]\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/refreshing-materialized-views.html#GUID-C40C225A-8328-44D5-AE90-9078C2C773EA\n--------------------------------------------------\n7.1.5 About ON COMMIT Refresh for Materialized Views \n\nA materialized view can be refreshed automatically using the ON COMMIT method. Therefore, whenever a transaction commits which has updated the tables on which a materialized view is defined, those changes are automatically reflected in the materialized view. The advantage of using this approach is you never have to remember to refresh the materialized view. The only disadvantage is the time required to complete the commit will be slightly longer because of the extra processing involved. However, in a data warehouse, this should not be an issue because there is unlikely to be concurrent processes trying to update the same table. \n\n\n7.1.6 About ON STATEMENT Refresh for Materialized Views \n\nA materialized view that uses the ON STATEMENT refresh mode is automatically refreshed every time a DML operation is performed on any of the materialized view’s base tables. \n\nWith the ON STATEMENT refresh mode, any changes to the base tables are immediately reflected in the materialized view. There is no need to commit the transaction or maintain materialized view logs on the base tables. 
If the DML statements are subsequently rolled back, then the corresponding changes made to the materialized view are also rolled back. \n\nThe advantage of the ON STATEMENT refresh mode is that the materialized view is always synchronized with the data in the base tables, without the overhead of maintaining materialized view logs. However, this mode may increase the time taken to perform a DML operation because the materialized view is being refreshed as part of the DML operation. \n--------------------------------------------------\n\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/release-changes.html#GUID-2A2D6E3B-A3FD-47A8-82A3-1EF95AEF5993\n--------------------------------------------------\nON STATEMENT refresh mode for materialized views \nThe ON STATEMENT refresh mode refreshes materialized views every time a DML operation is performed on any base table, without the need to commit the transaction. This mode does not require you to maintain materialized view logs on the base tables. \n--------------------------------------------------\n\n\nhttp://www.oracle.com/us/solutions/sap/matview-refresh-db12c-2877319.pdf\n--------------------------------------------------\nWe have introduced a new Materialized View (MV) refresh mechanism called ON STATEMENT refresh. With the ON STATEMENT refresh method, an MV is automatically refreshed whenever DML happens on a base table of the MV. Therefore, whenever a DML happens on any table on which a materialized view is defined, the change is automatically reflected in the materialized view. The advantage of using this approach is that the user no long needs to create a materialized view log on each of the base table in order to do fast refresh. The refresh can then avoid the overhead introduced by MV logging but still keep the materialized view refreshed all the time.\n\nSpecify ON STATEMENT to indicate that a fast refresh is to occur whenever DML happens on a base table of the materialized view. 
This is to say, ON STATEMENT materialized view is always in sync with base table changes even before the transaction commits. If a transaction that made changes to the base tables rolls back, the corresponding changes in on statement MV are rolled back as well. This clause may increase the time taken to complete a DML, because the database performs the refresh operation as part of the DML execution. However, unlike other types of fast refreshable materialized views, ON STATEMENT MV refresh no longer requires MV log on the base tables or any extra work on MV logs in order to do fast refresh.\n--------------------------------------------------\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Tue, 24 Dec 2019 06:52:35 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Yugo Nagata <nagata@sraoss.co.jp>\n> On Mon, 23 Dec 2019 08:08:53 +0000\n> \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n> > How about unlogged tables ? I thought the point of using a temp table is to\n> avoid WAL overhead.\n> \n> Hmm... this might be another option. However, if we use unlogged tables,\n> we will need to create them in a special schema similar to pg_toast\n> to split this from user tables. Otherwise, we need to create and drop\n> unlogged tables repeatedly for each session.\n\nMaybe we can create the work tables in the same schema as the materialized view, following:\n\n* Prefix the table name to indicate that the table is system-managed, thus alluding to the user that manually deleting the table would break something. This is like the system attribute __imv_count you are proposing.\n\n* Describe the above in the manual. Columns of serial and bigserial data type similarly create sequences behind the scenes.\n\n* Make the work tables depend on the materialized view by recording the dependency in pg_depend, so that Dropping the materialized view will also drop its work tables.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 24 Dec 2019 07:07:35 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Unfortunately, it's not clear to me which of ON STATEMENT or ON COMMIT the user should choose. The benefit of ON STATEMENT is that the user does not have to create and maintain the materialized view log. But I'm not sure if and when the benefit defeats the performance overhead on DML statements. It's not disclosed whether ON STATEMENT uses triggers.\n\nAFAIK benefit of ON STATEMENT is the transaction can see the result of\nupdate to the base tables. With ON COMMIT, the transaction does not\nsee the result until the transaction commits.\n\n> Could you give your opinion on the following to better understand the proposed feature and/or Oracle's ON STATEMENT refresh mode?\n> \n> * What use case does the feature fit?\n> If the trigger makes it difficult to use in the data ware house, does the feature target OLTP?\n\nWell, I can see use cases of IVM in both DWH and OLTP.\n\nFor example, a user create a DWH-like data using materialized\nview. After the initial data is loaded, the data is seldom updated.\nHowever one day a user wants to change just one row to see how it\naffects to the whole DWH data. IVM will help here because it could be\ndone in shorter time than loading whole data.\n\nAnother use case is a ticket selling system. The system shows how many\ntickets remain in a real time manner. For this purpose it needs to\ncount the number of tickets already sold from a log table. By using\nIVM, it could be accomplished in simple and effective way.\n\n> What kind of data and query would benefit most from the feature (e.g. join of a large sales table and a small product table, where the data volume and frequency of data loading is ...)?\n> In other words, this is about what kind of example we can recommend as a typical use case of this feature.\n\nHere are some use cases suitable for IVM I can think of:\n\n- Users are creating home made triggers to get data from tables. 
Since\n IVM could eliminate some of those triggers, we could expect less\n maintenance cost and fewer bugs accidentally brought in when the triggers\n were created.\n\n- Any use case in which the cost of refreshing the whole result table\n (materialized view) is so expensive that it justifies the cost of\n updating the base tables. See the example use cases above.\n\n> * Do you think the benefit of ON STATEMENT (i.e. do not have to use materialized view log) outweighs the drawback of ON STATEMENT (e.g. DML overhead)?\n\nOutweighs what?\n\n> * Do you think it's important to refresh the materialized view after every statement, or the per-statement refresh is not a requirement but simply the result of implementation?\n\nI think it's important to refresh the materialized view after every\nstatement, and the benefit for users is apparent because it brings\nreal-time data refresh to users.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 24 Dec 2019 17:09:09 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Yugo Nagata wrote\n> On Mon, 23 Dec 2019 03:41:18 -0700 (MST)\n> legrand legrand <\n\n> legrand_legrand@\n\n> > wrote:\n> \n> [ ...]\n> \n>> I would even\n>> prefer a common \"table\" shared between all sessions like GLOBAL TEMPORARY\n>> TABLE (or something similar) as described here:\n>> https://www.postgresql.org/message-id/flat/157703426606.1198.2452090605041230054.pgcf%40coridan.postgresql.org#331e8344bbae904350af161fb43a0aa6\n> \n> Although I have not looked into this thread, this may be help if this is\n> implemented. However, it would be still necessary to truncate the table\n> before the view maintenance because such tables always exist and can be\n> accessed and modified by any users.\n> \n> -- \n> Yugo Nagata <\n\n> nagata@.co\n\n> >\n\nFor information, in this table data is PRIVATE to each session, can be\npurged on the ON COMMIT event and disappear at SESSION end.\nYes, this feature could be utile only if it's implemented. And you are rigth\nsome data has to be deleted\non the ON STATEMENT event (not sure if TRUNCATE is Global or Session\nspecific in this situation).\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Tue, 24 Dec 2019 04:12:30 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Materialized view reminds me of the use in a data warehouse. Oracle handles the top in its Database Data Warehousing Guide, and Microsoft has just started to offer the materialized view feature in its Azure Synapse Analytics (formerly SQL Data Warehouse). AWS also has previewed Redshift's materialized view feature in re:Invent 2019. Are you targeting the data warehouse (analytics) workload?\n> \n> IIUC, to put (over) simply, the data warehouse has two kind of tables:\n> \n> * Facts (transaction data): e.g. sales, user activity\n> Large amount. INSERT only on a regular basis (ETL/ELT) or continuously (streaming)\n> \n> * Dimensions (master/reference data): e.g. product, customer, time, country\n> Small amount. Infrequently INSERTed or UPDATEd.\n> \n> \n> The proposed trigger-based approach does not seem to be suitable for the facts, because the trigger overhead imposed on data loading may offset or exceed the time saved by incrementally refreshing the materialized views.\n\nI think that depends on use case of the DWH. If the freshness of\nmaterialized view tables is important for a user, then the cost of the\ntrigger overhead may be acceptable for the user.\n\n> Then, does the proposed feature fit the dimension tables? If the materialized view is only based on the dimension data, then the full REFRESH of the materialized view wouldn't take so long. The typical materialized view should join the fact and dimension tables. Then, the fact table will have to have the triggers, causing the data loading slowdown.\n> \n> I'm saying this because I'm concerned about the trigger based overhead. As you know, Oracle uses materialized view logs to save changes and incrementally apply them later to the materialized views (REFRESH ON STATEMENT materialized views doesn't require the materialized view log, so it might use triggers.) Does any commercial grade database implement materialized view using triggers? 
I couldn't find relevant information regarding Azure Synapse and Redshift.\n\nI heard that REFRESH ON STATEMENT of Oracle has been added after ON\nCOMMIT materialized view. So I suspect Oracle realizes that there are\nneeds/use case for ON STATEMENT, but I am not sure.\n\n> If our only handy option is a trigger, can we minimize the overhead by doing the view maintenance at transaction commit?\n\nI am not sure it's worth the trouble. If it involves some form of\nlogging, then I think it should be used for deferred IVM first because\nit has more use case than on commit IVM.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 25 Dec 2019 08:14:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Tatsuo Ishii <ishii@sraoss.co.jp>\n> AFAIK benefit of ON STATEMENT is the transaction can see the result of\n> update to the base tables. With ON COMMIT, the transaction does not\n> see the result until the transaction commits.\n> \n> Well, I can see use cases of IVM in both DWH and OLTP.\n> \n> For example, a user create a DWH-like data using materialized\n> view. After the initial data is loaded, the data is seldom updated.\n> However one day a user wants to change just one row to see how it\n> affects to the whole DWH data. IVM will help here because it could be\n> done in shorter time than loading whole data.\n\n> I heard that REFRESH ON STATEMENT of Oracle has been added after ON\n> COMMIT materialized view. So I suspect Oracle realizes that there are\n> needs/use case for ON STATEMENT, but I am not sure.\n\nYes, it was added relatively recently in Oracle Database 12.2. As the following introduction to new features shows, the benefits are described as twofold:\n1) The transaction can see the refreshed view result without committing.\n2) The materialized view log is not needed.\n\nI guess from these that the ON STATEMENT refresh mode can be useful when the user wants to experiment with some changes to see how data change could affect the analytics result, without persisting the change. I think that type of experiment is done in completely or almost static data marts where the user is allowed to modify the data freely. The ON STATEMENT refresh mode wouldn't be for the DWH that requires high-performance, regular and/or continuous data loading and maintenance based on a rigorous discipline. 
But I'm still not sure if this is a real-world use case...\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/release-changes.html#GUID-2A2D6E3B-A3FD-47A8-82A3-1EF95AEF5993\n--------------------------------------------------\nON STATEMENT refresh mode for materialized views \nThe ON STATEMENT refresh mode refreshes materialized views every time a DML operation is performed on any base table, without the need to commit the transaction. This mode does not require you to maintain materialized view logs on the base tables. \n--------------------------------------------------\n\n\n> Another use case is a ticket selling system. The system shows how many\n> tickets remain in a real time manner. For this purpose it needs to\n> count the number of tickets already sold from a log table. By using\n> IVM, it could be accomplished in simple and effective way.\n\nWouldn't the app just have a table like ticket(id, name, quantity), decrement the quantity when the ticket is sold, and read the current quantity to know the remaining tickets? If many consumers try to buy tickets for a popular event, the materialized view refresh would limit the concurrency.\n\n\n> Here are some use cases suitable for IVM I can think of:\n> \n> - Users are creating home made triggers to get data from tables. Since\n> IVM could eliminates some of those triggers, we could expect less\n> maintenance cost and bugs accidentally brought in when the triggers\n> were created.\n> \n> - Any use case in which the cost of refreshing whole result table\n> (materialized view) is so expensive that it justifies the cost of\n> updating of base tables. See the example of use cases above.\n\nI think we need to find a typical example of this. That should be useful to write the manual article, because it's better to caution users that the IMV is a good fit for this case and not for that case. 
Using real-world table names in the syntax example will also be good.\n\n\n> > * Do you think the benefit of ON STATEMENT (i.e. do not have to use\n> materialized view log) outweighs the drawback of ON STATEMENT (e.g. DML\n> overhead)?\n> \n> Outweighs what?\n\n\"outweigh\" means \"exceed.\" I meant that I'm wondering if and why users prefer ON STATEMENT's benefit despite its additional overhead on update statements.\n\n\nBottom line: The use of triggers makes me hesitate, because I saw someone's (probably Fujii-san's) article showing that INSERTs into inheritance-and-trigger-based partitioned tables were 10 times slower than the declaration-based partitioned tables. I think I will try to find a good use case.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 25 Dec 2019 05:27:26 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> Another use case is a ticket selling system. The system shows how many\n>> tickets remain in a real time manner. For this purpose it needs to\n>> count the number of tickets already sold from a log table. By using\n>> IVM, it could be accomplished in simple and effective way.\n> \n> Wouldn't the app just have a table like ticket(id, name, quantity), decrement the quantity when the ticket is sold, and read the current quantity to know the remaining tickets? If many consumers try to buy tickets for a popular event, the materialized view refresh would limit the concurrency.\n\nYes, as long as number of sold ticks is the only important data for\nthe system, it could be true. However suppose the system wants to\nstart sort of \"campaign\" and the system needs to collect statistics of\ncounts depending on the city that each ticket buyer belongs to so that\ncertain offer is limited to first 100 ticket buyers in each city. In\nthis case IVM will give more flexible way to handle this kind of\nrequirements than having adhoc city counts column in a table.\n\n> I think we need to find a typical example of this. That should be useful to write the manual article, because it's better to caution users that the IMV is a good fit for this case and not for that case. Using real-world table names in the syntax example will also be good.\n\nIn general I agree. I'd try to collect good real-world examples by\nmyself but my experience is limited. 
I hope people in this community\ncome up with such examples.\n\n> \"outweigh\" means \"exceed.\" I meant that I'm wondering if and why users prefer ON STATEMENT's benefit despite its additional overhead on update statements.\n\nI already found at least one such user upthread, if I'm not\nmissing something.\n\n> Bottom line: The use of triggers makes me hesitate, because I saw someone's (probably Fujii-san's) article showing that INSERTs into inheritance-and-trigger-based partitioned tables were 10 times slower than the declaration-based partitioned tables. I think I will try to find a good use case.\n\nGreat. In the meantime we will try to mitigate the overhead of IVM\n(triggers are just one source of it).\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 26 Dec 2019 09:26:39 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, \n\nAttached is the latest patch (v11) to add support for Incremental Materialized View Maintenance (IVM).\n\nDifferences from the previous patch (v10) include:\n- Prohibit creating matviews including mutable functions\n\nMatviews including mutable functions (for example now(),random(), ... etc) could result in inconsistent data with the base tables.\nThis patch adds a check whether the requested matview definition includes SELECTs using mutable functions. If so, raise an error while creating the matview.\n\nThis issue is reported by nuko-san.\nhttps://www.postgresql.org/message-id/CAF3Gu1Z950HqQJzwanbeg7PmUXLc+7uZMstfnLeZM9iqDWeW9Q@mail.gmail.com\n\n\nCurrently other IVM's support status is:\n\n> IVM is a way to make materialized views up-to-date in which only\n> incremental changes are computed and applied on views rather than\n> recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n> does. IVM can update materialized views more efficiently\n> than recomputation when only small part of the view need updates.\n> \n> There are two approaches with regard to timing of view maintenance:\n> immediate and deferred. In immediate maintenance, views are updated in\n> the same transaction where its base table is modified. In deferred\n> maintenance, views are updated after the transaction is committed,\n> for example, when the view is accessed, as a response to user command\n> like REFRESH, or periodically in background, and so on. 
\n> \n> This patch implements a kind of immediate maintenance, in which\n> materialized views are updated immediately in AFTER triggers when a\n> base table is modified.\n> \n> This supports views using:\n> - inner and outer joins including self-join\n> - some built-in aggregate functions (count, sum, avg, min, max)\n> - a part of subqueries\n> -- simple subqueries in FROM clause\n> -- EXISTS subqueries in WHERE clause\n> - DISTINCT and views with tuple duplicates\n> \n> ===\n> Here are major changes we made after the previously submitted patch:\n> \n> * Aggregate functions are checked if they can be used in IVM \n> using their OID. Per comments from Alvaro Herrera.\n> \n> For this purpose, Gen_fmgrtab.pl was modified so that OIDs of\n> aggregate functions are output to fmgroids.h.\n> \n> * Some bug fixes including:\n> \n> - A mistake in psql tab-completion pointed out by nuko-san\n> - A bug related to renaming a matview pointed out by nuko-san\n> - spelling errors\n> - etc.\n> \n> * Add documentation for IVM\n> \n> * Patch is split into eleven parts to make review easier,\n> as suggested by Amit Langote:\n> \n> - 0001: Add a new syntax:\n> CREATE INCREMENTAL MATERIALIZED VIEW\n> - 0002: Add a new column relisivm to pg_class\n> - 0003: Change trigger.c to allow prolonging the life span of tuplestores\n> containing Transition Tables generated via AFTER trigger\n> - 0004: Add the basic IVM feature using the counting algorithm:\n> This supports inner joins, DISTINCT, and tuple duplicates.\n> - 0005: Change GEN_fmgrtab.pl to output aggregate functions' OIDs\n> - 0006: Add aggregates support for IVM\n> - 0007: Add subqueries support for IVM\n> - 0008: Add outer joins support for IVM\n> - 0009: Add IVM support to psql command\n> - 0010: Add regression tests for IVM\n> - 0011: Add documentation for IVM\n> \n> ===\n> Todo:\n> \n> Currently, REFRESH and pg_dump/pg_restore are not supported, but\n> we are working on them.\n> \n> Also, TRUNCATE is not supported. 
When TRUNCATE command is executed\n> on a base table, nothing occurs on materialized views. We are\n> now considering other, better options, like:\n> \n> - Raise an error or warning when a base table is TRUNCATEed.\n> - Make the view non-scannable (like WITH NO DATA)\n> - Update the view in some way. It would be easy for inner joins\n> or aggregate views, but there is some difficulty with outer joins.\n\nBest Regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_date": "Thu, 26 Dec 2019 11:03:02 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 24 Dec 2019 07:07:35 +0000\n\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n\n> From: Yugo Nagata <nagata@sraoss.co.jp>\n> > On Mon, 23 Dec 2019 08:08:53 +0000\n> > \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote:\n> > > How about unlogged tables ? I thought the point of using a temp table is to\n> > avoid WAL overhead.\n> > \n> > Hmm... this might be another option. However, if we use unlogged tables,\n> > we will need to create them in a special schema similar to pg_toast\n> > to split this from user tables. Otherwise, we need to create and drop\n> > unlogged tables repeatedly for each session.\n> \n> Maybe we can create the work tables in the same schema as the materialized view, following:\n> \n> * Prefix the table name to indicate that the table is system-managed, thus alluding to the user that manually deleting the table would break something. This is like the system attribute __imv_count you are proposing.\n> \n> * Describe the above in the manual. Columns of serial and bigserial data type similarly create sequences behind the scenes.\n> \n> * Make the work tables depend on the materialized view by recording the dependency in pg_depend, so that Dropping the materialized view will also drop its work tables.\n\nMaybe it works, but instead of using special names for work tables, we can also create\na schema whose name is special and place work tables in this. This will not annoy users\nwith information they are not interested in when, for example, psql meta-commands like\n\\d are used.\n\nAnyway, I understood it is better to avoid creating and dropping temporary tables\nduring view maintenance per statement.\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 26 Dec 2019 11:36:47 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\nThank you for this patch.\n\nI have tried to use an other patch with yours:\n\"Planning counters in pg_stat_statements (using pgss_store)\"\nhttps://www.postgresql.org/message-id/CAOBaU_Y12bn0tOdN9RMBZn29bfYYH11b2CwKO1RO7dX9fQ3aZA%40mail.gmail.com\n\nsetting\nshared_preload_libraries='pg_stat_statements'\npg_stat_statements.track=all\nand creating the extension\n\n\nWhen trying following syntax:\n\ncreate table b1 (id integer, x numeric(10,3));\ncreate incremental materialized view mv1 as select id, count(*),sum(x) from\nb1 group by id;\ninsert into b1 values (1,1)\n\nI got an ASSERT FAILURE in pg_stat_statements.c\non\n\tAssert(query != NULL);\n\ncomming from matview.c\n\trefresh_matview_datafill(dest_old, query, queryEnv, NULL);\nor\n\trefresh_matview_datafill(dest_new, query, queryEnv, NULL);\n\n\nIf this (last) NULL field was replaced by the query text, a comment or just\n\"n/a\",\nit would fix the problem.\n\nCould this be investigated ?\n\nThanks in advance\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 27 Dec 2019 16:42:04 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 12:42 AM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Hello,\n> Thank you for this patch.\n>\n> I have tried to use an other patch with yours:\n> \"Planning counters in pg_stat_statements (using pgss_store)\"\n> https://www.postgresql.org/message-id/CAOBaU_Y12bn0tOdN9RMBZn29bfYYH11b2CwKO1RO7dX9fQ3aZA%40mail.gmail.com\n>\n> setting\n> shared_preload_libraries='pg_stat_statements'\n> pg_stat_statements.track=all\n> and creating the extension\n>\n>\n> When trying following syntax:\n>\n> create table b1 (id integer, x numeric(10,3));\n> create incremental materialized view mv1 as select id, count(*),sum(x) from\n> b1 group by id;\n> insert into b1 values (1,1)\n>\n> I got an ASSERT FAILURE in pg_stat_statements.c\n> on\n> Assert(query != NULL);\n>\n> comming from matview.c\n> refresh_matview_datafill(dest_old, query, queryEnv, NULL);\n> or\n> refresh_matview_datafill(dest_new, query, queryEnv, NULL);\n>\n>\n> If this (last) NULL field was replaced by the query text, a comment or just\n> \"n/a\",\n> it would fix the problem.\n>\n> Could this be investigated ?\n\nI digged deeper into this. I found a bug in the pg_stat_statements\npatch, as the new pgss_planner_hook() doesn't check for a non-zero\nqueryId, which I think should avoid that problem. This however indeed\nraises the question on whether the query text should be provided, and\nif the behavior is otherwise correct. If I understand correctly, for\nnow this specific query won't go through parse_analysis, thus won't\nget a queryId and will be ignored in pgss_ExecutorEnd, so it'll be\nentirely invisible, except with auto_explain which will only show an\norphan plan like this:\n\n2019-12-28 12:03:29.334 CET [9399] LOG: duration: 0.180 ms plan:\nHashAggregate (cost=0.04..0.06 rows=1 width=60)\n Group Key: new_16385_0.id\n -> Named Tuplestore Scan (cost=0.00..0.02 rows=1 width=52)\n\n\n",
"msg_date": "Sat, 28 Dec 2019 12:05:50 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "LIMIT clause without ORDER BY should be prohibited when creating\nincremental materialized views.\n\nIn SQL, the result of a LIMIT clause without ORDER BY is undefined.\nIf the LIMIT clause is allowed when creating an incremental materialized\nview, incorrect results will be obtained when the view is updated after\nupdating the source table.\n\n```\n[ec2-user@ip-10-0-1-10 ivm]$ psql --version\npsql (PostgreSQL) 13devel-ivm-3bf6953688153fa72dd48478a77e37cf3111a1ee\n[ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f limit-problem.sql\nDROP TABLE IF EXISTS test CASCADE;\npsql:limit-problem.sql:1: NOTICE: drop cascades to materialized view\ntest_imv\nDROP TABLE\nCREATE TABLE test (id int primary key, data text);\nCREATE TABLE\nINSERT INTO test VALUES (generate_series(1, 10), 'foo');\nINSERT 0 10\nCREATE INCREMENTAL MATERIALIZED VIEW test_imv AS SELECT * FROM test LIMIT 1;\nSELECT 1\n Materialized view \"public.test_imv\"\n Column | Type | Collation | Nullable | Default | Storage |\nStats target | Description\n---------------+---------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | | | plain |\n |\n data | text | | | | extended |\n |\n __ivm_count__ | bigint | | | | plain |\n |\nView definition:\n SELECT test.id,\n test.data\n FROM test\n LIMIT 1;\nAccess method: heap\nIncremental view maintenance: yes\n\nSELECT * FROM test LIMIT 1;\n id | data\n----+------\n 1 | foo\n(1 row)\n\nTABLE test_imv;\n id | data\n----+------\n 1 | foo\n(1 row)\n\nUPDATE test SET data = 'bar' WHERE id = 1;\nUPDATE 1\nSELECT * FROM test LIMIT 1;\n id | data\n----+------\n 2 | foo\n(1 row)\n\nTABLE test_imv;\n id | data\n----+------\n 1 | bar\n(1 row)\n\nDELETE FROM test WHERE id = 1;\nDELETE 1\nSELECT * FROM test LIMIT 1;\n id | data\n----+------\n 2 | foo\n(1 row)\n\nTABLE test_imv;\n id | data\n----+------\n(0 rows)\n```\n\nORDER BY clause is not allowed when executing CREATE INCREMENTAL\nMATELIARIZED VIEW.\nWe propose not to allow 
LIMIT clauses as well.\n\n\nOn Thu, 27 Dec 2018 at 21:57, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputing the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2], who is a member of our group, and inspired by the ID-based\n> approach [3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables is stored in an \"OID map\". When a base relation is\n> modified,\n> an AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The actual update of the matview is triggered\n> by the REFRESH command with the INCREMENTALLY option.\n>\n> However, we realized problems with our implementation. First, WITH OIDS will\n> be removed as of PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes to executor nodes to\n> collect base tables' OIDs while executing a query. Also, the cost of\n> maintaining\n> the OID map would be high.\n>\n> For these reasons, we started to think about implementing IVM without relying on\n> OIDs\n> and did some more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use the counting algorithm\n> [5]\n> to handle projection views (using DISTINCT) properly. 
This algorithm needs\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM has now stopped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC implementation and surveys, I think we need to consider at\n> least\n> the following for implementing IVM.\n>\n> 1. How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition tables in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER triggers and transition tables,\n> but using\n> logical decoding might be better from the point of view of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theoretically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as an\n> SQL\n> query to compute the delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensible, so we can start with a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. 
How to identify rows to be modified in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview corresponds to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is inefficient. If\n> the materialized\n> view has a unique index, we can use this. Maybe we have to force\n> materialized views\n> to include all primary key columns of their base tables. In our PoC\n> implementation, we\n> used OIDs to identify rows, but these will no longer be available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates for the timing of maintenance: immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is committed, for example, when the view is accessed, as a response\n> to a user\n> request, etc.\n>\n> In the previous discussion [4], it is planned to start from the \"eager\"\n> approach. 
In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Sat, 11 Jan 2020 09:27:58 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
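The `__ivm_count__` column visible in the psql `\d` output above plays the role of the count_t multiplicity column from the counting algorithm [5] discussed earlier in the thread. A minimal Python sketch of that idea (hypothetical names, not the patch's code): each view row carries a multiplicity, so duplicates collapsed by projection/DISTINCT can be maintained from deltas alone, without rescanning the base table.

```python
# Hypothetical sketch of the counting algorithm [5]; not the patch's code.
# A DISTINCT view is stored as {row: multiplicity}, mirroring the hidden
# count_t / __ivm_count__ column, so deltas can be applied incrementally.
from collections import Counter

def apply_counting_delta(view, inserted_rows, deleted_rows):
    """Maintain a DISTINCT view stored as {row: multiplicity}."""
    for row in inserted_rows:
        view[row] += 1            # duplicate of an existing row: just bump the count
    for row in deleted_rows:
        view[row] -= 1
        if view[row] <= 0:        # multiplicity reached zero: row leaves the view
            del view[row]
    return view

# view = SELECT DISTINCT data FROM t; two base rows currently project to 'foo'
view = Counter({"foo": 2, "bar": 1})
apply_counting_delta(view, inserted_rows=["baz"], deleted_rows=["foo"])
# 'foo' survives with count 1; deleting it once more would remove it
```

The point of the count is the DELETE path: without it, deleting one of two base rows that project to the same view row would wrongly delete the view row.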
{
"msg_contents": "On Sat, 11 Jan 2020 09:27:58 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> LIMIT clause without ORDER BY should be prohibited when creating\n> incremental materialized views.\n> \n> In SQL, the result of a LIMIT clause without ORDER BY is undefined.\n> If the LIMIT clause is allowed when creating an incremental materialized\n> view, incorrect results will be obtained when the view is updated after\n> updating the source table.\n\nThank you for your advice. It's just as you said. \nThe LIMIT/OFFSET clause should be prohibited. We will add this to the next patch.\n\nBest Regards,\n Takuma Hoshiai\n\n> \n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ psql --version\n> psql (PostgreSQL) 13devel-ivm-3bf6953688153fa72dd48478a77e37cf3111a1ee\n> [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f limit-problem.sql\n> DROP TABLE IF EXISTS test CASCADE;\n> psql:limit-problem.sql:1: NOTICE: drop cascades to materialized view\n> test_imv\n> DROP TABLE\n> CREATE TABLE test (id int primary key, data text);\n> CREATE TABLE\n> INSERT INTO test VALUES (generate_series(1, 10), 'foo');\n> INSERT 0 10\n> CREATE INCREMENTAL MATERIALIZED VIEW test_imv AS SELECT * FROM test LIMIT 1;\n> SELECT 1\n> Materialized view \"public.test_imv\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Stats target | Description\n> ---------------+---------+-----------+----------+---------+----------+--------------+-------------\n> id | integer | | | | plain |\n> |\n> data | text | | | | extended |\n> |\n> __ivm_count__ | bigint | | | | plain |\n> |\n> View definition:\n> SELECT test.id,\n> test.data\n> FROM test\n> LIMIT 1;\n> Access method: heap\n> Incremental view maintenance: yes\n> \n> SELECT * FROM test LIMIT 1;\n> id | data\n> ----+------\n> 1 | foo\n> (1 row)\n> \n> TABLE test_imv;\n> id | data\n> ----+------\n> 1 | foo\n> (1 row)\n> \n> UPDATE test SET data = 'bar' WHERE id = 1;\n> UPDATE 1\n> SELECT * FROM test LIMIT 1;\n> id | data\n> ----+------\n> 2 | foo\n> (1 row)\n> 
\n> TABLE test_imv;\n> id | data\n> ----+------\n> 1 | bar\n> (1 row)\n> \n> DELETE FROM test WHERE id = 1;\n> DELETE 1\n> SELECT * FROM test LIMIT 1;\n> id | data\n> ----+------\n> 2 | foo\n> (1 row)\n> \n> TABLE test_imv;\n> id | data\n> ----+------\n> (0 rows)\n> ```\n> \n> ORDER BY clause is not allowed when executing CREATE INCREMENTAL\n> MATELIARIZED VIEW.\n> We propose not to allow LIMIT clauses as well.\n> \n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. 
Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. 
Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. 
In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n",
"msg_date": "Tue, 14 Jan 2020 15:37:45 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
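The "change propagation equations" described earlier in the thread (deltas like "R <- R + dR" pushed through the view definition) can be sketched for the simplest case, a selection-only view. This is an illustrative Python sketch under set semantics with invented names; the real algorithm works on bags and needs the multiplicity column for correctness under duplicates.

```python
# Hedged sketch of delta propagation for a selection-only view
# V = SELECT * FROM r WHERE pred: the view delta is the filtered base
# delta, dV = filter(pred, dR), applied as V <- V + dV and V <- V - dV.
# Set semantics only; not the patch's implementation.

def view_delta(pred, base_delta):
    """Propagate a base-table delta through a selection."""
    return {row for row in base_delta if pred(row)}

def maintain(view, pred, inserted, deleted):
    view |= view_delta(pred, inserted)   # V <- V + dV
    view -= view_delta(pred, deleted)    # V <- V - dV
    return view

pred = lambda row: row[1] >= 0.5         # e.g. WHERE data >= '1/2'
view = {(1, 2 / 3), (3, 1 / 2)}
maintain(view, pred, inserted=[(4, 2 / 3), (5, 2 / 5)], deleted=[(1, 2 / 3)])
# only rows satisfying the predicate ever touch the view
```

The payoff is that the defining query is never re-run: only the (usually small) delta is filtered, which is the whole performance argument for IVM over REFRESH.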
{
"msg_contents": "Aggregate operation of user-defined type cannot be specified\n(commit e150d964df7e3aeb768e4bae35d15764f8abd284)\n\nA SELECT statement using the MIN() and MAX() functions can be executed on a\nuser-defined type column that implements the aggregate functions MIN() and\nMAX().\nHowever, if the same SELECT statement is specified in the AS clause of\nCREATE INCREMENTAL MATERIALIZED VIEW, the following error will occur.\n\n```\nSELECT MIN(data) data_min, MAX(data) data_max FROM foo;\n data_min | data_max\n----------+----------\n 1/3 | 2/3\n(1 row)\n\nCREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\ndata_min FROM foo;\npsql:extension-agg.sql:14: ERROR: aggregate function min is not supported\nCREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\ndata_max FROM foo;\npsql:extension-agg.sql:15: ERROR: aggregate function max is not supported\n```\n\nIs a query including an aggregate operation on a user-defined type not supported by\nINCREMENTAL MATERIALIZED VIEW?\n\nAn execution example is shown below.\n\n```\n[ec2-user@ip-10-0-1-10 ivm]$ cat extension-agg.sql\n--\n-- pg_fraction: https://github.com/nuko-yokohama/pg_fraction\n--\nDROP EXTENSION IF EXISTS pg_fraction CASCADE;\nDROP TABLE IF EXISTS foo CASCADE;\n\nCREATE EXTENSION IF NOT EXISTS pg_fraction;\n\\dx\n\\dT+ fraction\n\nCREATE TABLE foo (id int, data fraction);\nINSERT INTO foo (id, data) VALUES (1,'2/3'),(2,'1/3'),(3,'1/2');\nSELECT MIN(data) data_min, MAX(data) data_max FROM foo;\nCREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\ndata_min FROM foo;\nCREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\ndata_max FROM foo;\n\nSELECT MIN(id) id_min, MAX(id) id_max FROM foo;\nCREATE INCREMENTAL MATERIALIZED VIEW foo_id_imv AS SELECT MIN(id) id_min,\nMAX(id) id_max FROM foo;\n```\n\nBest regards.\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> 
PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. 
This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. 
How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. 
In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Thu, 16 Jan 2020 12:59:11 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Error occurs when updating user-defined type columns.\n\nCreate an INCREMENTAL MATERIALIZED VIEW by specifying a query that includes\nuser-defined type columns.\nAfter the view is created, an error occurs when inserting into the view\nsource table (including the user-defined type column).\n```\nERROR: operator does not exist\n```\n\nAn execution example is shown below.\n\n```\n[ec2-user@ip-10-0-1-10 ivm]$ psql testdb -a -f extension-insert.sql\n--\n-- pg_fraction: https://github.com/nuko-yokohama/pg_fraction\n--\nDROP EXTENSION IF EXISTS pg_fraction CASCADE;\npsql:extension-insert.sql:4: NOTICE: drop cascades to column data of table\nfoo\nDROP EXTENSION\nDROP TABLE IF EXISTS foo CASCADE;\nDROP TABLE\nCREATE EXTENSION IF NOT EXISTS pg_fraction;\nCREATE EXTENSION\n\\dx\n List of installed extensions\n Name | Version | Schema | Description\n-------------+---------+------------+------------------------------\n pg_fraction | 1.0 | public | fraction data type\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(2 rows)\n\n\\dT+ fraction\n List of data types\n Schema | Name | Internal name | Size | Elements | Owner | Access\nprivileges | Description\n--------+----------+---------------+------+----------+----------+-------------------+-------------\n public | fraction | fraction | 16 | | postgres |\n |\n(1 row)\n\nCREATE TABLE foo (id int, data fraction);\nCREATE TABLE\nINSERT INTO foo (id, data) VALUES (1,'2/3'),(2,'1/3'),(3,'1/2');\nINSERT 0 3\nSELECT id, data FROM foo WHERE data >= '1/2';\n id | data\n----+------\n 1 | 2/3\n 3 | 1/2\n(2 rows)\n\nCREATE INCREMENTAL MATERIALIZED VIEW foo_imv AS SELECT id, data FROM foo\nWHERE data >= '1/2';\nSELECT 2\nTABLE foo_imv;\n id | data\n----+------\n 1 | 2/3\n 3 | 1/2\n(2 rows)\n\nINSERT INTO foo (id, data) VALUES (4,'2/3'),(5,'2/5'),(6,'3/6'); -- error\npsql:extension-insert.sql:17: ERROR: operator does not exist: fraction\npg_catalog.= fraction\nLINE 1: ...(mv.id IS NULL AND diff.id IS NULL)) AND 
(mv.data OPERATOR(p...\n ^\nHINT: No operator matches the given name and argument types. You might\nneed to add explicit type casts.\nQUERY: WITH updt AS (UPDATE public.foo_imv AS mv SET __ivm_count__ =\nmv.__ivm_count__ OPERATOR(pg_catalog.+) diff.__ivm_count__ FROM\npg_temp_3.pg_temp_73900 AS diff WHERE (mv.id OPERATOR(pg_catalog.=) diff.id\nOR (mv.id IS NULL AND diff.id IS NULL)) AND (mv.data OPERATOR(pg_catalog.=)\ndiff.data OR (mv.data IS NULL AND diff.data IS NULL)) RETURNING mv.id,\nmv.data) INSERT INTO public.foo_imv SELECT * FROM pg_temp_3.pg_temp_73900\nAS diff WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE (mv.id\nOPERATOR(pg_catalog.=) diff.id OR (mv.id IS NULL AND diff.id IS NULL)) AND\n(mv.data OPERATOR(pg_catalog.=) diff.data OR (mv.data IS NULL AND diff.data\nIS NULL)));\nTABLE foo;\n id | data\n----+------\n 1 | 2/3\n 2 | 1/3\n 3 | 1/2\n(3 rows)\n\nTABLE foo_imv;\n id | data\n----+------\n 1 | 2/3\n 3 | 1/2\n(2 rows)\n\nDROP MATERIALIZED VIEW foo_imv;\nDROP MATERIALIZED VIEW\nINSERT INTO foo (id, data) VALUES (4,'2/3'),(5,'2/5'),(6,'3/6');\nINSERT 0 3\nTABLE foo;\n id | data\n----+------\n 1 | 2/3\n 2 | 1/3\n 3 | 1/2\n 4 | 2/3\n 5 | 2/5\n 6 | 1/2\n(6 rows)\n\n```\n\nBest regards.\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> identify\n> the affected rows when a base table is updated. 
This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. 
How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. 
In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Thu, 16 Jan 2020 18:50:40 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 16 Jan 2020 12:59:11 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Aggregate operation of user-defined type cannot be specified\n> (commit e150d964df7e3aeb768e4bae35d15764f8abd284)\n> \n> A SELECT statement using the MIN() and MAX() functions can be executed on a\n> user-defined type column that implements the aggregate functions MIN () and\n> MAX ().\n> However, if the same SELECT statement is specified in the AS clause of\n> CREATE INCREMENTAL MATERIALIZED VIEW, the following error will occur.\n> \n> ```\n> SELECT MIN(data) data_min, MAX(data) data_max FROM foo;\n> data_min | data_max\n> ----------+----------\n> 1/3 | 2/3\n> (1 row)\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\n> data_min FROM foo;\n> psql:extension-agg.sql:14: ERROR: aggregate function min is not supported\n> CREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\n> data_max FROM foo;\n> psql:extension-agg.sql:15: ERROR: aggregate function max is not supported\n> ```\n> \n> Does query including user-defined type aggregate operation not supported by\n> INCREMENTAL MATERIALIZED VIEW?\n\nThe current implementation supports only built-in aggregate functions, so\nuser-defined aggregates are not supported, although it is allowed before.\nThis is because we can not know how user-defined aggregates behave and if\nit can work safely with IVM. Min/Max on your fraction type may work well, \nbut it is possible that some user-defined aggregate functions named min\nor max behave in totally different way than we expected.\n\nIn future, maybe it is possible support user-defined aggregates are supported\nby extending pg_aggregate and adding support functions for IVM, but there is\nnot still a concrete plan for now. 
\n\nBTW, the following error message doesn't look good because built-in min is\nsupported, so I will improve it.\n\n ERROR: aggregate function min is not supported\n\nRegards,\nYugo Nagata\n\n> \n> An execution example is shown below.\n> \n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ cat extension-agg.sql\n> --\n> -- pg_fraction: https://github.com/nuko-yokohama/pg_fraction\n> --\n> DROP EXTENSION IF EXISTS pg_fraction CASCADE;\n> DROP TABLE IF EXISTS foo CASCADE;\n> \n> CREATE EXTENSION IF NOT EXISTS pg_fraction;\n> \\dx\n> \\dT+ fraction\n> \n> CREATE TABLE foo (id int, data fraction);\n> INSERT INTO foo (id, data) VALUES (1,'2/3'),(2,'1/3'),(3,'1/2');\n> SELECT MIN(data) data_min, MAX(data) data_max FROM foo;\n> CREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\n> data_min FROM foo;\n> CREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\n> data_max FROM foo;\n> \n> SELECT MIN(id) id_min, MAX(id) id_max FROM foo;\n> CREATE INCREMENTAL MATERIALIZED VIEW foo_id_imv AS SELECT MIN(id) id_min,\n> MAX(id) id_max FROM foo;\n> ```\n> \n> Best regards.\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. 
Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. 
How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. 
Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:11:53 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 16 Jan 2020 18:50:40 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Error occurs when updating user-defined type columns.\n> \n> Create an INCREMENTAL MATERIALIZED VIEW by specifying a query that includes\n> user-defined type columns.\n> After the view is created, an error occurs when inserting into the view\n> source table (including the user-defined type column).\n> ```\n> ERROR: operator does not exist\n\nThank you for your reporting. I think this error occurs because \npg_catalog.= is used also for user-defined types. I will fix this.\n\nRegards,\nYugo Nagata\n\n> ```\n> \n> An execution example is shown below.\n> \n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -a -f extension-insert.sql\n> --\n> -- pg_fraction: https://github.com/nuko-yokohama/pg_fraction\n> --\n> DROP EXTENSION IF EXISTS pg_fraction CASCADE;\n> psql:extension-insert.sql:4: NOTICE: drop cascades to column data of table\n> foo\n> DROP EXTENSION\n> DROP TABLE IF EXISTS foo CASCADE;\n> DROP TABLE\n> CREATE EXTENSION IF NOT EXISTS pg_fraction;\n> CREATE EXTENSION\n> \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> -------------+---------+------------+------------------------------\n> pg_fraction | 1.0 | public | fraction data type\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (2 rows)\n> \n> \\dT+ fraction\n> List of data types\n> Schema | Name | Internal name | Size | Elements | Owner | Access\n> privileges | Description\n> --------+----------+---------------+------+----------+----------+-------------------+-------------\n> public | fraction | fraction | 16 | | postgres |\n> |\n> (1 row)\n> \n> CREATE TABLE foo (id int, data fraction);\n> CREATE TABLE\n> INSERT INTO foo (id, data) VALUES (1,'2/3'),(2,'1/3'),(3,'1/2');\n> INSERT 0 3\n> SELECT id, data FROM foo WHERE data >= '1/2';\n> id | data\n> ----+------\n> 1 | 2/3\n> 3 | 1/2\n> (2 rows)\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW foo_imv AS SELECT id, 
data FROM foo\n> WHERE data >= '1/2';\n> SELECT 2\n> TABLE foo_imv;\n> id | data\n> ----+------\n> 1 | 2/3\n> 3 | 1/2\n> (2 rows)\n> \n> INSERT INTO foo (id, data) VALUES (4,'2/3'),(5,'2/5'),(6,'3/6'); -- error\n> psql:extension-insert.sql:17: ERROR: operator does not exist: fraction\n> pg_catalog.= fraction\n> LINE 1: ...(mv.id IS NULL AND diff.id IS NULL)) AND (mv.data OPERATOR(p...\n> ^\n> HINT: No operator matches the given name and argument types. You might\n> need to add explicit type casts.\n> QUERY: WITH updt AS (UPDATE public.foo_imv AS mv SET __ivm_count__ =\n> mv.__ivm_count__ OPERATOR(pg_catalog.+) diff.__ivm_count__ FROM\n> pg_temp_3.pg_temp_73900 AS diff WHERE (mv.id OPERATOR(pg_catalog.=) diff.id\n> OR (mv.id IS NULL AND diff.id IS NULL)) AND (mv.data OPERATOR(pg_catalog.=)\n> diff.data OR (mv.data IS NULL AND diff.data IS NULL)) RETURNING mv.id,\n> mv.data) INSERT INTO public.foo_imv SELECT * FROM pg_temp_3.pg_temp_73900\n> AS diff WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE (mv.id\n> OPERATOR(pg_catalog.=) diff.id OR (mv.id IS NULL AND diff.id IS NULL)) AND\n> (mv.data OPERATOR(pg_catalog.=) diff.data OR (mv.data IS NULL AND diff.data\n> IS NULL)));\n> TABLE foo;\n> id | data\n> ----+------\n> 1 | 2/3\n> 2 | 1/3\n> 3 | 1/2\n> (3 rows)\n> \n> TABLE foo_imv;\n> id | data\n> ----+------\n> 1 | 2/3\n> 3 | 1/2\n> (2 rows)\n> \n> DROP MATERIALIZED VIEW foo_imv;\n> DROP MATERIALIZED VIEW\n> INSERT INTO foo (id, data) VALUES (4,'2/3'),(5,'2/5'),(6,'3/6');\n> INSERT 0 3\n> TABLE foo;\n> id | data\n> ----+------\n> 1 | 2/3\n> 2 | 1/3\n> 3 | 1/2\n> 4 | 2/3\n> 5 | 2/5\n> 6 | 1/2\n> (6 rows)\n> \n> ```\n> \n> Best regards.\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > 
recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. 
This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. 
How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. 
In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:21:18 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello,\n\nIt seems that patch v11 doesn't apply any more.\nProblem with \"scanRTEForColumn\" maybe because of change:\n\nhttps://git.postgresql.org/pg/commitdiff/b541e9accb28c90656388a3f827ca3a68dd2a308\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 17 Jan 2020 14:10:32 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi.\nI understand.\nEven if the function name is min, there is a possibility that it is not an\naggregation operation for finding the minimum value, so it is restricted.\nI understood aggregation of user-defined types is a constraint.\n\nAlso, I agree with the error message improvements.\n\n2020年1月17日(金) 17:12 Yugo NAGATA <nagata@sraoss.co.jp>:\n\n> On Thu, 16 Jan 2020 12:59:11 +0900\n> nuko yokohama <nuko.yokohama@gmail.com> wrote:\n>\n> > Aggregate operation of user-defined type cannot be specified\n> > (commit e150d964df7e3aeb768e4bae35d15764f8abd284)\n> >\n> > A SELECT statement using the MIN() and MAX() functions can be executed\n> on a\n> > user-defined type column that implements the aggregate functions MIN ()\n> and\n> > MAX ().\n> > However, if the same SELECT statement is specified in the AS clause of\n> > CREATE INCREMENTAL MATERIALIZED VIEW, the following error will occur.\n> >\n> > ```\n> > SELECT MIN(data) data_min, MAX(data) data_max FROM foo;\n> > data_min | data_max\n> > ----------+----------\n> > 1/3 | 2/3\n> > (1 row)\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\n> > data_min FROM foo;\n> > psql:extension-agg.sql:14: ERROR: aggregate function min is not\n> supported\n> > CREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\n> > data_max FROM foo;\n> > psql:extension-agg.sql:15: ERROR: aggregate function max is not\n> supported\n> > ```\n> >\n> > Does query including user-defined type aggregate operation not supported\n> by\n> > INCREMENTAL MATERIALIZED VIEW?\n>\n> The current implementation supports only built-in aggregate functions, so\n> user-defined aggregates are not supported, although it is allowed before.\n> This is because we can not know how user-defined aggregates behave and if\n> it can work safely with IVM. 
Min/Max on your fraction type may work well,\n> but it is possible that some user-defined aggregate functions named min\n> or max behave in totally different way than we expected.\n>\n> In future, maybe it is possible support user-defined aggregates are\n> supported\n> by extending pg_aggregate and adding support functions for IVM, but there\n> is\n> not still a concrete plan for now.\n>\n> BTW, the following error message doesn't look good because built-in min is\n> supported, so I will improve it.\n>\n> ERROR: aggregate function min is not supported\n>\n> Regards,\n> Yugo Nagata\n>\n> >\n> > An execution example is shown below.\n> >\n> > ```\n> > [ec2-user@ip-10-0-1-10 ivm]$ cat extension-agg.sql\n> > --\n> > -- pg_fraction: https://github.com/nuko-yokohama/pg_fraction\n> > --\n> > DROP EXTENSION IF EXISTS pg_fraction CASCADE;\n> > DROP TABLE IF EXISTS foo CASCADE;\n> >\n> > CREATE EXTENSION IF NOT EXISTS pg_fraction;\n> > \\dx\n> > \\dT+ fraction\n> >\n> > CREATE TABLE foo (id int, data fraction);\n> > INSERT INTO foo (id, data) VALUES (1,'2/3'),(2,'1/3'),(3,'1/2');\n> > SELECT MIN(data) data_min, MAX(data) data_max FROM foo;\n> > CREATE INCREMENTAL MATERIALIZED VIEW foo_min_imv AS SELECT MIN(data)\n> > data_min FROM foo;\n> > CREATE INCREMENTAL MATERIALIZED VIEW foo_max_imv AS SELECT MAX(data)\n> > data_max FROM foo;\n> >\n> > SELECT MIN(id) id_min, MAX(id) id_max FROM foo;\n> > CREATE INCREMENTAL MATERIALIZED VIEW foo_id_imv AS SELECT MIN(id) id_min,\n> > MAX(id) id_max FROM foo;\n> > ```\n> >\n> > Best regards.\n> >\n> > 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> >\n> > > Hi,\n> > >\n> > > I would like to implement Incremental View Maintenance (IVM) on\n> > > PostgreSQL.\n> > > IVM is a technique to maintain materialized views which computes and\n> > > applies\n> > > only the incremental changes to the materialized views rather than\n> > > recomputate the contents as the current REFRESH command does.\n> > >\n> > > I had a presentation on our 
PoC implementation of IVM at PGConf.eu 2018\n> > > [1].\n> > > Our implementation uses row OIDs to compute deltas for materialized\n> > > views.\n> > > The basic idea is that if we have information about which rows in base\n> > > tables\n> > > are contributing to generate a certain row in a matview then we can\n> > > identify\n> > > the affected rows when a base table is updated. This is based on an\n> idea of\n> > > Dr. Masunaga [2] who is a member of our group and inspired from\n> ID-based\n> > > approach[3].\n> > >\n> > > In our implementation, the mapping of the row OIDs of the materialized\n> view\n> > > and the base tables are stored in \"OID map\". When a base relation is\n> > > modified,\n> > > AFTER trigger is executed and the delta is recorded in delta tables\n> using\n> > > the transition table feature. The accual udpate of the matview is\n> triggerd\n> > > by REFRESH command with INCREMENTALLY option.\n> > >\n> > > However, we realize problems of our implementation. First, WITH OIDS\n> will\n> > > be removed since PG12, so OIDs are no longer available. Besides this,\n> it\n> > > would\n> > > be hard to implement this since it needs many changes of executor\n> nodes to\n> > > collect base tables's OIDs during execuing a query. Also, the cost of\n> > > maintaining\n> > > OID map would be high.\n> > >\n> > > For these reasons, we started to think to implement IVM without\n> relying on\n> > > OIDs\n> > > and made a bit more surveys.\n> > >\n> > > We also looked at Kevin Grittner's discussion [4] on incremental\n> matview\n> > > maintenance. In this discussion, Kevin proposed to use counting\n> algorithm\n> > > [5]\n> > > to handle projection views (using DISTNICT) properly. 
This algorithm\n> need\n> > > an\n> > > additional system column, count_t, in materialized views and delta\n> tables\n> > > of\n> > > base tables.\n> > >\n> > > However, the discussion about IVM is now stoped, so we would like to\n> > > restart and\n> > > progress this.\n> > >\n> > >\n> > > Through our PoC inplementation and surveys, I think we need to think at\n> > > least\n> > > the followings for implementing IVM.\n> > >\n> > > 1. How to extract changes on base tables\n> > >\n> > > I think there would be at least two approaches for it.\n> > >\n> > > - Using transition table in AFTER triggers\n> > > - Extracting changes from WAL using logical decoding\n> > >\n> > > In our PoC implementation, we used AFTER trigger and transition tables,\n> > > but using\n> > > logical decoding might be better from the point of performance of base\n> > > table\n> > > modification.\n> > >\n> > > If we can represent a change of UPDATE on a base table as query-like\n> > > rather than\n> > > OLD and NEW, it may be possible to update the materialized view\n> directly\n> > > instead\n> > > of performing delete & insert.\n> > >\n> > >\n> > > 2. How to compute the delta to be applied to materialized views\n> > >\n> > > Essentially, IVM is based on relational algebra. Theorically, changes\n> on\n> > > base\n> > > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > > delta on\n> > > the materialized view is computed using base table deltas based on\n> \"change\n> > > propagation equations\". For implementation, we have to derive the\n> > > equation from\n> > > the view definition query (Query tree, or Plan tree?) and describe\n> this as\n> > > SQL\n> > > query to compulte delta to be applied to the materialized view.\n> > >\n> > > There could be several operations for view definition: selection,\n> > > projection,\n> > > join, aggregation, union, difference, intersection, etc. 
If we can\n> > > prepare a\n> > > module for each operation, it makes IVM extensable, so we can start a\n> > > simple\n> > > view definition, and then support more complex views.\n> > >\n> > >\n> > > 3. How to identify rows to be modifed in materialized views\n> > >\n> > > When applying the delta to the materialized view, we have to identify\n> > > which row\n> > > in the matview is corresponding to a row in the delta. A naive method\n> is\n> > > matching\n> > > by using all columns in a tuple, but clearly this is unefficient. If\n> > > thematerialized\n> > > view has unique index, we can use this. Maybe, we have to force\n> > > materialized views\n> > > to have all primary key colums in their base tables. In our PoC\n> > > implementation, we\n> > > used OID to identify rows, but this will be no longer available as said\n> > > above.\n> > >\n> > >\n> > > 4. When to maintain materialized views\n> > >\n> > > There are two candidates of the timing of maintenance, immediate\n> (eager)\n> > > or deferred.\n> > >\n> > > In eager maintenance, the materialized view is updated in the same\n> > > transaction\n> > > where the base table is updated. In deferred maintenance, this is done\n> > > after the\n> > > transaction is commited, for example, when view is accessed, as a\n> response\n> > > to user\n> > > request, etc.\n> > >\n> > > In the previous discussion[4], it is planned to start from \"eager\"\n> > > approach. 
In our PoC\n> > > implementaion, we used the other aproach, that is, using REFRESH\n> command\n> > > to perform IVM.\n> > > I am not sure which is better as a start point, but I begin to think\n> that\n> > > the eager\n> > > approach may be more simple since we don't have to maintain base table\n> > > changes in other\n> > > past transactions.\n> > >\n> > > In the eager maintenance approache, we have to consider a race\n> condition\n> > > where two\n> > > different transactions change base tables simultaneously as discussed\n> in\n> > > [4].\n> > >\n> > >\n> > > [1]\n> > >\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > > [2]\n> > >\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > > (Japanese only)\n> > > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > > [4]\n> > >\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > > [5] https://dl.acm.org/citation.cfm?id=170066\n> > >\n> > > Regards,\n> > > --\n> > > Yugo Nagata <nagata@sraoss.co.jp>\n> > >\n> > >\n>\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>",
"msg_date": "Sat, 18 Jan 2020 14:39:35 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 17 Jan 2020 14:10:32 -0700 (MST)\nlegrand legrand <legrand_legrand@hotmail.com> wrote:\n\n> Hello,\n> \n> It seems that patch v11 doesn't apply any more.\n> Problem with \"scanRTEForColumn\" maybe because of change:\n\nThank you for your reporting! We will fix this in the next update. \n\nRegards,\nYugo Nagata\n\n> \n> https://git.postgresql.org/pg/commitdiff/b541e9accb28c90656388a3f827ca3a68dd2a308\n> \n> Regards\n> PAscal\n> \n> \n> \n> --\n> Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 20 Jan 2020 16:57:58 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 20 Jan 2020 16:57:58 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Fri, 17 Jan 2020 14:10:32 -0700 (MST)\n> legrand legrand <legrand_legrand@hotmail.com> wrote:\n> \n> > Hello,\n> > \n> > It seems that patch v11 doesn't apply any more.\n> > Problem with \"scanRTEForColumn\" maybe because of change:\n> \n> Thank you for your reporting! We will fix this in the next update. \n\nAlthough I have been working conflict fix and merge latest master, it\ntakes a little longer, because it has large impact than we thought. \n\nPlease wait a little more.\n\nRegards\nTakuma Hoshiai\n\n\n> Regards,\n> Yugo Nagata\n> \n> > \n> > https://git.postgresql.org/pg/commitdiff/b541e9accb28c90656388a3f827ca3a68dd2a308\n> > \n> > Regards\n> > PAscal\n> > \n> > \n> > \n> > --\n> > Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n> \n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n",
"msg_date": "Mon, 27 Jan 2020 09:19:05 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, \n\nAttached is the latest patch (v12) to add support for Incremental Materialized View Maintenance (IVM).\nIt is possible to apply to current latest master branch.\n\nDifferences from the previous patch (v11) include:\n* support executing REFRESH MATERIALIZED VIEW command with IVM.\n* support unscannable state by WITH NO DATA option.\n* add a check for LIMIT/OFFSET at creating an IMMV\n\n If REFRESH is executed for IMMV (incremental maintainable materialized view), its contents is re-calculated as same as usual materialized views (full REFRESH). Although IMMV is basically keeping up-to-date data, rounding errors can be accumulated in aggregated value in some cases, for example, if the view contains sum/avg on float type columns. Running REFRESH command on IMMV will resolve this. Also, WITH NO DATA option allows to make IMMV unscannable. At that time, IVM triggers are dropped from IMMV because these become unneeded and useless. \n\nAlso, we added new deptype option 'm' in pg_depend view for checking a trigger is for IVM. Please tell me, if add new deptype option is unacceptable. It is also possible to perform the check by referencing pg_depend and pg_trigger, pg_proc view instead of adding a new deptype.\nWe update IVM restrictions. LIMIT/OFFSET clause is not supported with iVM because it is not suitable for incremental changes to the materialized view. 
\nThis issue is reported by nuko-san.\nhttps://www.postgresql.org/message-id/CAF3Gu1ZK-s9GQh=70n8+21rBL8+fKW4tV3Ce-xuFXMsNFPO+zQ@mail.gmail.com\n\nBest Regards,\nTakuma Hoshiai\n\nOn Mon, 27 Jan 2020 09:19:05 +0900\nTakuma Hoshiai <hoshiai@sraoss.co.jp> wrote:\n\n> On Mon, 20 Jan 2020 16:57:58 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > On Fri, 17 Jan 2020 14:10:32 -0700 (MST)\n> > legrand legrand <legrand_legrand@hotmail.com> wrote:\n> > \n> > > Hello,\n> > > \n> > > It seems that patch v11 doesn't apply any more.\n> > > Problem with \"scanRTEForColumn\" maybe because of change:\n> > \n> > Thank you for your reporting! We will fix this in the next update. \n> \n> Although I have been working conflict fix and merge latest master, it\n> takes a little longer, because it has large impact than we thought. \n> \n> Please wait a little more.\n> \n> Regards\n> Takuma Hoshiai\n> \n> \n> > Regards,\n> > Yugo Nagata\n> > \n> > > \n> > > https://git.postgresql.org/pg/commitdiff/b541e9accb28c90656388a3f827ca3a68dd2a308\n> > > \n> > > Regards\n> > > PAscal\n> > > \n> > > \n> > > \n> > > --\n> > > Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n> > > \n> > > \n> > \n> > \n> > -- \n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> > \n> > \n> > \n> \n> \n> -- \n> Takuma Hoshiai <hoshiai@sraoss.co.jp>\n> \n> \n> \n> \n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_date": "Tue, 4 Feb 2020 10:58:02 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\"ROW LEVEL SECURITY\" and INCREMENTAL MATERIALIZED VIEW.\n\nHi.\n\nIf ROW LEVEL SECURITY is set for the source table after creating the\nINCREMENTAL MATELIALIZED VIEW, the search results by that are not reflected.\nAfter setting ROW LEVEL SECURITY (similar to normal MATERIALIZED VIEW), you\nneed to execute REFRESH MATERILALIZED VIEW and reflect the result.\n(Not limited to this, but in general cases where search results change by\nmeans other than DML)\n\nI propose to add this note to the document (rules.sgml).\n\nexecute log.\n\n```\n[ec2-user@ip-10-0-1-10 rls]$ psql testdb -e -f rls.sql\nCREATE USER user_a;\nCREATE ROLE\nCREATE TABLE test (id int, data text);\nCREATE TABLE\nGRANT ALL ON TABLE test TO user_a;\nGRANT\nGRANT ALL ON SCHEMA public TO user_a;\nGRANT\nSET ROLE user_a;\nSET\nINSERT INTO test VALUES (1,'A'),(2,'B'),(3,'C');\nINSERT 0 3\nSELECT * FROM test;\n id | data\n----+------\n 1 | A\n 2 | B\n 3 | C\n(3 rows)\n\nCREATE VIEW test_v AS SELECT * FROM test;\nCREATE VIEW\nCREATE MATERIALIZED VIEW test_mv AS SELECT * FROM test;\nSELECT 3\nCREATE INCREMENTAL MATERIALIZED VIEW test_imv AS SELECT * FROM test;\nSELECT 3\nSELECT * FROM test_v;\n id | data\n----+------\n 1 | A\n 2 | B\n 3 | C\n(3 rows)\n\nSELECT * FROM test_mv;\n id | data\n----+------\n 1 | A\n 2 | B\n 3 | C\n(3 rows)\n\nSELECT * FROM test_imv;\n id | data\n----+------\n 3 | C\n 1 | A\n 2 | B\n(3 rows)\n\nRESET ROLE;\nRESET\nCREATE POLICY test_AAA ON test FOR SELECT TO user_a USING (data = 'A');\nCREATE POLICY\nALTER TABLE test ENABLE ROW LEVEL SECURITY ;\nALTER TABLE\nSET ROLE user_a;\nSET\nSELECT * FROM test_v;\n id | data\n----+------\n 1 | A\n(1 row)\n\nSELECT * FROM test_mv;\n id | data\n----+------\n 1 | A\n 2 | B\n 3 | C\n(3 rows)\n\nSELECT * FROM test_imv;\n id | data\n----+------\n 3 | C\n 1 | A\n 2 | B\n(3 rows)\n\nREFRESH MATERIALIZED VIEW test_mv;\nREFRESH MATERIALIZED VIEW\nREFRESH MATERIALIZED VIEW test_imv;\nREFRESH MATERIALIZED VIEW\nSELECT * FROM test_mv;\n id 
| data\n----+------\n 1 | A\n(1 row)\n\nSELECT * FROM test_imv;\n id | data\n----+------\n 1 | A\n(1 row)\n\nRESET ROLE;\nRESET\nREVOKE ALL ON TABLE test FROM user_a;\nREVOKE\nREVOKE ALL ON TABLE test_v FROM user_a;\nREVOKE\nREVOKE ALL ON TABLE test_mv FROM user_a;\nREVOKE\nREVOKE ALL ON TABLE test_imv FROM user_a;\nREVOKE\nREVOKE ALL ON SCHEMA public FROM user_a;\nREVOKE\nDROP TABLE test CASCADE;\npsql:rls.sql:40: NOTICE: drop cascades to 3 other objects\nDETAIL: drop cascades to view test_v\ndrop cascades to materialized view test_mv\ndrop cascades to materialized view test_imv\nDROP TABLE\nDROP USER user_a;\nDROP ROLE\n[ec2-user@ip-10-0-1-10 rls]$\n\n```\n\nRegard.\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. 
First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". 
For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. 
In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Tue, 4 Feb 2020 18:40:45 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 4 Feb 2020 18:40:45 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> \"ROW LEVEL SECURITY\" and INCREMENTAL MATERIALIZED VIEW.\n> \n> Hi.\n> \n> If ROW LEVEL SECURITY is set for the source table after creating the\n> INCREMENTAL MATELIALIZED VIEW, the search results by that are not reflected.\n> After setting ROW LEVEL SECURITY (similar to normal MATERIALIZED VIEW), you\n> need to execute REFRESH MATERILALIZED VIEW and reflect the result.\n> (Not limited to this, but in general cases where search results change by\n> means other than DML)\n> \n> I propose to add this note to the document (rules.sgml).\n\nThank you for your suggestion! We'll add some description\nabout this to the documentation.\n\nRegards,\nYugo Nagata\n\n> \n> execute log.\n> \n> ```\n> [ec2-user@ip-10-0-1-10 rls]$ psql testdb -e -f rls.sql\n> CREATE USER user_a;\n> CREATE ROLE\n> CREATE TABLE test (id int, data text);\n> CREATE TABLE\n> GRANT ALL ON TABLE test TO user_a;\n> GRANT\n> GRANT ALL ON SCHEMA public TO user_a;\n> GRANT\n> SET ROLE user_a;\n> SET\n> INSERT INTO test VALUES (1,'A'),(2,'B'),(3,'C');\n> INSERT 0 3\n> SELECT * FROM test;\n> id | data\n> ----+------\n> 1 | A\n> 2 | B\n> 3 | C\n> (3 rows)\n> \n> CREATE VIEW test_v AS SELECT * FROM test;\n> CREATE VIEW\n> CREATE MATERIALIZED VIEW test_mv AS SELECT * FROM test;\n> SELECT 3\n> CREATE INCREMENTAL MATERIALIZED VIEW test_imv AS SELECT * FROM test;\n> SELECT 3\n> SELECT * FROM test_v;\n> id | data\n> ----+------\n> 1 | A\n> 2 | B\n> 3 | C\n> (3 rows)\n> \n> SELECT * FROM test_mv;\n> id | data\n> ----+------\n> 1 | A\n> 2 | B\n> 3 | C\n> (3 rows)\n> \n> SELECT * FROM test_imv;\n> id | data\n> ----+------\n> 3 | C\n> 1 | A\n> 2 | B\n> (3 rows)\n> \n> RESET ROLE;\n> RESET\n> CREATE POLICY test_AAA ON test FOR SELECT TO user_a USING (data = 'A');\n> CREATE POLICY\n> ALTER TABLE test ENABLE ROW LEVEL SECURITY ;\n> ALTER TABLE\n> SET ROLE user_a;\n> SET\n> SELECT * FROM test_v;\n> id | data\n> 
----+------\n> 1 | A\n> (1 row)\n> \n> SELECT * FROM test_mv;\n> id | data\n> ----+------\n> 1 | A\n> 2 | B\n> 3 | C\n> (3 rows)\n> \n> SELECT * FROM test_imv;\n> id | data\n> ----+------\n> 3 | C\n> 1 | A\n> 2 | B\n> (3 rows)\n> \n> REFRESH MATERIALIZED VIEW test_mv;\n> REFRESH MATERIALIZED VIEW\n> REFRESH MATERIALIZED VIEW test_imv;\n> REFRESH MATERIALIZED VIEW\n> SELECT * FROM test_mv;\n> id | data\n> ----+------\n> 1 | A\n> (1 row)\n> \n> SELECT * FROM test_imv;\n> id | data\n> ----+------\n> 1 | A\n> (1 row)\n> \n> RESET ROLE;\n> RESET\n> REVOKE ALL ON TABLE test FROM user_a;\n> REVOKE\n> REVOKE ALL ON TABLE test_v FROM user_a;\n> REVOKE\n> REVOKE ALL ON TABLE test_mv FROM user_a;\n> REVOKE\n> REVOKE ALL ON TABLE test_imv FROM user_a;\n> REVOKE\n> REVOKE ALL ON SCHEMA public FROM user_a;\n> REVOKE\n> DROP TABLE test CASCADE;\n> psql:rls.sql:40: NOTICE: drop cascades to 3 other objects\n> DETAIL: drop cascades to view test_v\n> drop cascades to materialized view test_mv\n> drop cascades to materialized view test_imv\n> DROP TABLE\n> DROP USER user_a;\n> DROP ROLE\n> [ec2-user@ip-10-0-1-10 rls]$\n> \n> ```\n> \n> Regard.\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. 
Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. 
How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. 
Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 5 Feb 2020 18:52:10 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi.\n\nUNION query problem.(server crash)\n\nWhen creating an INCREMENTAL MATERIALIZED VIEW,\nthe server process crashes if you specify a query with a UNION.\n\n(commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n\nexecute log.\n\n```\n[ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\nDROP TABLE IF EXISTS table_x CASCADE;\npsql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\nDROP TABLE\nDROP TABLE IF EXISTS table_y CASCADE;\nDROP TABLE\nCREATE TABLE table_x (id int, data numeric);\nCREATE TABLE\nCREATE TABLE table_y (id int, data numeric);\nCREATE TABLE\nINSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\nINSERT 0 3\nINSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\nINSERT 0 3\nSELECT * FROM table_x;\n id | data\n----+--------------------\n 1 | 0.950724735058774\n 2 | 0.0222670808201144\n 3 | 0.391258547114841\n(3 rows)\n\nSELECT * FROM table_y;\n id | data\n----+--------------------\n 1 | 0.991717347778337\n 2 | 0.0528458947672874\n 3 | 0.965044982911163\n(3 rows)\n\nCREATE VIEW xy_union_v AS\nSELECT 'table_x' AS name, * FROM table_x\nUNION\nSELECT 'table_y' AS name, * FROM table_y\n;\nCREATE VIEW\nTABLE xy_union_v;\n name | id | data\n---------+----+--------------------\n table_y | 2 | 0.0528458947672874\n table_x | 2 | 0.0222670808201144\n table_y | 3 | 0.965044982911163\n table_x | 1 | 0.950724735058774\n table_x | 3 | 0.391258547114841\n table_y | 1 | 0.991717347778337\n(6 rows)\n\nCREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\nSELECT 'table_x' AS name, * FROM table_x\nUNION\nSELECT 'table_y' AS name, * FROM table_y\n;\npsql:union_query_crash.sql:28: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\npsql:union_query_crash.sql:28: fatal: connection to server was lost\n```\n\n2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n\n> Hi,\n>\n> I would like to implement Incremental View Maintenance (IVM) on\n> PostgreSQL.\n> IVM is a technique to maintain materialized views which computes 
and\n> applies\n> only the incremental changes to the materialized views rather than\n> recomputate the contents as the current REFRESH command does.\n>\n> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> [1].\n> Our implementation uses row OIDs to compute deltas for materialized\n> views.\n> The basic idea is that if we have information about which rows in base\n> tables\n> are contributing to generate a certain row in a matview then we can\n> identify\n> the affected rows when a base table is updated. This is based on an idea of\n> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> approach[3].\n>\n> In our implementation, the mapping of the row OIDs of the materialized view\n> and the base tables are stored in \"OID map\". When a base relation is\n> modified,\n> AFTER trigger is executed and the delta is recorded in delta tables using\n> the transition table feature. The accual udpate of the matview is triggerd\n> by REFRESH command with INCREMENTALLY option.\n>\n> However, we realize problems of our implementation. First, WITH OIDS will\n> be removed since PG12, so OIDs are no longer available. Besides this, it\n> would\n> be hard to implement this since it needs many changes of executor nodes to\n> collect base tables's OIDs during execuing a query. Also, the cost of\n> maintaining\n> OID map would be high.\n>\n> For these reasons, we started to think to implement IVM without relying on\n> OIDs\n> and made a bit more surveys.\n>\n> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> maintenance. In this discussion, Kevin proposed to use counting algorithm\n> [5]\n> to handle projection views (using DISTNICT) properly. 
This algorithm need\n> an\n> additional system column, count_t, in materialized views and delta tables\n> of\n> base tables.\n>\n> However, the discussion about IVM is now stoped, so we would like to\n> restart and\n> progress this.\n>\n>\n> Through our PoC inplementation and surveys, I think we need to think at\n> least\n> the followings for implementing IVM.\n>\n> 1. How to extract changes on base tables\n>\n> I think there would be at least two approaches for it.\n>\n> - Using transition table in AFTER triggers\n> - Extracting changes from WAL using logical decoding\n>\n> In our PoC implementation, we used AFTER trigger and transition tables,\n> but using\n> logical decoding might be better from the point of performance of base\n> table\n> modification.\n>\n> If we can represent a change of UPDATE on a base table as query-like\n> rather than\n> OLD and NEW, it may be possible to update the materialized view directly\n> instead\n> of performing delete & insert.\n>\n>\n> 2. How to compute the delta to be applied to materialized views\n>\n> Essentially, IVM is based on relational algebra. Theorically, changes on\n> base\n> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> delta on\n> the materialized view is computed using base table deltas based on \"change\n> propagation equations\". For implementation, we have to derive the\n> equation from\n> the view definition query (Query tree, or Plan tree?) and describe this as\n> SQL\n> query to compulte delta to be applied to the materialized view.\n>\n> There could be several operations for view definition: selection,\n> projection,\n> join, aggregation, union, difference, intersection, etc. If we can\n> prepare a\n> module for each operation, it makes IVM extensable, so we can start a\n> simple\n> view definition, and then support more complex views.\n>\n>\n> 3. 
How to identify rows to be modifed in materialized views\n>\n> When applying the delta to the materialized view, we have to identify\n> which row\n> in the matview is corresponding to a row in the delta. A naive method is\n> matching\n> by using all columns in a tuple, but clearly this is unefficient. If\n> thematerialized\n> view has unique index, we can use this. Maybe, we have to force\n> materialized views\n> to have all primary key colums in their base tables. In our PoC\n> implementation, we\n> used OID to identify rows, but this will be no longer available as said\n> above.\n>\n>\n> 4. When to maintain materialized views\n>\n> There are two candidates of the timing of maintenance, immediate (eager)\n> or deferred.\n>\n> In eager maintenance, the materialized view is updated in the same\n> transaction\n> where the base table is updated. In deferred maintenance, this is done\n> after the\n> transaction is commited, for example, when view is accessed, as a response\n> to user\n> request, etc.\n>\n> In the previous discussion[4], it is planned to start from \"eager\"\n> approach. 
In our PoC\n> implementaion, we used the other aproach, that is, using REFRESH command\n> to perform IVM.\n> I am not sure which is better as a start point, but I begin to think that\n> the eager\n> approach may be more simple since we don't have to maintain base table\n> changes in other\n> past transactions.\n>\n> In the eager maintenance approache, we have to consider a race condition\n> where two\n> different transactions change base tables simultaneously as discussed in\n> [4].\n>\n>\n> [1]\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> [2]\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> (Japanese only)\n> [3] https://dl.acm.org/citation.cfm?id=2750546\n> [4]\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> [5] https://dl.acm.org/citation.cfm?id=170066\n>\n> Regards,\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n>",
"msg_date": "Sat, 8 Feb 2020 11:15:45 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
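The counting algorithm referenced in the quoted design notes (Gupta et al. [5], revisited in Kevin Grittner's thread [4]) can be sketched outside the database. The following is only an illustration in Python, not the thread's PostgreSQL implementation: each materialized-view row carries a multiplicity (the thread's `count_t` column), insert deltas increment it, delete deltas decrement it, and a row leaves the view when its count reaches zero — which is what makes DISTINCT/projection views maintainable incrementally.

```python
# Minimal sketch of the counting algorithm for incremental view
# maintenance, as described in the thread (count_t column). Illustration
# only: the "view" is a bag of projected rows stored as {row: count}.

def apply_delta(view, inserted, deleted):
    """Apply base-table delta rows to a duplicate-eliminating view.

    `view` maps a projected row tuple to its multiplicity (count_t).
    `inserted`/`deleted` play the role of rows gathered from the
    AFTER-trigger transition tables mentioned in the thread.
    """
    for row in inserted:
        view[row] = view.get(row, 0) + 1
    for row in deleted:
        n = view.get(row, 0) - 1
        if n > 0:
            view[row] = n
        else:
            # multiplicity reached zero: the row leaves the view
            view.pop(row, None)
    return view

if __name__ == "__main__":
    view = {}
    apply_delta(view, inserted=[("a",), ("a",), ("b",)], deleted=[])
    print(sorted(view.items()))   # counts remember eliminated duplicates
    apply_delta(view, inserted=[], deleted=[("a",)])
    print(sorted(view.items()))   # ("a",) survives: one duplicate remains
```

Rows are matched here by their full projected value; the thread's point 3 is precisely that a real implementation would rather identify rows via a unique index or the base tables' primary keys.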
{
"msg_contents": "On Sat, 8 Feb 2020 11:15:45 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Hi.\n> \n> UNION query problem.(server crash)\n> \n> When creating an INCREMENTAL MATERIALIZED VIEW,\n> the server process crashes if you specify a query with a UNION.\n\nThank you for your report. As you noticed, set operations including\nUNION are currently unsupported, although this is not checked at\ndefinition time and not documented either. Now we are thoroughly\ninvestigating unsupported queries, and will add checks and\ndocumentation for them.\n\nRegards,\nYugo Nagata\n\n> \n> (commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n> \n> execute log.\n> \n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\n> DROP TABLE IF EXISTS table_x CASCADE;\n> psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\n> DROP TABLE\n> DROP TABLE IF EXISTS table_y CASCADE;\n> DROP TABLE\n> CREATE TABLE table_x (id int, data numeric);\n> CREATE TABLE\n> CREATE TABLE table_y (id int, data numeric);\n> CREATE TABLE\n> INSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\n> INSERT 0 3\n> INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\n> INSERT 0 3\n> SELECT * FROM table_x;\n> id | data\n> ----+--------------------\n> 1 | 0.950724735058774\n> 2 | 0.0222670808201144\n> 3 | 0.391258547114841\n> (3 rows)\n> \n> SELECT * FROM table_y;\n> id | data\n> ----+--------------------\n> 1 | 0.991717347778337\n> 2 | 0.0528458947672874\n> 3 | 0.965044982911163\n> (3 rows)\n> \n> CREATE VIEW xy_union_v AS\n> SELECT 'table_x' AS name, * FROM table_x\n> UNION\n> SELECT 'table_y' AS name, * FROM table_y\n> ;\n> CREATE VIEW\n> TABLE xy_union_v;\n> name | id | data\n> ---------+----+--------------------\n> table_y | 2 | 0.0528458947672874\n> table_x | 2 | 0.0222670808201144\n> table_y | 3 | 0.965044982911163\n> table_x | 1 | 0.950724735058774\n> table_x | 3 | 0.391258547114841\n> table_y | 1 | 
0.991717347778337\n> (6 rows)\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\n> SELECT 'table_x' AS name, * FROM table_x\n> UNION\n> SELECT 'table_y' AS name, * FROM table_y\n> ;\n> psql:union_query_crash.sql:28: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> psql:union_query_crash.sql:28: fatal: connection to server was lost\n> ```\n> \n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> \n> > Hi,\n> >\n> > I would like to implement Incremental View Maintenance (IVM) on\n> > PostgreSQL.\n> > IVM is a technique to maintain materialized views which computes and\n> > applies\n> > only the incremental changes to the materialized views rather than\n> > recomputate the contents as the current REFRESH command does.\n> >\n> > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > [1].\n> > Our implementation uses row OIDs to compute deltas for materialized\n> > views.\n> > The basic idea is that if we have information about which rows in base\n> > tables\n> > are contributing to generate a certain row in a matview then we can\n> > identify\n> > the affected rows when a base table is updated. This is based on an idea of\n> > Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n> > approach[3].\n> >\n> > In our implementation, the mapping of the row OIDs of the materialized view\n> > and the base tables are stored in \"OID map\". When a base relation is\n> > modified,\n> > AFTER trigger is executed and the delta is recorded in delta tables using\n> > the transition table feature. The accual udpate of the matview is triggerd\n> > by REFRESH command with INCREMENTALLY option.\n> >\n> > However, we realize problems of our implementation. First, WITH OIDS will\n> > be removed since PG12, so OIDs are no longer available. 
Besides this, it\n> > would\n> > be hard to implement this since it needs many changes of executor nodes to\n> > collect base tables's OIDs during execuing a query. Also, the cost of\n> > maintaining\n> > OID map would be high.\n> >\n> > For these reasons, we started to think to implement IVM without relying on\n> > OIDs\n> > and made a bit more surveys.\n> >\n> > We also looked at Kevin Grittner's discussion [4] on incremental matview\n> > maintenance. In this discussion, Kevin proposed to use counting algorithm\n> > [5]\n> > to handle projection views (using DISTNICT) properly. This algorithm need\n> > an\n> > additional system column, count_t, in materialized views and delta tables\n> > of\n> > base tables.\n> >\n> > However, the discussion about IVM is now stoped, so we would like to\n> > restart and\n> > progress this.\n> >\n> >\n> > Through our PoC inplementation and surveys, I think we need to think at\n> > least\n> > the followings for implementing IVM.\n> >\n> > 1. How to extract changes on base tables\n> >\n> > I think there would be at least two approaches for it.\n> >\n> > - Using transition table in AFTER triggers\n> > - Extracting changes from WAL using logical decoding\n> >\n> > In our PoC implementation, we used AFTER trigger and transition tables,\n> > but using\n> > logical decoding might be better from the point of performance of base\n> > table\n> > modification.\n> >\n> > If we can represent a change of UPDATE on a base table as query-like\n> > rather than\n> > OLD and NEW, it may be possible to update the materialized view directly\n> > instead\n> > of performing delete & insert.\n> >\n> >\n> > 2. How to compute the delta to be applied to materialized views\n> >\n> > Essentially, IVM is based on relational algebra. 
Theorically, changes on\n> > base\n> > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > delta on\n> > the materialized view is computed using base table deltas based on \"change\n> > propagation equations\". For implementation, we have to derive the\n> > equation from\n> > the view definition query (Query tree, or Plan tree?) and describe this as\n> > SQL\n> > query to compulte delta to be applied to the materialized view.\n> >\n> > There could be several operations for view definition: selection,\n> > projection,\n> > join, aggregation, union, difference, intersection, etc. If we can\n> > prepare a\n> > module for each operation, it makes IVM extensable, so we can start a\n> > simple\n> > view definition, and then support more complex views.\n> >\n> >\n> > 3. How to identify rows to be modifed in materialized views\n> >\n> > When applying the delta to the materialized view, we have to identify\n> > which row\n> > in the matview is corresponding to a row in the delta. A naive method is\n> > matching\n> > by using all columns in a tuple, but clearly this is unefficient. If\n> > thematerialized\n> > view has unique index, we can use this. Maybe, we have to force\n> > materialized views\n> > to have all primary key colums in their base tables. In our PoC\n> > implementation, we\n> > used OID to identify rows, but this will be no longer available as said\n> > above.\n> >\n> >\n> > 4. When to maintain materialized views\n> >\n> > There are two candidates of the timing of maintenance, immediate (eager)\n> > or deferred.\n> >\n> > In eager maintenance, the materialized view is updated in the same\n> > transaction\n> > where the base table is updated. In deferred maintenance, this is done\n> > after the\n> > transaction is commited, for example, when view is accessed, as a response\n> > to user\n> > request, etc.\n> >\n> > In the previous discussion[4], it is planned to start from \"eager\"\n> > approach. 
In our PoC\n> > implementaion, we used the other aproach, that is, using REFRESH command\n> > to perform IVM.\n> > I am not sure which is better as a start point, but I begin to think that\n> > the eager\n> > approach may be more simple since we don't have to maintain base table\n> > changes in other\n> > past transactions.\n> >\n> > In the eager maintenance approache, we have to consider a race condition\n> > where two\n> > different transactions change base tables simultaneously as discussed in\n> > [4].\n> >\n> >\n> > [1]\n> > https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > [2]\n> > https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > (Japanese only)\n> > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > [4]\n> > https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > [5] https://dl.acm.org/citation.cfm?id=170066\n> >\n> > Regards,\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n> >\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 10 Feb 2020 10:37:50 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
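Since the crash report concerns a UNION view, it may help to sketch how delta propagation would look for exactly that view. This is an illustration only, not the patch's behavior, and it assumes the counting machinery discussed earlier in the thread: the insert delta of a UNION view is the combined, branch-tagged delta of its inputs, and the duplicate elimination that distinguishes UNION from UNION ALL again depends on per-row multiplicity bookkeeping.

```python
# Illustration of delta propagation for the reported view:
#   SELECT 'table_x' AS name, * FROM table_x
#   UNION
#   SELECT 'table_y' AS name, * FROM table_y
# Not the patch's code: rows are plain tuples, counts stand in for count_t.

from collections import Counter

def union_view_delta(x_inserted, y_inserted):
    """Combine per-branch insert deltas into one view delta (a bag)."""
    delta = Counter()
    for row in x_inserted:
        delta[("table_x",) + row] += 1   # tag rows with the branch name
    for row in y_inserted:
        delta[("table_y",) + row] += 1
    return delta

def apply(view_counts, delta):
    """Merge a delta bag into the view's multiplicity bookkeeping."""
    view_counts.update(delta)
    # the DISTINCT (UNION) output is every row with positive count
    return {row for row, n in view_counts.items() if n > 0}

if __name__ == "__main__":
    view = Counter()
    visible = apply(view, union_view_delta([(1, 0.5)], [(1, 0.9)]))
    print(sorted(visible))
```

The branch tag mirrors the `'table_x' AS name` column in the view definition: without it, identical rows from the two tables would wrongly collapse to one.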
{
"msg_contents": "Hi.\n\nI understood that UNION is unsupported.\n\nI will also refer to the implementation of check_ivm_restriction_walker()\nin \"./src/backend/commands/createas.c\" to see if there are any other\nqueries that may be problematic.\n\nMon, 10 Feb 2020 10:38 Yugo NAGATA <nagata@sraoss.co.jp>:\n\n> On Sat, 8 Feb 2020 11:15:45 +0900\n> nuko yokohama <nuko.yokohama@gmail.com> wrote:\n>\n> > Hi.\n> >\n> > UNION query problem.(server crash)\n> >\n> > When creating an INCREMENTAL MATERIALIZED VIEW,\n> > the server process crashes if you specify a query with a UNION.\n>\n> Thank you for your report. As you noticed, set operations including\n> UNION are currently unsupported, although this is not checked at\n> definition time and not documented either. Now we are thoroughly\n> investigating unsupported queries, and will add checks and\n> documentation for them.\n>\n> Regards,\n> Yugo Nagata\n>\n> >\n> > (commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n> >\n> > execute log.\n> >\n> > ```\n> > [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\n> > DROP TABLE IF EXISTS table_x CASCADE;\n> > psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\n> > DROP TABLE\n> > DROP TABLE IF EXISTS table_y CASCADE;\n> > DROP TABLE\n> > CREATE TABLE table_x (id int, data numeric);\n> > CREATE TABLE\n> > CREATE TABLE table_y (id int, data numeric);\n> > CREATE TABLE\n> > INSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > SELECT * FROM table_x;\n> > id | data\n> > ----+--------------------\n> > 1 | 0.950724735058774\n> > 2 | 0.0222670808201144\n> > 3 | 0.391258547114841\n> > (3 rows)\n> >\n> > SELECT * FROM table_y;\n> > id | data\n> > ----+--------------------\n> > 1 | 0.991717347778337\n> > 2 | 0.0528458947672874\n> > 3 | 0.965044982911163\n> > (3 rows)\n> >\n> > CREATE VIEW xy_union_v AS\n> > SELECT 'table_x' AS 
name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > CREATE VIEW\n> > TABLE xy_union_v;\n> > name | id | data\n> > ---------+----+--------------------\n> > table_y | 2 | 0.0528458947672874\n> > table_x | 2 | 0.0222670808201144\n> > table_y | 3 | 0.965044982911163\n> > table_x | 1 | 0.950724735058774\n> > table_x | 3 | 0.391258547114841\n> > table_y | 1 | 0.991717347778337\n> > (6 rows)\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\n> > SELECT 'table_x' AS name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > psql:union_query_crash.sql:28: server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > psql:union_query_crash.sql:28: fatal: connection to server was lost\n> > ```\n> >\n> > 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> >\n> > > Hi,\n> > >\n> > > I would like to implement Incremental View Maintenance (IVM) on\n> > > PostgreSQL.\n> > > IVM is a technique to maintain materialized views which computes and\n> > > applies\n> > > only the incremental changes to the materialized views rather than\n> > > recomputate the contents as the current REFRESH command does.\n> > >\n> > > I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> > > [1].\n> > > Our implementation uses row OIDs to compute deltas for materialized\n> > > views.\n> > > The basic idea is that if we have information about which rows in base\n> > > tables\n> > > are contributing to generate a certain row in a matview then we can\n> > > identify\n> > > the affected rows when a base table is updated. This is based on an\n> idea of\n> > > Dr. 
Masunaga [2] who is a member of our group and inspired from\n> ID-based\n> > > approach[3].\n> > >\n> > > In our implementation, the mapping of the row OIDs of the materialized\n> view\n> > > and the base tables are stored in \"OID map\". When a base relation is\n> > > modified,\n> > > AFTER trigger is executed and the delta is recorded in delta tables\n> using\n> > > the transition table feature. The accual udpate of the matview is\n> triggerd\n> > > by REFRESH command with INCREMENTALLY option.\n> > >\n> > > However, we realize problems of our implementation. First, WITH OIDS\n> will\n> > > be removed since PG12, so OIDs are no longer available. Besides this,\n> it\n> > > would\n> > > be hard to implement this since it needs many changes of executor\n> nodes to\n> > > collect base tables's OIDs during execuing a query. Also, the cost of\n> > > maintaining\n> > > OID map would be high.\n> > >\n> > > For these reasons, we started to think to implement IVM without\n> relying on\n> > > OIDs\n> > > and made a bit more surveys.\n> > >\n> > > We also looked at Kevin Grittner's discussion [4] on incremental\n> matview\n> > > maintenance. In this discussion, Kevin proposed to use counting\n> algorithm\n> > > [5]\n> > > to handle projection views (using DISTNICT) properly. This algorithm\n> need\n> > > an\n> > > additional system column, count_t, in materialized views and delta\n> tables\n> > > of\n> > > base tables.\n> > >\n> > > However, the discussion about IVM is now stoped, so we would like to\n> > > restart and\n> > > progress this.\n> > >\n> > >\n> > > Through our PoC inplementation and surveys, I think we need to think at\n> > > least\n> > > the followings for implementing IVM.\n> > >\n> > > 1. 
How to extract changes on base tables\n> > >\n> > > I think there would be at least two approaches for it.\n> > >\n> > > - Using transition table in AFTER triggers\n> > > - Extracting changes from WAL using logical decoding\n> > >\n> > > In our PoC implementation, we used AFTER trigger and transition tables,\n> > > but using\n> > > logical decoding might be better from the point of performance of base\n> > > table\n> > > modification.\n> > >\n> > > If we can represent a change of UPDATE on a base table as query-like\n> > > rather than\n> > > OLD and NEW, it may be possible to update the materialized view\n> directly\n> > > instead\n> > > of performing delete & insert.\n> > >\n> > >\n> > > 2. How to compute the delta to be applied to materialized views\n> > >\n> > > Essentially, IVM is based on relational algebra. Theorically, changes\n> on\n> > > base\n> > > tables are represented as deltas on this, like \"R <- R + dR\", and the\n> > > delta on\n> > > the materialized view is computed using base table deltas based on\n> \"change\n> > > propagation equations\". For implementation, we have to derive the\n> > > equation from\n> > > the view definition query (Query tree, or Plan tree?) and describe\n> this as\n> > > SQL\n> > > query to compulte delta to be applied to the materialized view.\n> > >\n> > > There could be several operations for view definition: selection,\n> > > projection,\n> > > join, aggregation, union, difference, intersection, etc. If we can\n> > > prepare a\n> > > module for each operation, it makes IVM extensable, so we can start a\n> > > simple\n> > > view definition, and then support more complex views.\n> > >\n> > >\n> > > 3. How to identify rows to be modifed in materialized views\n> > >\n> > > When applying the delta to the materialized view, we have to identify\n> > > which row\n> > > in the matview is corresponding to a row in the delta. 
A naive method\n> is\n> > > matching\n> > > by using all columns in a tuple, but clearly this is unefficient. If\n> > > thematerialized\n> > > view has unique index, we can use this. Maybe, we have to force\n> > > materialized views\n> > > to have all primary key colums in their base tables. In our PoC\n> > > implementation, we\n> > > used OID to identify rows, but this will be no longer available as said\n> > > above.\n> > >\n> > >\n> > > 4. When to maintain materialized views\n> > >\n> > > There are two candidates of the timing of maintenance, immediate\n> (eager)\n> > > or deferred.\n> > >\n> > > In eager maintenance, the materialized view is updated in the same\n> > > transaction\n> > > where the base table is updated. In deferred maintenance, this is done\n> > > after the\n> > > transaction is commited, for example, when view is accessed, as a\n> response\n> > > to user\n> > > request, etc.\n> > >\n> > > In the previous discussion[4], it is planned to start from \"eager\"\n> > > approach. 
In our PoC\n> > > implementaion, we used the other aproach, that is, using REFRESH\n> command\n> > > to perform IVM.\n> > > I am not sure which is better as a start point, but I begin to think\n> that\n> > > the eager\n> > > approach may be more simple since we don't have to maintain base table\n> > > changes in other\n> > > past transactions.\n> > >\n> > > In the eager maintenance approache, we have to consider a race\n> condition\n> > > where two\n> > > different transactions change base tables simultaneously as discussed\n> in\n> > > [4].\n> > >\n> > >\n> > > [1]\n> > >\n> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> > > [2]\n> > >\n> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> > > (Japanese only)\n> > > [3] https://dl.acm.org/citation.cfm?id=2750546\n> > > [4]\n> > >\n> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> > > [5] https://dl.acm.org/citation.cfm?id=170066\n> > >\n> > > Regards,\n> > > --\n> > > Yugo Nagata <nagata@sraoss.co.jp>\n> > >\n> > >\n>\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n\nHi. I understood that UNION is unsupported. I also refer to the implementation of check_ivm_restriction_walker() in \"./src/backend/commands/createas.c\" to see if there are any other queries that may be problematic.",
"msg_date": "Mon, 10 Feb 2020 12:34:32 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the latest patch (v13) to add support for Incremental\nView Maintenance (IVM). Differences from the previous patch (v12)\ninclude:\n\n* Allow maintaining IMMVs containing user-defined types\n \n Previously, IMMVs (Incrementally Maintainable Materialized Views)\n containing user-defined types could not be maintained and an error\n was raised because such columns were compared using pg_catalog.=\n during tuple matching. To fix this, use the column type's default\n equality operator instead of forcing to use the built-in operator.\n\n Pointed out by nuko-san.\n https://www.postgresql.org/message-id/CAF3Gu1YL7HWF0Veor3t8sQD%2BJnvozHe6WdUw0YsMqJGFezVhpg%40mail.gmail.com\n\n* Improve the error message for unsupported aggregate functions\n \n Currently, only built-in aggregate functions are supported, so\n aggregates on user-defined types cause an error at view definition\n time. However, the message was inappropriate, like:\n \n ERROR: aggregate function max is not supported\n \n even though built-in max is supported. Therefore, this is improved\n to include the argument types as follows:\n\n ERROR: aggregate function min(xxx) is not supported\n HINT: IVM supports only built-in aggregate functions.\n\n Pointed out by nuko-san.\n https://www.postgresql.org/message-id/CAF3Gu1bP0eiv%3DCqV%3D%2BxATdcmLypjjudLz_wdJgnRNULpiX9GrA%40mail.gmail.com\n \n* Doc: fix description of supported subqueries\n\n IVM supports regular EXISTS clauses, not only correlated subqueries.\n\nRegards,\nYugo Nagata\n\nOn Fri, 20 Dec 2019 14:02:32 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> IVM is a way to make materialized views up-to-date in which only\n> incremental changes are computed and applied on views rather than\n> recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n> does. 
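[Editor's note: as background to the aggregate support discussed in this patch, avg cannot be updated from the old avg alone, but it is derivable from incrementally maintained per-group sum and count. The Python sketch below is an illustration of that idea only, not the patch's implementation; the class and names are hypothetical.]

```python
# Sketch: maintain avg per group from a running sum and count, both of
# which can be updated incrementally under base-table inserts and deletes.

class GroupAvg:
    """Hypothetical per-group state for an incrementally maintained avg."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def insert(self, value):
        self.total += value
        self.count += 1

    def delete(self, value):
        self.total -= value
        self.count -= 1

    @property
    def avg(self):
        # An empty group has no average; its view row would be removed.
        return self.total / self.count if self.count else None

g = GroupAvg()
for v in (10.0, 20.0, 30.0):
    g.insert(v)
g.delete(10.0)  # apply a base-table delete without rescanning the table
# g.avg is now (20.0 + 30.0) / 2 = 25.0
```

min and max lack this property: after a delete of the current extreme, the new extreme is not derivable from the maintained state alone.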
IVM can update materialized views more efficiently\n> than recomputation when only small part of the view need updates.\n> \n> There are two approaches with regard to timing of view maintenance:\n> immediate and deferred. In immediate maintenance, views are updated in\n> the same transaction where its base table is modified. In deferred\n> maintenance, views are updated after the transaction is committed,\n> for example, when the view is accessed, as a response to user command\n> like REFRESH, or periodically in background, and so on. \n> \n> This patch implements a kind of immediate maintenance, in which\n> materialized views are updated immediately in AFTER triggers when a\n> base table is modified.\n> \n> This supports views using:\n> - inner and outer joins including self-join\n> - some built-in aggregate functions (count, sum, agv, min, max)\n> - a part of subqueries\n> -- simple subqueries in FROM clause\n> -- EXISTS subqueries in WHERE clause\n> - DISTINCT and views with tuple duplicates\n\n\nRegareds,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 10 Feb 2020 13:58:54 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Takuma Hoshiai wrote\n> Hi, \n> \n> Attached is the latest patch (v12) to add support for Incremental\n> Materialized View Maintenance (IVM).\n> It is possible to apply to current latest master branch.\n> \n> Differences from the previous patch (v11) include:\n> * support executing REFRESH MATERIALIZED VIEW command with IVM.\n> * support unscannable state by WITH NO DATA option.\n> * add a check for LIMIT/OFFSET at creating an IMMV\n> \n> If REFRESH is executed for IMMV (incremental maintainable materialized\n> view), its contents is re-calculated as same as usual materialized views\n> (full REFRESH). Although IMMV is basically keeping up-to-date data,\n> rounding errors can be accumulated in aggregated value in some cases, for\n> example, if the view contains sum/avg on float type columns. Running\n> REFRESH command on IMMV will resolve this. Also, WITH NO DATA option\n> allows to make IMMV unscannable. At that time, IVM triggers are dropped\n> from IMMV because these become unneeded and useless. 
\n> \n> [...]\n\nHello,\n\nregarding the syntax REFRESH MATERIALIZED VIEW x WITH NO DATA:\n\nI understand that triggers are removed from the source tables, transforming \nthe INCREMENTAL MATERIALIZED VIEW into a(n unscannable) MATERIALIZED VIEW.\n\npostgres=# refresh materialized view imv with no data;\nREFRESH MATERIALIZED VIEW\npostgres=# select * from imv;\nERROR: materialized view \"imv\" has not been populated\nHINT: Use the REFRESH MATERIALIZED VIEW command.\n\nThis operation seems to me more of an ALTER command than a REFRESH one.\n\nWouldn't the syntax\nALTER MATERIALIZED VIEW [ IF EXISTS ] name\n SET WITH NO DATA\nor\n SET WITHOUT DATA\nbe better?\n\nContinuing in this direction, did you ever think about another feature\nlike: \nALTER MATERIALIZED VIEW [ IF EXISTS ] name\n SET { NOINCREMENTAL }\nor even\n SET { NOINCREMENTAL | INCREMENTAL | INCREMENTAL CONCURRENTLY }\n\nthat would permit switching between those modes and would keep frozen data \navailable in the materialized view during heavy operations on source tables?\n\nRegards\nPAscal \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Tue, 11 Feb 2020 15:04:12 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi PAscal,\n\nOn Tue, 11 Feb 2020 15:04:12 -0700 (MST)\nlegrand legrand <legrand_legrand@hotmail.com> wrote:\n> \n> regarding syntax REFRESH MATERIALIZED VIEW x WITH NO DATA\n> \n> I understand that triggers are removed from the source tables, transforming \n> the INCREMENTAL MATERIALIZED VIEW into a(n unscannable) MATERIALIZED VIEW.\n> \n> postgres=# refresh materialized view imv with no data;\n> REFRESH MATERIALIZED VIEW\n> postgres=# select * from imv;\n> ERROR: materialized view \"imv\" has not been populated\n> HINT: Use the REFRESH MATERIALIZED VIEW command.\n> \n> This operation seems to me more of an ALTER command than a REFRESH ONE.\n> \n> Wouldn't the syntax\n> ALTER MATERIALIZED VIEW [ IF EXISTS ] name\n> SET WITH NO DATA\n> or\n> SET WITHOUT DATA\n> be better ?\n\nWe use \"REFRESH ... WITH NO DATA\" because that is already the syntax\nfor making materialized views non-scannable. We are just following that precedent.\n\nhttps://www.postgresql.org/docs/12/sql-refreshmaterializedview.html\n\n> \n> Continuing into this direction, did you ever think about an other feature\n> like: \n> ALTER MATERIALIZED VIEW [ IF EXISTS ] name\n> SET { NOINCREMENTAL }\n> or even\n> SET { NOINCREMENTAL | INCREMENTAL | INCREMENTAL CONCURRENTLY }\n> \n> that would permit to switch between those modes and would keep frozen data \n> available in the materialized view during heavy operations on source tables\n> ?\n\nThank you for your suggestion! I agree that a feature to switch between\na normal materialized view and an incrementally maintainable view would be useful.\nWe will add this to our ToDo list. Regarding its syntax, \nI would not like to add a new keyword like NONINCREMENTAL, so how about \nthe following:\n\n ALTER MATERIALIZED VIEW ... SET {WITH | WITHOUT} INCREMENTAL REFRESH\n\nalthough this is just an idea and we will need discussion on it.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 13 Feb 2020 15:05:40 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Yugo Nagata wrote\n> Thank you for your suggestion! I agree that the feature to switch between\n> normal materialized view and incrementally maintainable view is useful.\n> We will add this to our ToDo list. Regarding its syntax, \n> I would not like to add new keyword like NONINCREMENTAL, so how about \n> the following\n> \n> ALTER MATERIALIZED VIEW ... SET {WITH | WITHOUT} INCREMENTAL REFRESH\n> \n> although this is just a idea and we will need discussion on it.\n\nThanks I will follow that discussion on GitHub \nhttps://github.com/sraoss/pgsql-ivm/issues/79\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 13 Feb 2020 12:57:11 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi.\n\nSELECT statements with a TABLESAMPLE clause should be rejected.\n\nCurrently, CREATE INCREMENTAL MATERIALIZED VIEW allows SELECT statements\nwith the TABLESAMPLE clause.\nHowever, the result of this SELECT statement is undefined and should be\nrejected when specified in CREATE INCREMENTAL MATERIALIZED VIEW.\n(similar to handling non-immutable functions)\nRegard.\n\n2020年2月8日(土) 11:15 nuko yokohama <nuko.yokohama@gmail.com>:\n\n> Hi.\n>\n> UNION query problem.(server crash)\n>\n> When creating an INCREMENTAL MATERIALIZED VIEW,\n> the server process crashes if you specify a query with a UNION.\n>\n> (commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n>\n> execute log.\n>\n> ```\n> [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\n> DROP TABLE IF EXISTS table_x CASCADE;\n> psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\n> DROP TABLE\n> DROP TABLE IF EXISTS table_y CASCADE;\n> DROP TABLE\n> CREATE TABLE table_x (id int, data numeric);\n> CREATE TABLE\n> CREATE TABLE table_y (id int, data numeric);\n> CREATE TABLE\n> INSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\n> INSERT 0 3\n> INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\n> INSERT 0 3\n> SELECT * FROM table_x;\n> id | data\n> ----+--------------------\n> 1 | 0.950724735058774\n> 2 | 0.0222670808201144\n> 3 | 0.391258547114841\n> (3 rows)\n>\n> SELECT * FROM table_y;\n> id | data\n> ----+--------------------\n> 1 | 0.991717347778337\n> 2 | 0.0528458947672874\n> 3 | 0.965044982911163\n> (3 rows)\n>\n> CREATE VIEW xy_union_v AS\n> SELECT 'table_x' AS name, * FROM table_x\n> UNION\n> SELECT 'table_y' AS name, * FROM table_y\n> ;\n> CREATE VIEW\n> TABLE xy_union_v;\n> name | id | data\n> ---------+----+--------------------\n> table_y | 2 | 0.0528458947672874\n> table_x | 2 | 0.0222670808201144\n> table_y | 3 | 0.965044982911163\n> table_x | 1 | 0.950724735058774\n> table_x | 3 | 0.391258547114841\n> 
table_y | 1 | 0.991717347778337\n> (6 rows)\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\n> SELECT 'table_x' AS name, * FROM table_x\n> UNION\n> SELECT 'table_y' AS name, * FROM table_y\n> ;\n> psql:union_query_crash.sql:28: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> psql:union_query_crash.sql:28: fatal: connection to server was lost\n> ```\n>\n> 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n>\n>> Hi,\n>>\n>> I would like to implement Incremental View Maintenance (IVM) on\n>> PostgreSQL.\n>> IVM is a technique to maintain materialized views which computes and\n>> applies\n>> only the incremental changes to the materialized views rather than\n>> recomputate the contents as the current REFRESH command does.\n>>\n>> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n>> [1].\n>> Our implementation uses row OIDs to compute deltas for materialized\n>> views.\n>> The basic idea is that if we have information about which rows in base\n>> tables\n>> are contributing to generate a certain row in a matview then we can\n>> identify\n>> the affected rows when a base table is updated. This is based on an idea\n>> of\n>> Dr. Masunaga [2] who is a member of our group and inspired from ID-based\n>> approach[3].\n>>\n>> In our implementation, the mapping of the row OIDs of the materialized\n>> view\n>> and the base tables are stored in \"OID map\". When a base relation is\n>> modified,\n>> AFTER trigger is executed and the delta is recorded in delta tables using\n>> the transition table feature. The accual udpate of the matview is triggerd\n>> by REFRESH command with INCREMENTALLY option.\n>>\n>> However, we realize problems of our implementation. First, WITH OIDS will\n>> be removed since PG12, so OIDs are no longer available. 
Besides this, it\n>> would\n>> be hard to implement this since it needs many changes of executor nodes to\n>> collect base tables's OIDs during execuing a query. Also, the cost of\n>> maintaining\n>> OID map would be high.\n>>\n>> For these reasons, we started to think to implement IVM without relying\n>> on OIDs\n>> and made a bit more surveys.\n>>\n>> We also looked at Kevin Grittner's discussion [4] on incremental matview\n>> maintenance. In this discussion, Kevin proposed to use counting\n>> algorithm [5]\n>> to handle projection views (using DISTNICT) properly. This algorithm need\n>> an\n>> additional system column, count_t, in materialized views and delta tables\n>> of\n>> base tables.\n>>\n>> However, the discussion about IVM is now stoped, so we would like to\n>> restart and\n>> progress this.\n>>\n>>\n>> Through our PoC inplementation and surveys, I think we need to think at\n>> least\n>> the followings for implementing IVM.\n>>\n>> 1. How to extract changes on base tables\n>>\n>> I think there would be at least two approaches for it.\n>>\n>> - Using transition table in AFTER triggers\n>> - Extracting changes from WAL using logical decoding\n>>\n>> In our PoC implementation, we used AFTER trigger and transition tables,\n>> but using\n>> logical decoding might be better from the point of performance of base\n>> table\n>> modification.\n>>\n>> If we can represent a change of UPDATE on a base table as query-like\n>> rather than\n>> OLD and NEW, it may be possible to update the materialized view directly\n>> instead\n>> of performing delete & insert.\n>>\n>>\n>> 2. How to compute the delta to be applied to materialized views\n>>\n>> Essentially, IVM is based on relational algebra. Theorically, changes on\n>> base\n>> tables are represented as deltas on this, like \"R <- R + dR\", and the\n>> delta on\n>> the materialized view is computed using base table deltas based on \"change\n>> propagation equations\". 
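[Editor's note: the counting algorithm referenced as [5] above can be pictured outside SQL. The toy Python below is an illustration only, not the proposed implementation; it keeps a count_t multiplicity per distinct projected row, so a base-table delete removes a view row only when its count reaches zero.]

```python
# Toy model of the counting algorithm for a DISTINCT projection view:
# the view stores a multiplicity (count_t) per distinct projected row.

def apply_delta(view_counts, inserted, deleted, project):
    """Apply a base-table delta to view_counts (projected row -> count_t)."""
    for row in inserted:
        key = project(row)
        view_counts[key] = view_counts.get(key, 0) + 1
    for row in deleted:
        key = project(row)
        view_counts[key] -= 1
        if view_counts[key] == 0:
            del view_counts[key]  # no contributing base rows remain
    return view_counts

# View: SELECT DISTINCT name FROM base  (base rows are (name, id) pairs)
view = {}
apply_delta(view, [("alice", 1), ("alice", 2), ("bob", 3)], [], lambda r: r[0])
apply_delta(view, [], [("alice", 1)], lambda r: r[0])
# "alice" is kept: one contributing base row remains (count_t = 1)
```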
For implementation, we have to derive the\n>> equation from\n>> the view definition query (Query tree, or Plan tree?) and describe this\n>> as SQL\n>> query to compulte delta to be applied to the materialized view.\n>>\n>> There could be several operations for view definition: selection,\n>> projection,\n>> join, aggregation, union, difference, intersection, etc. If we can\n>> prepare a\n>> module for each operation, it makes IVM extensable, so we can start a\n>> simple\n>> view definition, and then support more complex views.\n>>\n>>\n>> 3. How to identify rows to be modifed in materialized views\n>>\n>> When applying the delta to the materialized view, we have to identify\n>> which row\n>> in the matview is corresponding to a row in the delta. A naive method is\n>> matching\n>> by using all columns in a tuple, but clearly this is unefficient. If\n>> thematerialized\n>> view has unique index, we can use this. Maybe, we have to force\n>> materialized views\n>> to have all primary key colums in their base tables. In our PoC\n>> implementation, we\n>> used OID to identify rows, but this will be no longer available as said\n>> above.\n>>\n>>\n>> 4. When to maintain materialized views\n>>\n>> There are two candidates of the timing of maintenance, immediate (eager)\n>> or deferred.\n>>\n>> In eager maintenance, the materialized view is updated in the same\n>> transaction\n>> where the base table is updated. In deferred maintenance, this is done\n>> after the\n>> transaction is commited, for example, when view is accessed, as a\n>> response to user\n>> request, etc.\n>>\n>> In the previous discussion[4], it is planned to start from \"eager\"\n>> approach. 
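[Editor's note: point 3 above, locating the target view row by a unique key instead of comparing every column, can be sketched with a toy mapping. Plain Python, hypothetical names; in the real proposal the key would come from a unique index or the base tables' primary key columns.]

```python
# Toy model of point 3: find the view row to modify via a unique key
# (here the base table's primary key) instead of matching all columns.

def apply_update(view, pk_cols, old_row, new_row):
    """view maps a primary-key tuple to the full row; lookup is O(1)."""
    old_key = tuple(old_row[c] for c in pk_cols)
    new_key = tuple(new_row[c] for c in pk_cols)
    del view[old_key]  # located directly by key, no full-tuple scan
    view[new_key] = new_row

view = {(1,): {"id": 1, "data": 0.95},
        (2,): {"id": 2, "data": 0.02}}
apply_update(view, ["id"],
             old_row={"id": 2, "data": 0.02},
             new_row={"id": 2, "data": 0.39})
# the row keyed by id=2 now carries data=0.39; id=1 is untouched
```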
In our PoC\n>> implementaion, we used the other aproach, that is, using REFRESH command\n>> to perform IVM.\n>> I am not sure which is better as a start point, but I begin to think that\n>> the eager\n>> approach may be more simple since we don't have to maintain base table\n>> changes in other\n>> past transactions.\n>>\n>> In the eager maintenance approache, we have to consider a race condition\n>> where two\n>> different transactions change base tables simultaneously as discussed in\n>> [4].\n>>\n>>\n>> [1]\n>> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n>> [2]\n>> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n>> (Japanese only)\n>> [3] https://dl.acm.org/citation.cfm?id=2750546\n>> [4]\n>> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n>> [5] https://dl.acm.org/citation.cfm?id=170066\n>>\n>> Regards,\n>> --\n>> Yugo Nagata <nagata@sraoss.co.jp>\n>>\n>>\n\nHi.\nSELECT statements with a TABLESAMPLE clause should be rejected.\nCurrently, CREATE INCREMENTAL MATERIALIZED VIEW allows SELECT statements with the TABLESAMPLE clause.\nHowever, the result of this SELECT statement is undefined and should be \nrejected when specified in CREATE INCREMENTAL MATERIALIZED VIEW.\n(similar to handling non-immutable functions)\nRegard.2020年2月8日(土) 11:15 nuko yokohama <nuko.yokohama@gmail.com>:Hi.UNION query problem.(server crash)When creating an INCREMENTAL MATERIALIZED VIEW, the server process crashes if you specify a query with a UNION.(commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)execute log.```[ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sqlDROP TABLE IF EXISTS table_x CASCADE;psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_vDROP TABLEDROP TABLE IF EXISTS table_y CASCADE;DROP TABLECREATE TABLE table_x 
(id int, data numeric);CREATE TABLECREATE TABLE table_y (id int, data numeric);CREATE TABLEINSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);INSERT 0 3INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);INSERT 0 3SELECT * FROM table_x; id | data----+-------------------- 1 | 0.950724735058774 2 | 0.0222670808201144 3 | 0.391258547114841(3 rows)SELECT * FROM table_y; id | data----+-------------------- 1 | 0.991717347778337 2 | 0.0528458947672874 3 | 0.965044982911163(3 rows)CREATE VIEW xy_union_v ASSELECT 'table_x' AS name, * FROM table_xUNIONSELECT 'table_y' AS name, * FROM table_y;CREATE VIEWTABLE xy_union_v; name | id | data---------+----+-------------------- table_y | 2 | 0.0528458947672874 table_x | 2 | 0.0222670808201144 table_y | 3 | 0.965044982911163 table_x | 1 | 0.950724735058774 table_x | 3 | 0.391258547114841 table_y | 1 | 0.991717347778337(6 rows)CREATE INCREMENTAL MATERIALIZED VIEW xy_imv ASSELECT 'table_x' AS name, * FROM table_xUNIONSELECT 'table_y' AS name, * FROM table_y;psql:union_query_crash.sql:28: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.psql:union_query_crash.sql:28: fatal: connection to server was lost```UNION query problem.(server crash)When creating an INCREMENTAL MATERIALIZED VIEW, the server process crashes if you specify a query with a UNION.(commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)execute log.```[ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sqlDROP TABLE IF EXISTS table_x CASCADE;psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_vDROP TABLEDROP TABLE IF EXISTS table_y CASCADE;DROP TABLECREATE TABLE table_x (id int, data numeric);CREATE TABLECREATE TABLE table_y (id int, data numeric);CREATE TABLEINSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);INSERT 0 3INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);INSERT 0 3SELECT 
* FROM table_x; id | data----+-------------------- 1 | 0.950724735058774 2 | 0.0222670808201144 3 | 0.391258547114841(3 rows)SELECT * FROM table_y; id | data----+-------------------- 1 | 0.991717347778337 2 | 0.0528458947672874 3 | 0.965044982911163(3 rows)CREATE VIEW xy_union_v ASSELECT 'table_x' AS name, * FROM table_xUNIONSELECT 'table_y' AS name, * FROM table_y;CREATE VIEWTABLE xy_union_v; name | id | data---------+----+-------------------- table_y | 2 | 0.0528458947672874 table_x | 2 | 0.0222670808201144 table_y | 3 | 0.965044982911163 table_x | 1 | 0.950724735058774 table_x | 3 | 0.391258547114841 table_y | 1 | 0.991717347778337(6 rows)CREATE INCREMENTAL MATERIALIZED VIEW xy_imv ASSELECT 'table_x' AS name, * FROM table_xUNIONSELECT 'table_y' AS name, * FROM table_y;psql:union_query_crash.sql:28: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.psql:union_query_crash.sql:28: fatal: connection to server was lost```2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:Hi,\n\nI would like to implement Incremental View Maintenance (IVM) on PostgreSQL. \nIVM is a technique to maintain materialized views which computes and applies\nonly the incremental changes to the materialized views rather than\nrecomputate the contents as the current REFRESH command does. \n\nI had a presentation on our PoC implementation of IVM at PGConf.eu 2018 [1].\nOur implementation uses row OIDs to compute deltas for materialized views. \nThe basic idea is that if we have information about which rows in base tables\nare contributing to generate a certain row in a matview then we can identify\nthe affected rows when a base table is updated. This is based on an idea of\nDr. Masunaga [2] who is a member of our group and inspired from ID-based\napproach[3].\n\nIn our implementation, the mapping of the row OIDs of the materialized view\nand the base tables are stored in \"OID map\". 
When a base relation is modified,\nan AFTER trigger is executed and the delta is recorded in delta tables using\nthe transition table feature. The actual update of the matview is triggered\nby the REFRESH command with the INCREMENTALLY option. \n\nHowever, we realized problems with our implementation. First, WITH OIDS will\nbe removed in PG12, so OIDs are no longer available. Besides this, it would\nbe hard to implement this since it needs many changes to executor nodes to\ncollect base tables' OIDs while executing a query. Also, the cost of maintaining\nthe OID map would be high.\n\nFor these reasons, we started to think about implementing IVM without relying on OIDs\nand did a bit more survey work. \n\nWe also looked at Kevin Grittner's discussion [4] on incremental matview\nmaintenance. In this discussion, Kevin proposed using the counting algorithm [5]\nto handle projection views (using DISTINCT) properly. This algorithm needs an\nadditional system column, count_t, in materialized views and in the delta tables of\nbase tables. \n\nHowever, the discussion about IVM has now stopped, so we would like to restart and\nprogress this.\n\n\nThrough our PoC implementation and surveys, I think we need to think about at least\nthe following for implementing IVM.\n\n1. How to extract changes on base tables\n\nI think there would be at least two approaches for it.\n\n - Using transition tables in AFTER triggers\n - Extracting changes from WAL using logical decoding\n\nIn our PoC implementation, we used AFTER triggers and transition tables, but using\nlogical decoding might be better from the standpoint of performance of base table \nmodification.\n\nIf we can represent an UPDATE change on a base table in a query-like form rather than\nas OLD and NEW, it may be possible to update the materialized view directly instead\nof performing delete & insert.\n\n\n2. How to compute the delta to be applied to materialized views\n\nEssentially, IVM is based on relational algebra. 
Theoretically, changes on base\ntables are represented as deltas on this, like \"R <- R + dR\", and the delta on\nthe materialized view is computed from the base table deltas based on \"change\npropagation equations\". For the implementation, we have to derive the equation from\nthe view definition query (Query tree, or Plan tree?) and describe this as an SQL\nquery to compute the delta to be applied to the materialized view.\n\nThere could be several operations in a view definition: selection, projection, \njoin, aggregation, union, difference, intersection, etc. If we can prepare a\nmodule for each operation, it makes IVM extensible, so we can start with simple \nview definitions, and then support more complex views.\n\n\n3. How to identify rows to be modified in materialized views\n\nWhen applying the delta to the materialized view, we have to identify which row\nin the matview corresponds to a row in the delta. A naive method is matching\nby using all columns in a tuple, but clearly this is inefficient. If the materialized\nview has a unique index, we can use this. Maybe, we have to force materialized views\nto contain all primary key columns of their base tables. In our PoC implementation, we\nused OIDs to identify rows, but these will no longer be available as said above.\n\n\n4. When to maintain materialized views\n\nThere are two candidates for the timing of maintenance, immediate (eager) or deferred.\n\nIn eager maintenance, the materialized view is updated in the same transaction\nwhere the base table is updated. In deferred maintenance, this is done after the\ntransaction is committed, for example, when the view is accessed, as a response to user\nrequest, etc.\n\nIn the previous discussion[4], it was planned to start from the \"eager\" approach. 
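(As an aside, the counting-based delta propagation sketched in item 2 can be simulated in a few lines of Python. Every name below is made up for illustration; this only sketches the algebra of V <- V + dV for a selection view kept as a multiset with per-tuple counts, in the spirit of the count_t column of the counting algorithm [5], and is not the proposed implementation.)

```python
from collections import Counter

def apply_delta(view_counts, inserted, deleted, pred):
    # V <- V + sigma_pred(dR): raise per-tuple counts for inserted base rows
    for t, n in Counter(t for t in inserted if pred(t)).items():
        view_counts[t] += n
    # V <- V - sigma_pred(dR-): lower counts for deleted base rows,
    # dropping tuples whose multiplicity reaches zero
    for t, n in Counter(t for t in deleted if pred(t)).items():
        view_counts[t] -= n
        if view_counts[t] <= 0:
            del view_counts[t]
    return view_counts

# initial (full) evaluation of the view V = sigma_{id > 1}(R)
base = [(1, 'a'), (2, 'b'), (2, 'b')]
view = Counter(t for t in base if t[0] > 1)

# propagate a mixed delta instead of recomputing the whole view
apply_delta(view, inserted=[(3, 'c'), (2, 'b')], deleted=[(2, 'b')],
            pred=lambda t: t[0] > 1)
```

The point of the sketch is that only the delta rows are touched; a tuple leaves the view exactly when its count drops to zero, which is what makes duplicates and DISTINCT maintainable without a full refresh.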
In our PoC\nimplementation, we used the other approach, that is, using the REFRESH command to perform IVM.\nI am not sure which is better as a starting point, but I begin to think that the eager\napproach may be simpler since we don't have to maintain base table changes from other\npast transactions.\n\nIn the eager maintenance approach, we have to consider a race condition where two\ndifferent transactions change base tables simultaneously, as discussed in [4].\n\n\n[1] https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n[2] https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1 (Japanese only)\n[3] https://dl.acm.org/citation.cfm?id=2750546\n[4] https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n[5] https://dl.acm.org/citation.cfm?id=170066\n\nRegards,\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Tue, 18 Feb 2020 22:03:47 +0900",
"msg_from": "nuko yokohama <nuko.yokohama@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 18 Feb 2020 22:03:47 +0900\nnuko yokohama <nuko.yokohama@gmail.com> wrote:\n\n> Hi.\n> \n> SELECT statements with a TABLESAMPLE clause should be rejected.\n> \n> Currently, CREATE INCREMENTAL MATERIALIZED VIEW allows SELECT statements\n> with the TABLESAMPLE clause.\n> However, the result of this SELECT statement is undefined and should be\n> rejected when specified in CREATE INCREMENTAL MATERIALIZED VIEW.\n> (similar to handling non-immutable functions)\n\nThanks! We totally agree with you. We are now working on improvement of\nquery checks at creating IMMV. TABLESAMPLE will also be checked in this.\n\nRegards,\nYugo Nagata\n\n> Regard.\n> \n> 2020年2月8日(土) 11:15 nuko yokohama <nuko.yokohama@gmail.com>:\n> \n> > Hi.\n> >\n> > UNION query problem.(server crash)\n> >\n> > When creating an INCREMENTAL MATERIALIZED VIEW,\n> > the server process crashes if you specify a query with a UNION.\n> >\n> > (commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n> >\n> > execute log.\n> >\n> > ```\n> > [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\n> > DROP TABLE IF EXISTS table_x CASCADE;\n> > psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\n> > DROP TABLE\n> > DROP TABLE IF EXISTS table_y CASCADE;\n> > DROP TABLE\n> > CREATE TABLE table_x (id int, data numeric);\n> > CREATE TABLE\n> > CREATE TABLE table_y (id int, data numeric);\n> > CREATE TABLE\n> > INSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > SELECT * FROM table_x;\n> > id | data\n> > ----+--------------------\n> > 1 | 0.950724735058774\n> > 2 | 0.0222670808201144\n> > 3 | 0.391258547114841\n> > (3 rows)\n> >\n> > SELECT * FROM table_y;\n> > id | data\n> > ----+--------------------\n> > 1 | 0.991717347778337\n> > 2 | 0.0528458947672874\n> > 3 | 0.965044982911163\n> > (3 rows)\n> >\n> > CREATE VIEW xy_union_v AS\n> > 
SELECT 'table_x' AS name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > CREATE VIEW\n> > TABLE xy_union_v;\n> > name | id | data\n> > ---------+----+--------------------\n> > table_y | 2 | 0.0528458947672874\n> > table_x | 2 | 0.0222670808201144\n> > table_y | 3 | 0.965044982911163\n> > table_x | 1 | 0.950724735058774\n> > table_x | 3 | 0.391258547114841\n> > table_y | 1 | 0.991717347778337\n> > (6 rows)\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\n> > SELECT 'table_x' AS name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > psql:union_query_crash.sql:28: server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > psql:union_query_crash.sql:28: fatal: connection to server was lost\n> > ```\n> > UNION query problem.(server crash)\n> >\n> > When creating an INCREMENTAL MATERIALIZED VIEW,\n> > the server process crashes if you specify a query with a UNION.\n> >\n> > (commit id = 23151be7be8d8f8f9c35c2d0e4e5353aedf2b31e)\n> >\n> > execute log.\n> >\n> > ```\n> > [ec2-user@ip-10-0-1-10 ivm]$ psql testdb -e -f union_query_crash.sql\n> > DROP TABLE IF EXISTS table_x CASCADE;\n> > psql:union_query_crash.sql:6: NOTICE: drop cascades to view xy_union_v\n> > DROP TABLE\n> > DROP TABLE IF EXISTS table_y CASCADE;\n> > DROP TABLE\n> > CREATE TABLE table_x (id int, data numeric);\n> > CREATE TABLE\n> > CREATE TABLE table_y (id int, data numeric);\n> > CREATE TABLE\n> > INSERT INTO table_x VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > INSERT INTO table_y VALUES (generate_series(1, 3), random()::numeric);\n> > INSERT 0 3\n> > SELECT * FROM table_x;\n> > id | data\n> > ----+--------------------\n> > 1 | 0.950724735058774\n> > 2 | 0.0222670808201144\n> > 3 | 0.391258547114841\n> > (3 rows)\n> >\n> > SELECT * FROM table_y;\n> > id | data\n> > ----+--------------------\n> > 1 | 
0.991717347778337\n> > 2 | 0.0528458947672874\n> > 3 | 0.965044982911163\n> > (3 rows)\n> >\n> > CREATE VIEW xy_union_v AS\n> > SELECT 'table_x' AS name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > CREATE VIEW\n> > TABLE xy_union_v;\n> > name | id | data\n> > ---------+----+--------------------\n> > table_y | 2 | 0.0528458947672874\n> > table_x | 2 | 0.0222670808201144\n> > table_y | 3 | 0.965044982911163\n> > table_x | 1 | 0.950724735058774\n> > table_x | 3 | 0.391258547114841\n> > table_y | 1 | 0.991717347778337\n> > (6 rows)\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW xy_imv AS\n> > SELECT 'table_x' AS name, * FROM table_x\n> > UNION\n> > SELECT 'table_y' AS name, * FROM table_y\n> > ;\n> > psql:union_query_crash.sql:28: server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > psql:union_query_crash.sql:28: fatal: connection to server was lost\n> > ```\n> >\n> > 2018年12月27日(木) 21:57 Yugo Nagata <nagata@sraoss.co.jp>:\n> >\n> >> Hi,\n> >>\n> >> I would like to implement Incremental View Maintenance (IVM) on\n> >> PostgreSQL.\n> >> IVM is a technique to maintain materialized views which computes and\n> >> applies\n> >> only the incremental changes to the materialized views rather than\n> >> recomputate the contents as the current REFRESH command does.\n> >>\n> >> I had a presentation on our PoC implementation of IVM at PGConf.eu 2018\n> >> [1].\n> >> Our implementation uses row OIDs to compute deltas for materialized\n> >> views.\n> >> The basic idea is that if we have information about which rows in base\n> >> tables\n> >> are contributing to generate a certain row in a matview then we can\n> >> identify\n> >> the affected rows when a base table is updated. This is based on an idea\n> >> of\n> >> Dr. 
Masunaga [2] who is a member of our group and inspired from ID-based\n> >> approach[3].\n> >>\n> >> In our implementation, the mapping of the row OIDs of the materialized\n> >> view\n> >> and the base tables are stored in \"OID map\". When a base relation is\n> >> modified,\n> >> AFTER trigger is executed and the delta is recorded in delta tables using\n> >> the transition table feature. The accual udpate of the matview is triggerd\n> >> by REFRESH command with INCREMENTALLY option.\n> >>\n> >> However, we realize problems of our implementation. First, WITH OIDS will\n> >> be removed since PG12, so OIDs are no longer available. Besides this, it\n> >> would\n> >> be hard to implement this since it needs many changes of executor nodes to\n> >> collect base tables's OIDs during execuing a query. Also, the cost of\n> >> maintaining\n> >> OID map would be high.\n> >>\n> >> For these reasons, we started to think to implement IVM without relying\n> >> on OIDs\n> >> and made a bit more surveys.\n> >>\n> >> We also looked at Kevin Grittner's discussion [4] on incremental matview\n> >> maintenance. In this discussion, Kevin proposed to use counting\n> >> algorithm [5]\n> >> to handle projection views (using DISTNICT) properly. This algorithm need\n> >> an\n> >> additional system column, count_t, in materialized views and delta tables\n> >> of\n> >> base tables.\n> >>\n> >> However, the discussion about IVM is now stoped, so we would like to\n> >> restart and\n> >> progress this.\n> >>\n> >>\n> >> Through our PoC inplementation and surveys, I think we need to think at\n> >> least\n> >> the followings for implementing IVM.\n> >>\n> >> 1. 
How to extract changes on base tables\n> >>\n> >> I think there would be at least two approaches for it.\n> >>\n> >> - Using transition table in AFTER triggers\n> >> - Extracting changes from WAL using logical decoding\n> >>\n> >> In our PoC implementation, we used AFTER trigger and transition tables,\n> >> but using\n> >> logical decoding might be better from the point of performance of base\n> >> table\n> >> modification.\n> >>\n> >> If we can represent a change of UPDATE on a base table as query-like\n> >> rather than\n> >> OLD and NEW, it may be possible to update the materialized view directly\n> >> instead\n> >> of performing delete & insert.\n> >>\n> >>\n> >> 2. How to compute the delta to be applied to materialized views\n> >>\n> >> Essentially, IVM is based on relational algebra. Theorically, changes on\n> >> base\n> >> tables are represented as deltas on this, like \"R <- R + dR\", and the\n> >> delta on\n> >> the materialized view is computed using base table deltas based on \"change\n> >> propagation equations\". For implementation, we have to derive the\n> >> equation from\n> >> the view definition query (Query tree, or Plan tree?) and describe this\n> >> as SQL\n> >> query to compulte delta to be applied to the materialized view.\n> >>\n> >> There could be several operations for view definition: selection,\n> >> projection,\n> >> join, aggregation, union, difference, intersection, etc. If we can\n> >> prepare a\n> >> module for each operation, it makes IVM extensable, so we can start a\n> >> simple\n> >> view definition, and then support more complex views.\n> >>\n> >>\n> >> 3. How to identify rows to be modifed in materialized views\n> >>\n> >> When applying the delta to the materialized view, we have to identify\n> >> which row\n> >> in the matview is corresponding to a row in the delta. A naive method is\n> >> matching\n> >> by using all columns in a tuple, but clearly this is unefficient. 
If\n> >> thematerialized\n> >> view has unique index, we can use this. Maybe, we have to force\n> >> materialized views\n> >> to have all primary key colums in their base tables. In our PoC\n> >> implementation, we\n> >> used OID to identify rows, but this will be no longer available as said\n> >> above.\n> >>\n> >>\n> >> 4. When to maintain materialized views\n> >>\n> >> There are two candidates of the timing of maintenance, immediate (eager)\n> >> or deferred.\n> >>\n> >> In eager maintenance, the materialized view is updated in the same\n> >> transaction\n> >> where the base table is updated. In deferred maintenance, this is done\n> >> after the\n> >> transaction is commited, for example, when view is accessed, as a\n> >> response to user\n> >> request, etc.\n> >>\n> >> In the previous discussion[4], it is planned to start from \"eager\"\n> >> approach. In our PoC\n> >> implementaion, we used the other aproach, that is, using REFRESH command\n> >> to perform IVM.\n> >> I am not sure which is better as a start point, but I begin to think that\n> >> the eager\n> >> approach may be more simple since we don't have to maintain base table\n> >> changes in other\n> >> past transactions.\n> >>\n> >> In the eager maintenance approache, we have to consider a race condition\n> >> where two\n> >> different transactions change base tables simultaneously as discussed in\n> >> [4].\n> >>\n> >>\n> >> [1]\n> >> https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2195-implementing-incremental-view-maintenance-on-postgresql/\n> >> [2]\n> >> https://ipsj.ixsq.nii.ac.jp/ej/index.php?active_action=repository_view_main_item_detail&page_id=13&block_id=8&item_id=191254&item_no=1\n> >> (Japanese only)\n> >> [3] https://dl.acm.org/citation.cfm?id=2750546\n> >> [4]\n> >> https://www.postgresql.org/message-id/flat/1368561126.64093.YahooMailNeo%40web162904.mail.bf1.yahoo.com\n> >> [5] https://dl.acm.org/citation.cfm?id=170066\n> >>\n> >> Regards,\n> >> --\n> >> Yugo Nagata 
<nagata@sraoss.co.jp>\n> >>\n> >>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 19 Feb 2020 09:11:35 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, \n\nAttached is the latest patch (v14) to add support for Incremental Materialized\nView Maintenance (IVM). It applies to the current master branch.\n\nDifferences from the previous patch (v13) include:\n\n* Support base tables using RLS\n\nIf a table has a Row Level Security (RLS) policy, the IMMV is updated based on\nthe view owner's policy when a base table is updated. However, when a policy\nof a base table is changed or created after creating the IMMV, the IMMV is not updated\nbased on the new RLS policy. In this case, the REFRESH command must be executed.\n\n* Use ENR instead of temporary tables for internal operation\n\nPreviously, IVM created and used temporary tables to store view delta rows.\nHowever, this caused out-of-shared-memory errors, and Tom Lane pointed out that \nusing temp tables in IVM triggers is not good.\n\nCurrently, IVM uses tuplestores and ephemeral named relations (ENR) instead\nof temporary tables. It doesn't cause the previous problem, as shown below:\n\ntestdb=# create table b1 (id integer, x numeric(10,3));\nCREATE TABLE\ntestdb=# create incremental materialized view mv1 \ntestdb-# as select id, count(*),sum(x) from b1 group by id;\nSELECT 0\ntestdb=# \ntestdb=# do $$ \ntestdb$# declare \ntestdb$# i integer;\ntestdb$# begin \ntestdb$# for i in 1..10000 \ntestdb$# loop \ntestdb$# insert into b1 values (1,1); \ntestdb$# end loop; \ntestdb$# end;\ntestdb$# $$\ntestdb-# ;\nDO\ntestdb=# \n\nThis issue was reported by PAscal.\nhttps://www.postgresql.org/message-id/1577564109604-0.post@n3.nabble.com\n\n\n* Support pg_dump/pg_restore for IVM\n\nIVM supports the pg_dump and pg_restore commands.\n\n* Prohibit rename and unique index creation on IVM columns\n\nWhen a user makes a unique index on IVM columns such as ivm_count, IVM will fail due to\nthe unique constraint violation, so IVM prohibits it.\nRenaming these columns also causes IVM to fail, so IVM prohibits it too.\n\n* Fix incorrect WHERE condition check for outer-join views\n\nThe check for non-null-rejecting conditions was incorrect.\n\nBest Regards,\nTakuma Hoshiai\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_date": "Thu, 27 Feb 2020 15:06:49 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> I have tried to use an other patch with yours:\n> \"Planning counters in pg_stat_statements (using pgss_store)\"\n\n> setting\n> shared_preload_libraries='pg_stat_statements'\n> pg_stat_statements.track=all\n> restarting the cluster and creating the extension\n\n\n> When trying following syntax:\n\n> create table b1 (id integer, x numeric(10,3));\n> create incremental materialized view mv1 as select id, count(*),sum(x) \n> from b1 group by id;\n> insert into b1 values (1,1)\n>\n> I got an ASSERT FAILURE in pg_stat_statements.c\n> on\n> \tAssert(query != NULL);\n>\n> comming from matview.c\n>\trefresh_matview_datafill(dest_old, query, queryEnv, NULL);\n> or\n>\trefresh_matview_datafill(dest_new, query, queryEnv, NULL);\n>\n> If this (last) NULL field was replaced by the query text, \n> a comment or just \"n/a\",\n> it would fix the problem.\n\n> Could this be investigated ?\n\nHello,\n\nThank you for patch v14, which fixes the problems inherited from temporary tables.\nIt seems that this ASSERT problem with the pgss patch is still present ;o(\n\nCould we have a look ?\n\nThanks in advance\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 27 Feb 2020 14:35:55 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 27 Feb 2020 14:35:55 -0700 (MST)\nlegrand legrand <legrand_legrand@hotmail.com> wrote:\n\n> > I have tried to use an other patch with yours:\n> > \"Planning counters in pg_stat_statements (using pgss_store)\"\n> \n> > setting\n> > shared_preload_libraries='pg_stat_statements'\n> > pg_stat_statements.track=all\n> > restarting the cluster and creating the extension\n> \n> \n> > When trying following syntax:\n> \n> > create table b1 (id integer, x numeric(10,3));\n> > create incremental materialized view mv1 as select id, count(*),sum(x) \n> > from b1 group by id;\n> > insert into b1 values (1,1)\n> >\n> > I got an ASSERT FAILURE in pg_stat_statements.c\n> > on\n> > \tAssert(query != NULL);\n> >\n> > comming from matview.c\n> >\trefresh_matview_datafill(dest_old, query, queryEnv, NULL);\n> > or\n> >\trefresh_matview_datafill(dest_new, query, queryEnv, NULL);\n> >\n> > If this (last) NULL field was replaced by the query text, \n> > a comment or just \"n/a\",\n> > it would fix the problem.\n> \n> > Could this be investigated ?\n> \n> Hello,\n> \n> thank you for patch v14, that fix problems inherited from temporary tables.\n> it seems that this ASSERT problem with pgss patch is still present ;o(\n> \n> Could we have a look ?\n\nSorry, but we are busy fixing and improving the IVM patches. I think fixing\nthe assertion failure needs non-trivial changes to other parts of PostgreSQL.\nSo we would like to work on the issue you reported after the pgss patch\ngets committed.\n\nBest Regards,\n\nTakuma Hoshiai\n\n \n> Thanks in advance\n> Regards\n> PAscal\n> \n> \n> \n> --\n> Sent from:\n> https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n> \n> \n> \n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n",
"msg_date": "Fri, 28 Feb 2020 16:29:14 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> thank you for patch v14, that fix problems inherited from temporary\ntables.\n>> it seems that this ASSERT problem with pgss patch is still present ;o(\n>> \n>\n> Sorry but we are busy on fixing and improving IVM patches. I think fixing\n> the assertion failure needs non trivial changes to other part of\n> PosthreSQL.\n> So we would like to work on the issue you reported after the pgss patch\n> gets committed.\n\nImagine it will happen tomorrow !\nYou may say I'm a dreamer\nBut I'm not the only one\n...\n...\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 28 Feb 2020 02:23:25 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, \n\nAttached is the latest patch (v15) to add support for Incremental Materialized\nView Maintenance (IVM). It applies to the current master branch.\n\nDifferences from the previous patch (v14) include:\n\n* Fix to not use generate_series when views are queried\n \nIn the previous implementation, the multiplicity of each tuple was stored\nin an ivm_count column in views. When SELECT was issued for views with\nduplicates, the view was replaced with a subquery in which each tuple\nwas joined with the generate_series function in order to output as many\ncopies of each tuple as its ivm_count.\n\nThis was problematic for the following reasons:\n \n- The overhead was huge. When almost all tuples in a view were selected,\n it took much longer than the original query. This defeated the purpose\n of materialized views.\n \n- The optimizer could not estimate row numbers correctly because it had to\n know the ivm_count values stored in tuples.\n \n- System columns of materialized views like cmin, xmin, xmax could not\n be used because the view was replaced with a subquery.\n \nTo resolve this, the new implementation doesn't store multiplicities\nfor views with tuple duplicates, and doesn't use generate_series\nwhen a SELECT query is issued for such views.\n \nNote that we still have to use ivm_count for supporting DISTINCT and\naggregates.\n\n* Add query checks for IVM restrictions\n \nQuery checks for the following restrictions are added:\n \n- DISTINCT ON\n- TABLESAMPLE parameter\n- inheritance parent table\n- window function\n- some aggregate options (such as FILTER, DISTINCT, ORDER and GROUPING SETS)\n- targetlist containing IVM column\n- simple subquery is only supported\n- FOR UPDATE/SHARE\n- empty target list\n- UNION/INTERSECT/EXCEPT\n- GROUPING SETS clauses\n\n* Improve error messages\n \nAdd error code ERRCODE_FEATURE_NOT_SUPPORTED to each IVM error message.\nAlso, the message format was unified.\n\n* Support subqueries containing joins in FROM clause\n \nPreviously, when multiple tables were updated simultaneously, incremental\nview maintenance with subqueries including JOIN didn't work correctly\ndue to a bug. \n\nBest Regards,\nTakuma Hoshiai\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 10 Apr 2020 23:26:58 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 10 Apr 2020 23:26:58 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi, \n> \n> Attached is the latest patch (v15) to add support for Incremental Materialized\n> View Maintenance (IVM). It is possible to apply to current latest master branch.\n\nI found a mistake of splitting patch, so I attached the fixed patch (v15a).\n \n> Differences from the previous patch (v14) include:\n> \n> * Fix to not use generate_series when views are queried\n> \n> In the previous implementation, multiplicity of each tuple was stored\n> in ivm_count column in views. When SELECT was issued for views with\n> duplicate, the view was replaced with a subquery in which each tuple\n> was joined with generate_series function in order to output tuples\n> of the number of ivm_count.\n> \n> This was problematic for following reasons:\n> \n> - The overhead was huge. When almost of tuples in a view were selected,\n> it took much longer time than the original query. This lost the meaning\n> of materialized views.\n> \n> - Optimizer could not estimate row numbers correctly because this had to\n> know ivm_count values stored in tuples.\n> \n> - System columns of materialized views like cmin, xmin, xmax could not\n> be used because a view was replaced with a subquery.\n> \n> To resolve this, the new implementation doen't store multiplicities\n> for views with tuple duplicates, and doesn't use generate_series\n> when SELECT query is issued for such views.\n> \n> Note that we still have to use ivm_count for supporting DISTINCT and\n> aggregates.\n\nI also explain the way of updating views with tuple duplicates.\n\nAlthough a view itself doesn't have ivm_count column, multiplicities\nfor old delta and new delta are calculated and the count value is\ncontained in a column named __ivm_count__ in each delta table.\n \nThe old delta table is applied using ctid and row_number function. 
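(As an aside, the bounded per-group deletion used for the old delta can be simulated in a few lines of Python. The names and data below are made up for illustration; this only models the semantics of deleting, per tuple value, at most __ivm_count__ matching rows, not the real row_number()/ctid mechanism.)

```python
from collections import Counter

def apply_old_delta(matview_rows, old_delta_counts):
    # Per tuple value, delete at most the delta's multiplicity from the
    # view; surviving duplicates correspond to rows whose row_number()
    # would exceed __ivm_count__ in the SQL formulation.
    remaining = Counter(old_delta_counts)
    kept = []
    for row in matview_rows:
        if remaining[row] > 0:
            remaining[row] -= 1   # this occurrence is consumed by the delta
        else:
            kept.append(row)      # excess duplicates stay in the view
    return kept

# the view holds ('x', 1) three times; the old delta removes it twice
after = apply_old_delta([('x', 1), ('x', 1), ('x', 1), ('y', 2)],
                        {('x', 1): 2})
```

Here one duplicate of ('x', 1) and the untouched ('y', 2) survive, which is exactly the behavior the partition-wise row_number comparison enforces.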
\nrow_number is used to number tuples in the view, and tuples whose\nnumber is less than or equal to __ivm_count__ are deleted from the\nview using a query like:\n \n DELETE FROM matviewname WHERE ctid IN (\n SELECT tid FROM (\n SELECT row_number() over (partition by c1, c2, ...) AS __ivm_row_number__,\n mv.ctid AS tid,\n diff.__ivm_count__\n FROM matviewname AS mv, old_delta AS diff\n WHERE mv.c1 = diff.c1 AND mv.c2 = diff.c2 AND ... ) v\n WHERE v.__ivm_row_number__ <= v.__ivm_count__ )\n \nThe new delta is applied using generate_series to insert multiple copies\nof the same tuple, using a query like:\n\n INSERT INTO matviewname (c1, c2, ...)\n SELECT c1,c2,... FROM (\n SELECT diff.*, generate_series(\n\n> \n> * Add query checks for IVM restrictions\n> \n> Query checks for following restrictions are added:\n> \n> - DISTINCT ON\n> - TABLESAMPLE parameter\n> - inheritance parent table\n> - window function\n> - some aggregate options(such as FILTER, DISTINCT, ORDER and GROUPING SETS)\n> - targetlist containing IVM column\n> - simple subquery is only supported\n> - FOR UPDATE/SHARE\n> - empty target list\n> - UNION/INTERSECT/EXCEPT\n> - GROUPING SETS clauses\n> \n> * Improve error messages\n> \n> Add error code ERRCODE_FEATURE_NOT_SUPPORTED to each IVM error message.\n> Also, the message format was unified.\n> \n> * Support subqueries containig joins in FROM clause\n> \n> Previously, when multi tables are updated simultaneously, incremental\n> view maintenance with subqueries including JOIN didn't work correctly\n> due to a bug. \n> \n> Best Regards,\n> Takuma Hoshiai\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 13 Apr 2020 14:18:35 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> Hi, \n>> \n>> Attached is the latest patch (v15) to add support for Incremental Materialized\n>> View Maintenance (IVM). It is possible to apply to current latest master branch.\n\nI have tried to use IVM against TPC-DS (http://www.tpc.org/tpcds/)\nqueries. TPC-DS models decision support systems and those queries are\nmodestly complex. So I thought applying IVM to those queries could\nshow how IVM covers real-world queries.\n\nSince IVM does not support queries including ORDER BY and LIMIT, I\nremoved them from the queries before the test.\n\nHere are some facts learned so far in this attempt.\n\n- The number of TPC-DS query files is 99.\n- IVM was successfully applied to 20 queries.\n- 33 queries failed because they use a WITH clause (CTE) (currently IVM does not support CTEs).\n- Error messages from the failed queries (except those using WITH) are below:\n (the number indicates how many queries failed for the same reason)\n\n11\t aggregate functions in nested query are not supported on incrementally maintainable materialized view\n8\t window functions are not supported on incrementally maintainable materialized view\n7\t UNION/INTERSECT/EXCEPT statements are not supported on incrementally maintainable materialized view\n5\t WHERE clause only support subquery with EXISTS clause\n3\t GROUPING SETS, ROLLUP, or CUBE clauses is not supported on incrementally maintainable materialized view\n3\t aggregate function and EXISTS condition are not supported at the same time\n2\t GROUP BY expression not appeared in select list is not supported on incrementally maintainable materialized view\n2\t aggregate function with DISTINCT arguments is not supported on incrementally maintainable materialized view\n2\t aggregate is not supported with outer join\n1\t aggregate function stddev_samp(integer) is not supported on incrementally maintainable materialized view\n1\t HAVING clause is not supported on incrementally maintainable materialized view\n1\t subquery is not supported with outer join\n1\tcolumn \"avg\" specified more than once\n\nAttached are the queries to which IVM was successfully applied.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 08 May 2020 10:13:06 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, May 8, 2020 at 9:13 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> Hi,\n> >>\n> >> Attached is the latest patch (v15) to add support for Incremental\n> Materialized\n> >> View Maintenance (IVM). It is possible to apply to current latest\n> master branch.\n>\n> I have tried to use IVM against TPC-DS (http://www.tpc.org/tpcds/)\n> queries. TPC-DS models decision support systems and those queries are\n> modestly complex. So I thought applying IVM to those queries could\n> show how IVM covers real world queries.\n>\n+1, This is a smart idea. How did you test it? AFAIK, we can test it\nwith:\n\n1. For any query like SELECT xxx, we create a view like CREATE MATERIALIZED VIEW\nmv_name as SELECT xxx; to test if the features in the query are supported.\n2. Update the data and then compare the result of SELECT XXX with SELECT\n* from mv_name to test if the data is correctly synced.\n\nBest Regards\nAndy Fan",
"msg_date": "Fri, 8 May 2020 13:08:50 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> +1, This is a smart idea. How did you test it? AFAIK, we can test it\n> with:\n> \n> 1. For any query like SELECT xxx, we create view like CREATE MATERIAL VIEW\n> mv_name as SELECT xxx; to test if the features in the query are supported.\n\nNo I didn't test the correctness of IVM with TPC-DS data for now.\nTPC-DS comes with a data generator and we can test IVM something like:\n\nSELECT * FROM IVM_vew EXCEPT SELECT ... (TPC-DS original query);\n\nIf this produces 0 row, then the IVM is correct for the initial data.\n(of course actually we need to add appropreate ORDER BY and LIMIT\nclause to the SELECT statement for IVM if neccessary).\n\n> 2. Update the data and then compare the result with SELECT XXX with SELECT\n> * from mv_name to test if the data is correctly sync.\n\nI wanted to test the data updating but I am still struggling how to\nextract correct updating data from TPC-DS data set.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 08 May 2020 15:52:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Thanks for the patch!\n\n\n> Query checks for following restrictions are added:\n\n\nAre all known supported cases listed below?\n\n\n> - inheritance parent table\n> ...\n> - targetlist containing IVM column\n> - simple subquery is only supported\n>\n\nHow to understand 3 items above?\n\n-\nBest Regards\nAndy Fan\n\nThanks for the patch! \nQuery checks for following restrictions are added:Are all known supported cases listed below? \n- inheritance parent table...\n- targetlist containing IVM column\n- simple subquery is only supportedHow to understand 3 items above? - Best RegardsAndy Fan",
"msg_date": "Tue, 7 Jul 2020 11:11:08 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": ">> Query checks for following restrictions are added:\n> \n> \n> Are all known supported cases listed below?\n\nThey are \"restrictions\" and are not supported.\n> \n>> - inheritance parent table\n>> ...\n>> - targetlist containing IVM column\n>> - simple subquery is only supported\n>>\n> \n> How to understand 3 items above?\n\nThe best way to understand them is looking into regression test.\nsrc/test/regress/expected/incremental_matview.out.\n\n>> - inheritance parent table\n-- inheritance parent is not supported with IVM\"\nBEGIN;\nCREATE TABLE parent (i int, v int);\nCREATE TABLE child_a(options text) INHERITS(parent);\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm21 AS SELECT * FROM parent;\nERROR: inheritance parent is not supported on incrementally maintainable materialized view\n\n>> - targetlist containing IVM column\n\n-- tartget list cannot contain ivm clumn that start with '__ivm'\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm28 AS SELECT i AS \"__ivm_count__\" FROM mv_base_a;\nERROR: column name __ivm_count__ is not supported on incrementally maintainable materialized view\n\n>> - simple subquery is only supported\n-- subquery is not supported with outer join\nCREATE INCREMENTAL MATERIALIZED VIEW mv(a,b) AS SELECT a.i, b.i FROM mv_base_a a LEFT JOIN (SELECT * FROM mv_base_b) b ON a.i=b.i;\nERROR: this query is not allowed on incrementally maintainable materialized view\nHINT: subquery is not supported with outer join\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 07 Jul 2020 16:26:34 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, Jul 7, 2020 at 3:26 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> Query checks for following restrictions are added:\n> >\n> >\n> > Are all known supported cases listed below?\n>\n> They are \"restrictions\" and are not supported.\n>\n\nYes, I missed the \"not\" word:(\n\n>\n> >> - inheritance parent table\n> >> ...\n> >> - targetlist containing IVM column\n> >> - simple subquery is only supported\n> >>\n> >\n> > How to understand 3 items above?\n>\n> The best way to understand them is looking into regression test.\n>\n\nThanks for sharing, I will look into it.\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Jul 7, 2020 at 3:26 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:>> Query checks for following restrictions are added:\n> \n> \n> Are all known supported cases listed below?\n\nThey are \"restrictions\" and are not supported.Yes, I missed the \"not\" word:( \n> \n>> - inheritance parent table\n>> ...\n>> - targetlist containing IVM column\n>> - simple subquery is only supported\n>>\n> \n> How to understand 3 items above?\n\nThe best way to understand them is looking into regression test.Thanks for sharing, I will look into it. -- Best RegardsAndy Fan",
"msg_date": "Thu, 9 Jul 2020 20:20:41 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the rebased patch (v16) to add support for Incremental\nMaterialized View Maintenance (IVM). It is able to be applied to\ncurrent latest master branch.\n\nThis also includes the following small fixes:\n\n- Add a query check for expressions containing aggregates in it\n- [doc] Add description about which situations IVM is effective or not in\n- Improve hint in log messages\n- Reorganize include directives in codes\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 18 Aug 2020 18:52:36 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "I have looked into this.\n\n> Hi,\n> \n> Attached is the rebased patch (v16) to add support for Incremental\n> Materialized View Maintenance (IVM). It is able to be applied to\n> current latest master branch.\n> \n> This also includes the following small fixes:\n> \n> - Add a query check for expressions containing aggregates in it\n> - [doc] Add description about which situations IVM is effective or not in\n> - Improve hint in log messages\n> - Reorganize include directives in codes\n\n- make check passed.\n- make check-world passed.\n\n- 0004-Allow-to-prolong-life-span-of-transition-tables-unti.patch:\n This one needs a comment to describe what the function does etc.\n\n +void\n +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n +{\n\n\n- 0007-Add-aggregates-support-in-IVM.patch\n \"Check if the given aggregate function is supporting\" shouldn't be\n \"Check if the given aggregate function is supporting IVM\"?\n\n+ * check_aggregate_supports_ivm\n+ *\n+ * Check if the given aggregate function is supporting\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 19 Aug 2020 10:02:42 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 19 Aug 2020 10:02:42 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> I have looked into this.\n\nThank you for your reviewing!\n \n> - 0004-Allow-to-prolong-life-span-of-transition-tables-unti.patch:\n> This one needs a comment to describe what the function does etc.\n> \n> +void\n> +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n> +{\n\nI added a comment for this function and related places.\n\n+/*\n+ * SetTransitionTablePreserved\n+ *\n+ * Prolong lifespan of transition tables corresponding specified relid and\n+ * command type to the end of the outmost query instead of each nested query.\n+ * This enables to use nested AFTER trigger's transition tables from outer\n+ * query's triggers. Currently, only immediate incremental view maintenance\n+ * uses this.\n+ */\n+void\n+SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n\nAlso, I removed releted unnecessary code which was left accidentally.\n\n \n> - 0007-Add-aggregates-support-in-IVM.patch\n> \"Check if the given aggregate function is supporting\" shouldn't be\n> \"Check if the given aggregate function is supporting IVM\"?\n\nYes, you are right. I fixed this, too.\n\n> \n> + * check_aggregate_supports_ivm\n> + *\n> + * Check if the given aggregate function is supporting\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 21 Aug 2020 17:23:20 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Yugo NAGATA <nagata@sraoss.co.jp>\nSubject: Re: Implementing Incremental View Maintenance\nDate: Fri, 21 Aug 2020 17:23:20 +0900\nMessage-ID: <20200821172320.a2506577d5244b6066f69331@sraoss.co.jp>\n\n> On Wed, 19 Aug 2020 10:02:42 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> I have looked into this.\n> \n> Thank you for your reviewing!\n> \n>> - 0004-Allow-to-prolong-life-span-of-transition-tables-unti.patch:\n>> This one needs a comment to describe what the function does etc.\n>> \n>> +void\n>> +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n>> +{\n> \n> I added a comment for this function and related places.\n> \n> +/*\n> + * SetTransitionTablePreserved\n> + *\n> + * Prolong lifespan of transition tables corresponding specified relid and\n> + * command type to the end of the outmost query instead of each nested query.\n> + * This enables to use nested AFTER trigger's transition tables from outer\n> + * query's triggers. Currently, only immediate incremental view maintenance\n> + * uses this.\n> + */\n> +void\n> +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n> \n> Also, I removed releted unnecessary code which was left accidentally.\n> \n> \n>> - 0007-Add-aggregates-support-in-IVM.patch\n>> \"Check if the given aggregate function is supporting\" shouldn't be\n>> \"Check if the given aggregate function is supporting IVM\"?\n> \n> Yes, you are right. I fixed this, too.\n> \n>> \n>> + * check_aggregate_supports_ivm\n>> + *\n>> + * Check if the given aggregate function is supporting\n\nThanks for the fixes. I have changed the commit fest status to \"Ready\nfor Committer\".\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 21 Aug 2020 21:40:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nI updated the wiki page.\nhttps://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\nOn Fri, 21 Aug 2020 21:40:50 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> From: Yugo NAGATA <nagata@sraoss.co.jp>\n> Subject: Re: Implementing Incremental View Maintenance\n> Date: Fri, 21 Aug 2020 17:23:20 +0900\n> Message-ID: <20200821172320.a2506577d5244b6066f69331@sraoss.co.jp>\n> \n> > On Wed, 19 Aug 2020 10:02:42 +0900 (JST)\n> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > \n> >> I have looked into this.\n> > \n> > Thank you for your reviewing!\n> > \n> >> - 0004-Allow-to-prolong-life-span-of-transition-tables-unti.patch:\n> >> This one needs a comment to describe what the function does etc.\n> >> \n> >> +void\n> >> +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n> >> +{\n> > \n> > I added a comment for this function and related places.\n> > \n> > +/*\n> > + * SetTransitionTablePreserved\n> > + *\n> > + * Prolong lifespan of transition tables corresponding specified relid and\n> > + * command type to the end of the outmost query instead of each nested query.\n> > + * This enables to use nested AFTER trigger's transition tables from outer\n> > + * query's triggers. Currently, only immediate incremental view maintenance\n> > + * uses this.\n> > + */\n> > +void\n> > +SetTransitionTablePreserved(Oid relid, CmdType cmdType)\n> > \n> > Also, I removed releted unnecessary code which was left accidentally.\n> > \n> > \n> >> - 0007-Add-aggregates-support-in-IVM.patch\n> >> \"Check if the given aggregate function is supporting\" shouldn't be\n> >> \"Check if the given aggregate function is supporting IVM\"?\n> > \n> > Yes, you are right. I fixed this, too.\n> > \n> >> \n> >> + * check_aggregate_supports_ivm\n> >> + *\n> >> + * Check if the given aggregate function is supporting\n> \n> Thanks for the fixes. 
I have changed the commit fest status to \"Ready\n> for Committer\".\n> \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 31 Aug 2020 14:31:10 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Nagata-san,\n\nOn Mon, Aug 31, 2020 at 5:32 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n\nThanks for writing this!\n\n+ /*\n+ * Wait for concurrent transactions which update this materialized view at\n+ * READ COMMITED. This is needed to see changes committed in other\n+ * transactions. No wait and raise an error at REPEATABLE READ or\n+ * SERIALIZABLE to prevent update anomalies of matviews.\n+ * XXX: dead-lock is possible here.\n+ */\n+ if (!IsolationUsesXactSnapshot())\n+ LockRelationOid(matviewOid, ExclusiveLock);\n+ else if (!ConditionalLockRelationOid(matviewOid, ExclusiveLock))\n\nCould you please say a bit more about your plans for concurrency control?\n\nSimple hand-crafted \"rollup\" triggers typically conflict only when\nmodifying the same output rows due to update/insert conflicts, or\nperhaps some explicit row level locking if they're doing something\ncomplex (unfortunately, they also very often have concurrency\nbugs...). In some initial reading about MV maintenance I did today in\nthe hope of understanding some more context for this very impressive\nbut rather intimidating patch set, I gained the impression that\naggregate-row locking granularity is assumed as a baseline for eager\nincremental aggregate maintenance. I understand that our\nMVCC/snapshot scheme introduces extra problems, but I'm wondering if\nthese problems can be solved using the usual update semantics (the\nEvalPlanQual mechanism), and perhaps also some UPSERT logic. Why is\nit not sufficient to have locked all the base table rows that you have\nmodified, captured the before-and-after values generated by those\nupdates, and also locked all the IMV aggregate rows you will read, and\nin the process acquired a view of the latest committed state of the\nIMV aggregate rows you will modify (possibly having waited first)? 
In\nother words, what other data do you look at, while computing the\nincremental update, that might suffer from anomalies because of\nsnapshots and concurrency? For one thing, I am aware that unique\nindexes for groups would probably be necessary; perhaps some subtle\nproblems of the sort usually solved with predicate locks lurk there?\n\n(Newer papers describe locking schemes that avoid even aggregate-row\nlevel conflicts, by taking advantage of the associativity and\ncommutativity of aggregates like SUM and COUNT. You can allow N\nwriters to update the aggregate concurrently, and if any transaction\nhas to roll back it subtracts what it added, not necessarily restoring\nthe original value, so that nobody conflicts with anyone else, or\nsomething like that... Contemplating an MVCC, no-rollbacks version of\nthat sort of thing leads to ideas like, I dunno, update chains\ncontaining differential update trees to be compacted later... egad!)\n\n\n",
"msg_date": "Sat, 5 Sep 2020 17:56:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Thomas,\n\nThank you for your comment!\n\nOn Sat, 5 Sep 2020 17:56:18 +1200\nThomas Munro <thomas.munro@gmail.com> wrote:\n> + /*\n> + * Wait for concurrent transactions which update this materialized view at\n> + * READ COMMITED. This is needed to see changes committed in other\n> + * transactions. No wait and raise an error at REPEATABLE READ or\n> + * SERIALIZABLE to prevent update anomalies of matviews.\n> + * XXX: dead-lock is possible here.\n> + */\n> + if (!IsolationUsesXactSnapshot())\n> + LockRelationOid(matviewOid, ExclusiveLock);\n> + else if (!ConditionalLockRelationOid(matviewOid, ExclusiveLock))\n> \n> Could you please say a bit more about your plans for concurrency control?\n> \n> Simple hand-crafted \"rollup\" triggers typically conflict only when\n> modifying the same output rows due to update/insert conflicts, or\n> perhaps some explicit row level locking if they're doing something\n> complex (unfortunately, they also very often have concurrency\n> bugs...). In some initial reading about MV maintenance I did today in\n> the hope of understanding some more context for this very impressive\n> but rather intimidating patch set, I gained the impression that\n> aggregate-row locking granularity is assumed as a baseline for eager\n> incremental aggregate maintenance. I understand that our\n> MVCC/snapshot scheme introduces extra problems, but I'm wondering if\n> these problems can be solved using the usual update semantics (the\n> EvalPlanQual mechanism), and perhaps also some UPSERT logic. Why is\n> it not sufficient to have locked all the base table rows that you have\n> modified, captured the before-and-after values generated by those\n> updates, and also locked all the IMV aggregate rows you will read, and\n> in the process acquired a view of the latest committed state of the\n> IMV aggregate rows you will modify (possibly having waited first)? 
In\n> other words, what other data do you look at, while computing the\n> incremental update, that might suffer from anomalies because of\n> snapshots and concurrency? For one thing, I am aware that unique\n> indexes for groups would probably be necessary; perhaps some subtle\n> problems of the sort usually solved with predicate locks lurk there?\n\nI decided to lock a matview considering views joining tables. \nFor example, let V = R*S is an incrementally maintainable materialized \nview which joins tables R and S. Suppose there are two concurrent \ntransactions T1 which changes table R to R' and T2 which changes S to S'. \nWithout any lock, in READ COMMITTED mode, V would be updated to\nrepresent V=R'*S in T1, and V=R*S' in T2, so it would cause inconsistency. \nBy locking the view V, transactions T1, T2 are processed serially and this \ninconsistency can be avoided.\n\nI also thought it might be resolved using tuple locks and EvalPlanQual\ninstead of table level lock, but there is still a unavoidable case. For\nexample, suppose that tuple dR is inserted into R in T1, and dS is inserted\ninto S in T2. Also, suppose that dR and dS will be joined in according to\nthe view definition. In this situation, without any lock, the change of V is\ncomputed as dV=dR*S in T1, dV=R*dS in T2, respectively, and dR*dS would not\nbe included in the results. This causes inconsistency. I don't think this\ncould be resolved even if we use tuple locks.\n\nAs to aggregate view without join , however, we might be able to use a lock\nof more low granularity as you said, because if rows belonging a group in a\ntable is changes, we just update (or delete) corresponding rows in the view. \nEven if there are concurrent transactions updating the same table, we would\nbe able to make one of them wait using tuple lock. 
If concurrent transactions\nare trying to insert a tuple into the same table, we might need to use unique\nindex and UPSERT to avoid to insert multiple rows with same group key into\nthe view.\n\nTherefore, usual update semantics (tuple locks and EvalPlanQual) and UPSERT\ncan be used for optimization for some classes of view, but we don't have any\nother better idea than using table lock for views joining tables. We would\nappreciate it if you could suggest better solution. \n\n> (Newer papers describe locking schemes that avoid even aggregate-row\n> level conflicts, by taking advantage of the associativity and\n> commutativity of aggregates like SUM and COUNT. You can allow N\n> writers to update the aggregate concurrently, and if any transaction\n> has to roll back it subtracts what it added, not necessarily restoring\n> the original value, so that nobody conflicts with anyone else, or\n> something like that... Contemplating an MVCC, no-rollbacks version of\n> that sort of thing leads to ideas like, I dunno, update chains\n> containing differential update trees to be compacted later... egad!)\n\nI am interested in papers you mentioned! Are they literatures in context of\nincremental view maintenance?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 9 Sep 2020 09:27:52 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, Sep 9, 2020 at 12:29 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> I also thought it might be resolved using tuple locks and EvalPlanQual\n> instead of table level lock, but there is still a unavoidable case. For\n> example, suppose that tuple dR is inserted into R in T1, and dS is inserted\n> into S in T2. Also, suppose that dR and dS will be joined in according to\n> the view definition. In this situation, without any lock, the change of V is\n> computed as dV=dR*S in T1, dV=R*dS in T2, respectively, and dR*dS would not\n> be included in the results. This causes inconsistency. I don't think this\n> could be resolved even if we use tuple locks.\n\nI see. Thanks for the explanation!\n\n> As to aggregate view without join , however, we might be able to use a lock\n> of more low granularity as you said, because if rows belonging a group in a\n> table is changes, we just update (or delete) corresponding rows in the view.\n> Even if there are concurrent transactions updating the same table, we would\n> be able to make one of them wait using tuple lock. If concurrent transactions\n> are trying to insert a tuple into the same table, we might need to use unique\n> index and UPSERT to avoid to insert multiple rows with same group key into\n> the view.\n>\n> Therefore, usual update semantics (tuple locks and EvalPlanQual) and UPSERT\n> can be used for optimization for some classes of view, but we don't have any\n> other better idea than using table lock for views joining tables. We would\n> appreciate it if you could suggest better solution.\n\nI have nothing, I'm just reading starter papers and trying to learn a\nbit more about the concepts at this stage. 
I was thinking of\nreviewing some of the more mechanical parts of the patch set, though,\nlike perhaps the transition table lifetime management, since I have\nworked on that area before.\n\n> > (Newer papers describe locking schemes that avoid even aggregate-row\n> > level conflicts, by taking advantage of the associativity and\n> > commutativity of aggregates like SUM and COUNT. You can allow N\n> > writers to update the aggregate concurrently, and if any transaction\n> > has to roll back it subtracts what it added, not necessarily restoring\n> > the original value, so that nobody conflicts with anyone else, or\n> > something like that... Contemplating an MVCC, no-rollbacks version of\n> > that sort of thing leads to ideas like, I dunno, update chains\n> > containing differential update trees to be compacted later... egad!)\n>\n> I am interested in papers you mentioned! Are they literatures in context of\n> incremental view maintenance?\n\nYeah. I was skim-reading some parts of [1] including section 2.5.1\n\"Concurrency Control\", which opens with some comments about\naggregates, locking and pointers to \"V-locking\" [2] for high\nconcurrency aggregates. There is also a pointer to G. Graefe and M.\nJ. Zwilling, \"Transaction support for indexed views,\" which I haven't\nlocated; apparently indexed views are Graefe's name for MVs, and\napparently this paper has a section on MVCC systems which sounds\ninteresting for us.\n\n[1] https://dsf.berkeley.edu/cs286/papers/mv-fntdb2012.pdf\n[2] http://pages.cs.wisc.edu/~gangluo/latch.pdf\n\n\n",
"msg_date": "Wed, 9 Sep 2020 14:22:28 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 9 Sep 2020 14:22:28 +1200\nThomas Munro <thomas.munro@gmail.com> wrote:\n\n> > Therefore, usual update semantics (tuple locks and EvalPlanQual) and UPSERT\n> > can be used for optimization for some classes of view, but we don't have any\n> > other better idea than using table lock for views joining tables. We would\n> > appreciate it if you could suggest better solution.\n> \n> I have nothing, I'm just reading starter papers and trying to learn a\n> bit more about the concepts at this stage. I was thinking of\n> reviewing some of the more mechanical parts of the patch set, though,\n> like perhaps the transition table lifetime management, since I have\n> worked on that area before.\n\nThank you for your interrest. It would be greatly appreciated if you\ncould review the patch.\n\n> > > (Newer papers describe locking schemes that avoid even aggregate-row\n> > > level conflicts, by taking advantage of the associativity and\n> > > commutativity of aggregates like SUM and COUNT. You can allow N\n> > > writers to update the aggregate concurrently, and if any transaction\n> > > has to roll back it subtracts what it added, not necessarily restoring\n> > > the original value, so that nobody conflicts with anyone else, or\n> > > something like that... Contemplating an MVCC, no-rollbacks version of\n> > > that sort of thing leads to ideas like, I dunno, update chains\n> > > containing differential update trees to be compacted later... egad!)\n> >\n> > I am interested in papers you mentioned! Are they literatures in context of\n> > incremental view maintenance?\n> \n> Yeah. I was skim-reading some parts of [1] including section 2.5.1\n> \"Concurrency Control\", which opens with some comments about\n> aggregates, locking and pointers to \"V-locking\" [2] for high\n> concurrency aggregates. There is also a pointer to G. Graefe and M.\n> J. 
Zwilling, \"Transaction support for indexed views,\" which I haven't\n> located; apparently indexed views are Graefe's name for MVs, and\n> apparently this paper has a section on MVCC systems which sounds\n> interesting for us.\n> \n> [1] https://dsf.berkeley.edu/cs286/papers/mv-fntdb2012.pdf\n> [2] http://pages.cs.wisc.edu/~gangluo/latch.pdf\n\nThanks for your information! I will also check references\nregarding with IVM and concurrency control.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 9 Sep 2020 14:49:24 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> I have nothing, I'm just reading starter papers and trying to learn a\n> bit more about the concepts at this stage. I was thinking of\n> reviewing some of the more mechanical parts of the patch set, though,\n> like perhaps the transition table lifetime management, since I have\n> worked on that area before.\n\nDo you have comments on this part?\n\nI am asking because these patch sets are now getting closer to\ncommittable state in my opinion, and if there's someting wrong, it\nshould be fixed soon so that these patches are getting into the master\nbranch.\n\nI think this feature has been long awaited by users and merging the\npatches should be a benefit for them.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 17 Sep 2020 09:42:45 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, Sep 17, 2020 at 09:42:45AM +0900, Tatsuo Ishii wrote:\n> I am asking because these patch sets are now getting closer to\n> committable state in my opinion, and if there's someting wrong, it\n> should be fixed soon so that these patches are getting into the master\n> branch.\n> \n> I think this feature has been long awaited by users and merging the\n> patches should be a benefit for them.\n\nI don't have much thoughts to offer about that, but this patch is\nfailing to apply, so a rebase is at least necessary.\n--\nMichael",
"msg_date": "Thu, 1 Oct 2020 12:34:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> On Thu, Sep 17, 2020 at 09:42:45AM +0900, Tatsuo Ishii wrote:\n>> I am asking because these patch sets are now getting closer to\n>> committable state in my opinion, and if there's someting wrong, it\n>> should be fixed soon so that these patches are getting into the master\n>> branch.\n>> \n>> I think this feature has been long awaited by users and merging the\n>> patches should be a benefit for them.\n> \n> I don't have much thoughts to offer about that, but this patch is\n> failing to apply, so a rebase is at least necessary.\n\nYes. I think he is going to post a new patch (possibly with\nenhancements) soon.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 01 Oct 2020 13:03:51 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\n\nOn 2020/10/01 13:03, Tatsuo Ishii wrote:\n>> On Thu, Sep 17, 2020 at 09:42:45AM +0900, Tatsuo Ishii wrote:\n>>> I am asking because these patch sets are now getting closer to\n>>> committable state in my opinion, and if there's someting wrong, it\n>>> should be fixed soon so that these patches are getting into the master\n>>> branch.\n>>>\n>>> I think this feature has been long awaited by users and merging the\n>>> patches should be a benefit for them.\n>>\n>> I don't have much thoughts to offer about that, but this patch is\n>> failing to apply, so a rebase is at least necessary.\n> \n> Yes. I think he is going to post a new patch (possibly with\n> enhancements) soon.\n\nWhen I glanced the doc patch (i.e., 0012), I found some typos.\n\n+ <command>CRATE INCREMENTAL MATERIALIZED VIEW</command>, for example:\n\nTypo: CRATE should be CREATE ?\n\n+ with <literal>__ivm_</literal> and they contains information required\n\nTypo: contains should be contain ?\n\n+ For exmaple, here are two materialized views based on the same view\n\nTypo: exmaple should be example ?\n\n+ maintenance can be lager than <command>REFRESH MATERIALIZED VIEW</command>\n\nTypo: lager should be larger ?\n\n+postgres=# SELECt * FROM m; -- automatically updated\n\nTypo: SELECt should be SELECT ?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Oct 2020 13:43:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 1 Oct 2020 13:43:49 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n \n> When I glanced the doc patch (i.e., 0012), I found some typos.\n\nThank you for pointing out the typos! I'll fix them.\n\n\n> \n> + <command>CRATE INCREMENTAL MATERIALIZED VIEW</command>, for example:\n> \n> Typo: CRATE should be CREATE ?\n> \n> + with <literal>__ivm_</literal> and they contains information required\n> \n> Typo: contains should be contain ?\n> \n> + For exmaple, here are two materialized views based on the same view\n> \n> Typo: exmaple should be example ?\n> \n> + maintenance can be lager than <command>REFRESH MATERIALIZED VIEW</command>\n> \n> Typo: lager should be larger ?\n> \n> +postgres=# SELECt * FROM m; -- automatically updated\n> \n> Typo: SELECt should be SELECT ?\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 1 Oct 2020 14:06:27 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the rebased patch (v18) to add support for Incremental\nMaterialized View Maintenance (IVM). It applies to the\ncurrent master branch.\n\nAlso, it now supports simple CTEs (WITH clauses) which, like simple\nsub-queries, do not contain aggregates or DISTINCT. This feature is provided\nas an additional patch segment \"0010-Add-CTE-support-in-IVM.patch\".\n\n==== Example ====\n\ncte=# TABLE r;\n i | v \n---+----\n 1 | 10\n 2 | 20\n(2 rows)\n\ncte=# TABLE s;\n i | v \n---+-----\n 2 | 200\n 3 | 300\n(2 rows)\n\ncte=# \\d+ mv\n Materialized view \"public.mv\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+---------+--------------+-------------\n r | integer | | | | plain | | \n x | integer | | | | plain | | \nView definition:\n WITH x AS (\n SELECT s.i,\n s.v\n FROM s\n )\n SELECT r.v AS r,\n x.v AS x\n FROM r,\n x\n WHERE r.i = x.i;\nAccess method: heap\nIncremental view maintenance: yes\n\ncte=# SELECT * FROM mv;\n r | x \n----+-----\n 20 | 200\n(1 row)\n\ncte=# INSERT INTO r VALUES (3,30);\nINSERT 0 1\ncte=# INSERT INTO s VALUES (1,100);\nINSERT 0 1\ncte=# SELECT * FROM mv;\n r | x \n----+-----\n 20 | 200\n 30 | 300\n 10 | 100\n(3 rows)\n\n======================\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 5 Oct 2020 18:16:18 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nI have reviewed the past discussions in this thread on the IVM implementation\nof the proposed patch[1], and summarized them as follows. We would appreciate\nany comments or suggestions on the patch in regard to them.\n\n* Aggregate support\n\nThe current patch supports several built-in aggregates, that is, count, sum,\navg, min, and max. Other built-in aggregates or user-defined aggregates are\nnot supported.\n\nAggregates in a materialized view definition are checked for support\nusing the OIDs of the aggregate functions. To this end, Gen_fmgrtab.pl is changed to\noutput aggregate functions' OIDs to fmgroids.h\n(by 0006-Change-Gen_fmgrtab.pl-to-output-aggregate-function-s.patch).\nThe logic for each aggregate function to update aggregate values in\nmaterialized views is embedded in a trigger function.\n \nThere was another option in the past discussion. That is, we could add one\nor more new attributes to pg_aggregate which provide information about whether\neach aggregate function supports IVM and its logic[2]. If we had a mechanism\nto support IVM in pg_aggregate, we could use more general aggregate functions,\nincluding user-defined aggregates, in materialized views for IVM.\n\nFor example, the current pg_aggregate has an aggcombinefn attribute for\nsupporting partial aggregation. Maybe we could use combine functions to\ncalculate new aggregate values in materialized views when tuples are\ninserted into a base table. However, in the context of IVM, we also need\nanother function used when tuples are deleted from a base table, so we cannot\nuse partial aggregation for IVM in the current implementation.\n\nMaybe we could support the deletion case by adding a new support function,\nsay, an \"inverse combine function\". The \"inverse combine function\" would take\nthe aggregate value in a materialized view and the aggregate value calculated from a\ndelta of the view, and produce the new aggregate value which equals the result\nafter tuples in a base table are deleted.\n\nHowever, we don't have a concrete plan for the new design of pg_aggregate.\nIn addition, even if we made a new support function in pg_aggregate for IVM,\nwe couldn't use it in the current IVM code because our code uses SQL via SPI\nin order to update a materialized view, and we can't call \"internal\" type\nfunctions directly in SQL.\n\nFor these reasons, in the current patch, we decided to leave supporting\ngeneral aggregates to the next version for simplicity, so the current patch\nsupports only some built-in aggregates and checks if they can be used in IVM\nby their OIDs.\n\n* Hidden columns\n\nFor supporting aggregates, DISTINCT, and EXISTS, the current implementation\nautomatically creates hidden columns whose names start with \"__ivm_\" in\nmaterialized views.\n\nThe columns starting with \"__ivm_\" are hidden, so when \"SELECT * FROM ...\" is\nissued on a materialized view, these are invisible to users. Users cannot\nuse such names as user columns in materialized views with IVM support.\n\nAs for how to make internal columns invisible to SELECT *, there have\npreviously been discussions about doing that using a new flag in pg_attribute[3].\nHowever, the discussion is no longer active. So, we decided to use the column\nname to check whether a column is special or not in our IVM implementation\nfor now.\n\n* TRUNCATE support\n\nCurrently, TRUNCATE on base tables is not supported. When a TRUNCATE command\nis executed on a base table, it is ignored and nothing occurs on materialized\nviews.\n\nThere are other options, as follows:\n\n- Raise an error or warning when a base table is TRUNCATEd.\n- Make the view non-scannable (like REFRESH WITH NO DATA).\n- Actually update the view. It would be easy for inner joins\n  or aggregate views, but there is some difficulty with outer joins.\n\nWhich is the best way? Should we support TRUNCATE in the first version?\nAny suggestions would be greatly appreciated.\n\n[1] https://wiki.postgresql.org/wiki/Incremental_View_Maintenance\n[2] https://www.postgresql.org/message-id/20191129173328.e5a0e9f81e369a3769c4fd0c%40sraoss.co.jp\n[3] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 16 Oct 2020 19:30:34 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> * Aggregate support\n> \n> The current patch supports several built-in aggregates, that is, count, sum, \n> avg, min, and max. Other built-in aggregates or user-defined aggregates are\n> not supported.\n> \n> Aggregates in a materialized view definition is checked if this is supported\n> using OIDs of aggregate function. For this end, Gen_fmgrtab.pl is changed to\n> output aggregate function's OIDs to fmgroids.h\n> (by 0006-Change-Gen_fmgrtab.pl-to-output-aggregate-function-s.patch). \n> The logic for each aggregate function to update aggregate values in\n> materialized views is enbedded in a trigger function.\n> \n> There was another option in the past discussion. That is, we could add one\n> or more new attribute to pg_aggregate which provides information about if\n> each aggregate function supports IVM and its logic[2]. If we have a mechanism\n> to support IVM in pg_aggregate, we may use more general aggregate functions\n> including user-defined aggregate in materialized views for IVM.\n> \n> For example, the current pg_aggregate has aggcombinefn attribute for\n> supporting partial aggregation. Maybe we could use combine functions to\n> calculate new aggregate values in materialized views when tuples are\n> inserted into a base table. However, in the context of IVM, we also need\n> other function used when tuples are deleted from a base table, so we can not\n> use partial aggregation for IVM in the current implementation. \n> \n> Maybe, we could support the deletion case by adding a new support function,\n> say, \"inverse combine function\". The \"inverse combine function\" would take \n> aggregate value in a materialized view and aggregate value calculated from a\n> delta of view, and produces the new aggregate value which equals the result\n> after tuples in a base table are deleted.\n> \n> However, we don't have concrete plan for the new design of pg_aggregate.\n> In addition, even if make a new support function in pg_aggregate for IVM, \n> we can't use this in the current IVM code because our code uses SQL via SPI\n> in order to update a materialized view and we can't call \"internal\" type\n> function directly in SQL.\n> \n> For these reasons, in the current patch, we decided to left supporting\n> general aggregates to the next version for simplicity, so the current patch\n> supports only some built-in aggregates and checks if they can be used in IVM\n> by their OIDs.\n\nThe current patch for IVM is already large. I think implementing the above\nwill make the patch size even larger, which makes reviewers' work\ndifficult. So I personally think we should commit the patch as it is,\nthen enhance IVM to support user-defined and other aggregates in a later\nversion of PostgreSQL.\n\nHowever, if supporting user-defined and other aggregates is quite\nimportant for certain users, then we should rethink this. It\nwould be nice if we could know how high such demand is.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 19 Oct 2020 12:24:32 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Adam Brusselback,\n\nOn Mon, 31 Dec 2018 11:20:11 -0500\nAdam Brusselback <adambrusselback@gmail.com> wrote:\n\n> Hi all, just wanted to say I am very happy to see progress made on this,\n> my codebase has multiple \"materialized tables\" which are maintained with\n> statement triggers (transition tables) and custom functions. They are ugly\n> and a pain to maintain, but they work because I have no other\n> solution...for now at least.\n\nWe want to find suitable use cases for the IVM patch being discussed in this\nthread, and I remembered your post that said you used statement triggers and\ncustom functions. We hope the patch will help you.\n\nThe patch implements the immediate, that is, eager approach to IVM. Materialized\nviews are updated immediately when their base tables are modified. While the view\nis always up-to-date, there is an overhead on base table modification. \n\nWe would appreciate it if you could tell us what your use cases for materialized\nviews are and whether our implementation suits your needs or not.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 22 Oct 2020 12:21:26 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hey there Yugo,\nI've asked a coworker to prepare a self contained example that encapsulates\nour multiple use cases.\n\nThe immediate/eager approach is exactly what we need, as within the same\ntransaction we have statements that can cause one of those \"materialized\ntables\" to be updated, and then sometimes have the need to query that\n\"materialized table\" in a subsequent statement and need to see the changes\nreflected.\n\nAs soon as my coworker gets that example built up I'll send a followup with\nit attached.\nThank you,\nAdam Brusselback",
"msg_date": "Thu, 22 Oct 2020 10:07:29 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Adam,\n\nOn Thu, 22 Oct 2020 10:07:29 -0400\nAdam Brusselback <adambrusselback@gmail.com> wrote:\n\n> Hey there Yugo,\n> I've asked a coworker to prepare a self contained example that encapsulates\n> our multiple use cases.\n\nThank you very much!\n\n> The immediate/eager approach is exactly what we need, as within the same\n> transaction we have statements that can cause one of those \"materialized\n> tables\" to be updated, and then sometimes have the need to query that\n> \"materialized table\" in a subsequent statement and need to see the changes\n> reflected.\n\nThe proposed patch provides exactly this feature and I think it will meet\nyour needs.\n\n> As soon as my coworker gets that example built up I'll send a followup with\n> it attached.\n\nGreat! We are looking forward to it. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 23 Oct 2020 16:57:26 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "That was a good bit more work to get ready than I expected. It's broken\ninto two scripts, one to create the schema, the other to load data and\ncontaining a couple of check queries to ensure things are working properly\n(checking the materialized tables against a regular view for accuracy).\n\nThe first test case is to give us a definitive result on what \"agreed\npricing\" is in effect at a point in time based on a product hierarchy\nour customers set up, and allow pricing to be set on nodes in that\nhierarchy, as well as specific products (with an order of precedence).\nThe second test case maintains some aggregated amounts / counts / boolean\nlogic at an \"invoice\" level for all the detail lines which make up that\ninvoice.\n\nBoth of these are real-world use cases which were simplified a bit to make\nthem easier to understand. We have other use cases as well, but with how\nmuch time this took to prepare I'll keep it at this for now.\nIf you need anything clarified or have any issues, just let me know.\n\nOn Fri, Oct 23, 2020 at 3:58 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi Adam,\n>\n> On Thu, 22 Oct 2020 10:07:29 -0400\n> Adam Brusselback <adambrusselback@gmail.com> wrote:\n>\n> > Hey there Yugo,\n> > I've asked a coworker to prepare a self contained example that\n> encapsulates\n> > our multiple use cases.\n>\n> Thank you very much!\n>\n> > The immediate/eager approach is exactly what we need, as within the same\n> > transaction we have statements that can cause one of those \"materialized\n> > tables\" to be updated, and then sometimes have the need to query that\n> > \"materialized table\" in a subsequent statement and need to see the\n> changes\n> > reflected.\n>\n> The proposed patch provides the exact this feature and I think this will\n> meet\n> your needs.\n>\n> > As soon as my coworker gets that example built up I'll send a followup\n> with\n> > it attached.\n>\n> Great! We are looking forward to it.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>",
"msg_date": "Tue, 27 Oct 2020 12:14:52 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Anastasia Lubennikova,\n\nI am writing this to you because I would like to ask the commitfest\nmanager something. \n\nThe status of the patch was changed to \"Waiting on Author\" from\n\"Ready for Committer\" at the beginning of this month because a\nrebase was necessary. Now I have updated the patch, so can I change\nthe status back to \"Ready for Committer\"?\n\nRegards,\nYugo Nagata\n\nOn Mon, 5 Oct 2020 18:16:18 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> Attached is the rebased patch (v18) to add support for Incremental\n> Materialized View Maintenance (IVM). It is able to be applied to\n> current latest master branch.\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 28 Oct 2020 14:00:51 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 27 Oct 2020 12:14:52 -0400\nAdam Brusselback <adambrusselback@gmail.com> wrote:\n\n> That was a good bit more work to get ready than I expected. It's broken\n> into two scripts, one to create the schema, the other to load data and\n> containing a couple check queries to ensure things are working properly\n> (checking the materialized tables against a regular view for accuracy).\n\nThank you very much! I am really grateful.\n \n> The first test case is to give us a definitive result on what \"agreed\n> pricing\" is in effect at a point in time based on a product hierarchy\n> our customers setup, and allow pricing to be set on nodes in that\n> hierarchy, as well as specific products (with an order of precedence).\n> The second test case maintains some aggregated amounts / counts / boolean\n> logic at an \"invoice\" level for all the detail lines which make up that\n> invoice.\n> \n> Both of these are real-world use cases which were simplified a bit to make\n> them easier to understand. We have other use cases as well, but with how\n> much time this took to prepare i'll keep it at this for now.\n> If you need anything clarified or have any issues, just let me know.\n\nAlthough I have not looked into it in detail yet, in my understanding, it seems\nthat materialized views are used to show \"pricing\" or \"invoice\" information before\nthe order is confirmed, that is, before the transaction is committed. Definitely,\nthese will be use cases where immediate view maintenance is useful. \n\nI am happy because I found concrete use cases for immediate IVM. However, \nunfortunately, the view definitions in your cases are complex, and the current\nimplementation of the patch doesn't support them. We would like to improve the\nfeature in the future so that more complex views can benefit from IVM.\n\nRegards,\nYugo Nagata\n\n> On Fri, Oct 23, 2020 at 3:58 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi Adam,\n> >\n> > On Thu, 22 Oct 2020 10:07:29 -0400\n> > Adam Brusselback <adambrusselback@gmail.com> wrote:\n> >\n> > > Hey there Yugo,\n> > > I've asked a coworker to prepare a self contained example that\n> > encapsulates\n> > > our multiple use cases.\n> >\n> > Thank you very much!\n> >\n> > > The immediate/eager approach is exactly what we need, as within the same\n> > > transaction we have statements that can cause one of those \"materialized\n> > > tables\" to be updated, and then sometimes have the need to query that\n> > > \"materialized table\" in a subsequent statement and need to see the\n> > changes\n> > > reflected.\n> >\n> > The proposed patch provides the exact this feature and I think this will\n> > meet\n> > your needs.\n> >\n> > > As soon as my coworker gets that example built up I'll send a followup\n> > with\n> > > it attached.\n> >\n> > Great! We are looking forward to it.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> >\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 28 Oct 2020 17:26:40 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 28 Oct 2020 at 08:02, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi Anastasia Lubennikova,\n>\n> I am writing this to you because I would like to ask the commitfest\n> manager something.\n>\n> The status of the patch was changed to \"Waiting on Author\" from\n> \"Ready for Committer\" at the beginning of this montfor the reason\n> that rebase was necessary. Now I updated the patch, so can I change\n> the status back to \"Ready for Committer\"?\n>\n> Regards,\n> Yugo Nagata\n>\n>\nYes, go ahead. As far as I see, the patch is in good shape and there are\nno unanswered questions from reviewers.\nFeel free to change the status of CF entries when it seems reasonable to\nyou.\n\nP.S. Please avoid top-posting; it makes it harder to follow the\ndiscussion. In-line replies are customary in pgsql mailing lists.\nSee https://en.wikipedia.org/wiki/Posting_style#Top-posting for details.\n-- \nBest regards,\nLubennikova Anastasia",
"msg_date": "Wed, 28 Oct 2020 12:01:58 +0300",
"msg_from": "Anastasia Lubennikova <lubennikovaav@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 28 Oct 2020 12:01:58 +0300\nAnastasia Lubennikova <lubennikovaav@gmail.com> wrote:\n\n> ср, 28 окт. 2020 г. в 08:02, Yugo NAGATA <nagata@sraoss.co.jp>:\n> \n> > Hi Anastasia Lubennikova,\n> >\n> > I am writing this to you because I would like to ask the commitfest\n> > manager something.\n> >\n> > The status of the patch was changed to \"Waiting on Author\" from\n> > \"Ready for Committer\" at the beginning of this montfor the reason\n> > that rebase was necessary. Now I updated the patch, so can I change\n> > the status back to \"Ready for Committer\"?\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> >\n> Yes, go ahead. As far as I see, the patch is in a good shape and there are\n> no unanswered questions from reviewers.\n> Feel free to change the status of CF entries, when it seems reasonable to\n> you.\n\nThank you for your response! I get it.\n \n> P.S. Please, avoid top-posting, It makes it harder to follow the\n> discussion, in-line replies are customary in pgsql mailing lists.\n> See https://en.wikipedia.org/wiki/Posting_style#Top-posting for details.\n\nI understand it.\n\nRegards,\nYugo Nagata\n\n> -- \n> Best regards,\n> Lubennikova Anastasia\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 28 Oct 2020 18:11:46 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 06:16:18PM +0900, Yugo NAGATA wrote:\n> Hi,\n> \n> Attached is the rebased patch (v18) to add support for Incremental\n\nThis needs to be rebased again - the last version doesn't apply anymore.\nhttp://cfbot.cputube.org/yugo-nagata.html\n\nI looked through it a bit and attach some fixes to the user-facing docs.\n\nThere's some more typos in the source that I didn't fix:\nconstrains\nmaterliazied\ncluase\nimmediaite\nclumn\nTemrs\nmigth\nrecalculaetd\nspeified\nsecuirty\n\ncommit message: comletion\n\npsql and pg_dump say 13 but should say 14 now:\npset.sversion >= 130000 \n\n# bag union\nbig union?\n\n+ <structfield>relisivm</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ True if materialized view enables incremental view maintenance\n\nThis isn't clear, but I think it should say \"True for materialized views which\nare enabled for incremental view maintenance (IVM).\"\n\n-- \nJustin",
"msg_date": "Thu, 5 Nov 2020 22:58:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\n\nOn 05.10.2020 12:16, Yugo NAGATA wrote:\n> Hi,\n>\n> Attached is the rebased patch (v18) to add support for Incremental\n> Materialized View Maintenance (IVM). It is able to be applied to\n> current latest master branch.\n>\n\nThank you very much for this work.\nI consider incremental materialized views a \"reincarnation\" of OLAP \nhypercubes.\nThere are two approaches to making OLAP queries faster:\n1. speed up query execution (using JIT, columnar store, vector \noperations and parallel execution)\n2. precalculate requested data\n\nIncremental materialized views make it possible to implement the second \napproach. But how competitive is it?\nI do not know the current limitations of incremental materialized views, but \nI checked that basic OLAP functionality:\nJOIN+GROUP_BY+AGGREGATION is supported.\n\nThe patch does not apply to current master because the makeFuncCall \nprototype has changed;\nI fixed it by adding COERCE_EXPLICIT_CALL.\nThen I did the following simple test:\n\n1. Create pgbench database with scale 100.\npgbench speed at my desktop is about 10k TPS:\n\npgbench -M prepared -N -c 10 -j 4 -T 30 -P 1 postgres\ntps = 10194.951827 (including connections establishing)\n\n2. Then I created an incremental materialized view:\n\ncreate incremental materialized view teller_sums as select \nt.tid,sum(abalance) from pgbench_accounts a join pgbench_tellers t on \na.bid=t.bid group by t.tid;\nSELECT 1000\nTime: 20805.230 ms (00:20.805)\n\n20 seconds is a reasonable time, comparable with the time of database \ninitialization.\n\nThen obviously we see the advantages of precalculated aggregates:\n\npostgres=# select * from teller_sums where tid=1;\n tid |  sum\n-----+--------\n   1 | -96427\n(1 row)\n\nTime: 0.871 ms\npostgres=# select t.tid,sum(abalance) from pgbench_accounts a join \npgbench_tellers t on a.bid=t.bid group by t.tid having t.tid=1\n;\n tid |  sum\n-----+--------\n   1 | -96427\n(1 row)\n\nTime: 915.508 ms\n\nAmazing. Almost 1000 times difference!\n\n3. Run pgbench once again:\n\nOoops! Now TPS are much lower:\n\ntps = 141.767347 (including connections establishing)\n\nSpeed of updates is reduced more than 70 times!\nLooks like we lose parallelism, because I get almost the same result \nwith just one connection.\n\n4. Finally let's create one more view (it is reasonable to expect that \nanalytics will run many different queries and so need multiple views).\n\ncreate incremental materialized view teller_avgs as select \nt.tid,avg(abalance) from pgbench_accounts a join pgbench_tellers t on \na.bid=t.bid group by t.tid;\n\nIt is great that not only simple aggregates like SUM are supported, but \nalso AVG.\nBut insertion speed is now halved: 72 TPS.\n\nI did some profiling but didn't see anything unusual:\n\n  16.41%  postgres  postgres  [.] ExecInterpExpr\n   8.78%  postgres  postgres  [.] slot_deform_heap_tuple\n   3.23%  postgres  postgres  [.] ExecMaterial\n   2.71%  postgres  postgres  [.] AllocSetCheck\n   2.33%  postgres  postgres  [.] AllocSetAlloc\n   2.29%  postgres  postgres  [.] slot_getsomeattrs_int\n   2.26%  postgres  postgres  [.] ExecNestLoop\n   2.11%  postgres  postgres  [.] MemoryContextReset\n   1.98%  postgres  postgres  [.] tts_minimal_store_tuple\n   1.87%  postgres  postgres  [.] heap_compute_data_size\n   1.78%  postgres  postgres  [.] fill_val\n   1.56%  postgres  postgres  [.] tuplestore_gettuple\n   1.44%  postgres  postgres  [.] sentinel_ok\n   1.35%  postgres  postgres  [.] heap_fill_tuple\n   1.27%  postgres  postgres  [.] tuplestore_gettupleslot\n   1.17%  postgres  postgres  [.] ExecQual\n   1.14%  postgres  postgres  [.] tts_minimal_clear\n   1.13%  postgres  postgres  [.] CheckOpSlotCompatibility\n   1.10%  postgres  postgres  [.] base_yyparse\n   1.10%  postgres  postgres  [.] heapgetpage\n   1.04%  postgres  postgres  [.] heap_form_minimal_tuple\n   1.00%  postgres  postgres  [.] slot_getsomeattrs\n\nSo the good news is that incremental materialized views really work.\nAnd the bad news is that the maintenance overhead is too large, which \nsignificantly restricts the applicability of this approach.\nCertainly, in the case of a predominantly read-only workload, such materialized \nviews can significantly improve performance.\nBut unfortunately my dream that they would allow combining OLAP+OLTP is not \ncurrently realized.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 19:10:35 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Konstantin,\nI remember testing it with pg_stat_statements (and planning counters\nenabled). Maybe identifying the internal queries associated with this (simple)\ntest case could help the dev team?\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 11 Nov 2020 09:57:06 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 5 Nov 2020 22:58:25 -0600\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Oct 05, 2020 at 06:16:18PM +0900, Yugo NAGATA wrote:\n> This needs to be rebased again - the last version doesn't apply anymore.\n> http://cfbot.cputube.org/yugo-nagata.html\n\nI attached the rebased patch (v19).\n \n> I looked though it a bit and attach some fixes to the user-facing docs.\n\nThank you for pointing out a lot of typos and making a patch to fix them!\nYour fixes are included in the latest patch.\n \n> There's some more typos in the source that I didn't fix:\n> constrains\n> materliazied\n> cluase\n> immediaite\n> clumn\n> Temrs\n> migth\n> recalculaetd\n> speified\n> secuirty\n> \n> commit message: comletion\n>\n> psql and pg_dump say 13 but should say 14 now:\n> pset.sversion >= 130000 \n\nThese were also fixed.\n \n> # bag union\n> big union?\n\n\"bag union\" is the union operation on bags (multisets), which does not eliminate duplicate tuples.\n \n> + <structfield>relisivm</structfield> <type>bool</type>\n> + </para>\n> + <para>\n> + True if materialized view enables incremental view maintenance\n> \n> This isn't clear, but I think it should say \"True for materialized views which\n> are enabled for incremental view maintenance (IVM).\"\n\nYes, you are right. I also fixed it in this way.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Thu, 12 Nov 2020 17:47:48 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> 1. Create pgbench database with scale 100.\n> pgbench speed at my desktop is about 10k TPS:\n> \n> pgbench -M prepared -N -c 10 -j 4 -T 30 -P 1 postgres\n> tps = 10194.951827 (including connections establishing)\n> \n> 2. Then I created incremental materialized view:\n> \n> create incremental materialized view teller_sums as select\n> t.tid,sum(abalance) from pgbench_accounts a join pgbench_tellers t on\n> a.bid=t.bid group by t.tid;\n> SELECT 1000\n> Time: 20805.230 ms (00:20.805)\n> \n> 20 second is reasonable time, comparable with time of database\n> initialization.\n> \n> Then obviously we see advantages of precalculated aggregates:\n> \n> postgres=# select * from teller_sums where tid=1;\n> tid | sum\n> -----+--------\n> 1 | -96427\n> (1 row)\n> \n> Time: 0.871 ms\n> postgres=# select t.tid,sum(abalance) from pgbench_accounts a join\n> pgbench_tellers t on a.bid=t.bid group by t.tid having t.tid=1\n> ;\n> tid | sum\n> -----+--------\n> 1 | -96427\n> (1 row)\n> \n> Time: 915.508 ms\n> \n> Amazing. Almost 1000 times difference!\n> \n> 3. Run pgbench once again:\n> \n> Ooops! Now TPS are much lower:\n> \n> tps = 141.767347 (including connections establishing)\n> \n> Speed of updates is reduced more than 70 times!\n> Looks like we loose parallelism because almost the same result I get\n> with just one connection.\n\nHow much TPS do you get if you execute pgbench -c 1 without\nincremental materialized view defined? If it's around 141 then we\ncould surely confirm that the major bottle neck is locking contention.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 Nov 2020 20:53:55 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On 12.11.2020 14:53, Tatsuo Ishii wrote:\n>> 1. Create pgbench database with scale 100.\n>> pgbench speed at my desktop is about 10k TPS:\n>>\n>> pgbench -M prepared -N -c 10 -j 4 -T 30 -P 1 postgres\n>> tps = 10194.951827 (including connections establishing)\n>>\n>> 2. Then I created incremental materialized view:\n>>\n>> create incremental materialized view teller_sums as select\n>> t.tid,sum(abalance) from pgbench_accounts a join pgbench_tellers t on\n>> a.bid=t.bid group by t.tid;\n>> SELECT 1000\n>> Time: 20805.230 ms (00:20.805)\n>>\n>> 20 second is reasonable time, comparable with time of database\n>> initialization.\n>>\n>> Then obviously we see advantages of precalculated aggregates:\n>>\n>> postgres=# select * from teller_sums where tid=1;\n>> tid | sum\n>> -----+--------\n>> 1 | -96427\n>> (1 row)\n>>\n>> Time: 0.871 ms\n>> postgres=# select t.tid,sum(abalance) from pgbench_accounts a join\n>> pgbench_tellers t on a.bid=t.bid group by t.tid having t.tid=1\n>> ;\n>> tid | sum\n>> -----+--------\n>> 1 | -96427\n>> (1 row)\n>>\n>> Time: 915.508 ms\n>>\n>> Amazing. Almost 1000 times difference!\n>>\n>> 3. Run pgbench once again:\n>>\n>> Ooops! Now TPS are much lower:\n>>\n>> tps = 141.767347 (including connections establishing)\n>>\n>> Speed of updates is reduced more than 70 times!\n>> Looks like we loose parallelism because almost the same result I get\n>> with just one connection.\n> How much TPS do you get if you execute pgbench -c 1 without\n> incremental materialized view defined? 
If it's around 141 then we\n> could surely confirm that the major bottle neck is locking contention.\n>\n\nMy desktop has just 4 physical cores, so performance with one connection \nis about 2k TPS:\n\npgbench -M prepared -N -c 1 -T 60 -P 1 postgres\ntps = 1949.233532 (including connections establishing)\n\nSo there is still a large gap (~14 times) between insert speed \nwith/without the incremental view.\nI did more investigation and found out that one of the reasons of bad \nperformance in this case is the lack of an index on the materialized view,\nso each update has to perform a sequential scan through 1000 elements.\n\nWell, creation of proper indexes for a table is certainly the responsibility \nof the DBA.\nBut users may not consider a materialized view as a normal table. So the \nidea that an index should\nbe explicitly created for a materialized view seems to be not so obvious.\nOn the other hand, the implementation of materialized views knows which \nindex is needed for performing an efficient incremental update.\nI wonder if it can create such an index itself implicitly or at least \nproduce a notice with a proposal to create such an index.\n\nIn any case, after creation of an index on the tid column of the materialized view, \npgbench speed is increased from 141 to 331 TPS\n(more than two times). That is with a single connection. But if I run \npgbench with 10 connections, then performance is even slightly slower: \n289 TPS.\n\nI looked through your patch for exclusive table locks and found this \nfragment in matview.c:\n\n /*\n * Wait for concurrent transactions which update this materialized \nview at\n * READ COMMITED. This is needed to see changes committed in other\n * transactions. 
No wait and raise an error at REPEATABLE READ or\n * SERIALIZABLE to prevent update anomalies of matviews.\n * XXX: dead-lock is possible here.\n */\n if (!IsolationUsesXactSnapshot())\n LockRelationOid(matviewOid, ExclusiveLock);\n else if (!ConditionalLockRelationOid(matviewOid, ExclusiveLock))\n\n\nI replaced it with RowExclusiveLock and ... got 1437 TPS with 10 connections.\nIt is still about 7 times slower than performance without the incremental view.\nBut now the gap is not so dramatic. And it seems to be clear that this \nexclusive lock on the matview is the real show stopper for concurrent updates.\nI do not know which race conditions and anomalies we can get if we replace \nthe table-level lock with a row-level lock here.\nBut I think that this problem should be addressed in any case: the single-client \nupdate mode is a very rare scenario.\n\nI attached to this mail a profile of the pgbench workload with the \nincremental view defined (with index).\nMaybe you will find it useful.\n\n\nOne more disappointing observation about materialized views (now \nnon-incremental).\nThe time of creation of a non-incremental materialized view is about 18 seconds:\n\npostgres=# create materialized view teller_avgs as select \nt.tid,avg(abalance) from pgbench_accounts a join pgbench_tellers t on \na.bid=t.bid group by t.tid;\nSELECT 1000\nTime: 17795.395 ms (00:17.795)\n\nBut a refresh of such a view takes 55 seconds:\n\npostgres=# refresh materialized view teller_avgs;\nREFRESH MATERIALIZED VIEW\nTime: 55500.381 ms (00:55.500)\n\nAnd the refresh time doesn't depend on the amount of updates since the last refresh:\nI got almost the same time when I ran pgbench for one minute before the \nrefresh and\nwhen two refreshes are performed consecutively.\n\nAdding an index doesn't help much in this case and concurrent refresh is \neven slower:\n\npostgres=# refresh materialized view concurrently teller_avgs;\nREFRESH MATERIALIZED VIEW\nTime: 56981.772 ms (00:56.982)\n\nSo it seems to be more efficient to drop and recreate the materialized view
\nrather than refresh it. At least in this case.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 12 Nov 2020 15:37:42 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 11 Nov 2020 19:10:35 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\nThank you for reviewing this patch!\n\n> \n> The patch is not applied to the current master because makeFuncCall \n> prototype is changed,\n> I fixed it by adding COAERCE_CALL_EXPLICIT.\n\nThe rebased patch was submitted.\n\n> Ooops! Now TPS are much lower:\n> \n> tps = 141.767347 (including connections establishing)\n> \n> Speed of updates is reduced more than 70 times!\n> Looks like we loose parallelism because almost the same result I get \n> with just one connection.\n\nAs you and Ishii-san mentioned in other posts, I think the reason would be a\ntable lock on the materialized view that is acquired during view maintenance.\nI will explain more a bit in another post.\n\n> 4. Finally let's create one more view (it is reasonable to expect that \n> analytics will run many different queries and so need multiple views).\n> \n> create incremental materialized view teller_avgs as select \n> t.tid,avg(abalance) from pgbench_accounts a join pgbench_tellers t on \n> a.bid=t.bid group by t.tid;\n> \n> It is great that not only simple aggregates like SUM are supported, but \n> also AVG.\n> But insertion speed now is reduced twice - 72TPS.\n\nYes, the current implementation takes twice time for updating a table time\nwhen a new incrementally maintainable materialized view is defined on the\ntable because view maintenance is performed for each view.\n\n> \n> So good news is that incremental materialized views really work.\n> And bad news is that maintenance overhead is too large which \n> significantly restrict applicability of this approach.\n> Certainly in case of dominated read-only workload such materialized \n> views can significantly improve performance.\n> But unfortunately my dream that them allow to combine OLAP+OLPT is not \n> currently realized.\n\nAs you concluded, there is a large overhead on updating base tables in the\ncurrent implementation because it 
is immediate maintenance in which the view\nis updated in the same statement in which its base table is modified. Therefore,\nthis is not suitable for OLTP workloads where there are frequent updates of\ntables. \n\nTo suppress maintenance overhead in such workloads, we have to implement\n\"deferred maintenance\", which collects table change logs and updates the view\nin another transaction afterward.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 24 Nov 2020 18:21:14 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 12 Nov 2020 15:37:42 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> Well, creation of proper indexes for table is certainly responsibility \n> of DBA.\n> But users may not consider materialized view as normal table. So the \n> idea that index should\n> be explicitly created for materialized view seems to be not so obvious.\n> From the other side, implementation of materialized view knows which \n> index is needed for performing efficient incremental update.\n> I wonder if it can create such index itself implicitly or at least \n> produce notice with proposal to create such index.\n\nThat makes sense. Especially for aggregate views, it is obvious that\ncreating an index on expressions used in GROUP BY is effective. For\nother views, creating an index on columns that come from primary keys\nof base tables would be effective if any.\n\nHowever, if any base table doesn't have a primary or unique key or such\nkey column is not contained in the view's target list, it is hard to\ndecide an appropriate index on the view. We can create an index on all\ncolumns in the target list, but it could cause overhead on view maintenance. \nSo, just producing notice would be better for such cases. \n\n> I looked throw your patch for exclusive table locks and found this \n> fragment in matview.c:\n> \n> /*\n> * Wait for concurrent transactions which update this materialized \n> view at\n> * READ COMMITED. This is needed to see changes committed in other\n> * transactions. No wait and raise an error at REPEATABLE READ or\n> * SERIALIZABLE to prevent update anomalies of matviews.\n> * XXX: dead-lock is possible here.\n> */\n> if (!IsolationUsesXactSnapshot())\n> LockRelationOid(matviewOid, ExclusiveLock);\n> else if (!ConditionalLockRelationOid(matviewOid, ExclusiveLock))\n> \n> \n> I replaced it with RowExlusiveLock and ... 
got 1437 TPS with 10 connections.\n> It is still about 7 times slower than performance without incremental view.\n> But now the gap is not so dramatic. And it seems to be clear that this \n> exclusive lock on matview is real show stopper for concurrent updates.\n> I do not know which race conditions and anomalies we can get if replace \n> table-level lock with row-level lock here.\n\nI explained it here:\nhttps://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n \nFor example, suppose there is a view V = R*S that joins tables R and S,\nand there are two concurrent transactions T1 which changes table R to R'\nand T2 which changes S to S'. Without any lock, in READ COMMITTED mode,\nV would be updated to R'*S in T1, and R*S' in T2, so it would cause\ninconsistency. By locking the view V, transactions T1, T2 are processed\nserially and this inconsistency can be avoided.\n\nIn particular, suppose that tuple dR is inserted into R in T1, and dS is\ninserted into S in T2, where dR and dS will be joined according to\nthe view definition. In this situation, without any lock, the change of V is\ncomputed as dV=dR*S in T1, dV=R*dS in T2, respectively, and dR*dS would not\nbe included in the results.
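To make the missed dR*dS delta concrete, it can be modeled with a tiny script (an illustration only, not PostgreSQL or patch code; all names here are made up):

```python
# Toy model of the join-view anomaly described above: V = R joined with S.
# Illustration only -- not PostgreSQL or patch code; all names are made up.

R = [(1, "r0")]   # existing rows of R as (join_key, payload)
S = [(1, "s0")]   # existing rows of S

def join(xs, ys):
    """Natural join on the first column (the view definition V = R*S)."""
    return [(kx, a, b) for (kx, a) in xs for (ky, b) in ys if kx == ky]

V = join(R, S)    # current contents of the materialized view

dR = [(2, "r1")]  # T1 inserts dR into R
dS = [(2, "s1")]  # T2 inserts dS into S

# Without a lock, each transaction computes its delta against the other
# table's *old* snapshot:
dV1 = join(dR, S)   # T1 computes dR*S  -> empty
dV2 = join(R, dS)   # T2 computes R*dS  -> empty

V_after = V + dV1 + dV2
V_correct = join(R + dR, S + dS)   # should contain (2, "r1", "s1")

print(V_after == V_correct)   # False: the dR*dS tuple is missing
```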
This inconsistency could not be resolved by a\nrow-level lock.\n\n> But I think that this problem should be addressed in any case: single \n> client update mode is very rare scenario.\n\nThis behavior is explained in rules.sgml like this:\n\n+<sect2>\n+<title>Concurrent Transactions</title>\n+<para>\n+ Suppose an <acronym>IMMV</acronym> is defined on two base tables and each\n+ table was modified in different a concurrent transaction simultaneously.\n+ In the transaction which was committed first, <acronym>IMMV</acronym> can \n+ be updated considering only the change which happened in this transaction.\n+ On the other hand, in order to update the view correctly in the transaction\n+ which was committed later, we need to know the changes occurred in\n+ both transactions. For this reason, <literal>ExclusiveLock</literal>\n+ is held on an <acronym>IMMV</acronym> immediately after a base table is\n+ modified in <literal>READ COMMITTED</literal> mode to make sure that\n+ the <acronym>IMMV</acronym> is updated in the latter transaction after\n+ the former transaction is committed. In <literal>REPEATABLE READ</literal>\n+ or <literal>SERIALIZABLE</literal> mode, an error is raised immediately\n+ if lock acquisition fails because any changes which occurred in\n+ other transactions are not be visible in these modes and \n+ <acronym>IMMV</acronym> cannot be updated correctly in such situations.\n+</para>\n+</sect2>\n\nHowever, should we explicitly describe its impact on performance here?\n \n> I attached to this mail profile of pgbench workload with defined \n> incremental view (with index).\n> May be you will find it useful.\n\nThank you for your profiling! Hmm, it shows that the overhead of executing the\nquery for calculating the delta (refresh_matview_datafill) and applying\nthe delta (SPI_exec) is large....
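As a rough illustration of where that cost goes: immediate maintenance pays the delta computation and apply step inside every updating statement (a toy model only, not the patch's actual code; the function here is made up):

```python
# Toy model of immediate incremental view maintenance (IVM) for an
# aggregate view like: SELECT tid, sum(abalance) ... GROUP BY tid.
# Illustration only -- not the patch's actual implementation.

view = {}  # group key -> running sum

def maintain(delta_rows):
    """Compute and apply the view delta for one base-table statement.

    In immediate maintenance this runs inside every statement that
    modifies the base table (delta query plus apply, under a view
    lock), which is why base-table update TPS drops once an IMMV
    is defined.
    """
    for key, amount in delta_rows:
        view[key] = view.get(key, 0) + amount

# Two base-table "statements", each paying the maintenance cost:
maintain([(1, 200)])
maintain([(1, 300), (2, -50)])
print(view)   # {1: 500, 2: -50}
```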
I will investigate whether more optimizations\nto reduce the overhead are possible.\n\n> \n> One more disappointing observation of materialized views (now \n> non-incremental).\n> Time of creation of non-incremental materialized view is about 18 seconds:\n> \n> postgres=# create materialized view teller_avgs as select \n> t.tid,avg(abalance) from pgbench_accounts a join pgbench_tellers t on \n> a.bid=t.bid group by t.tid;\n> SELECT 1000\n> Time: 17795.395 ms (00:17.795)\n> \n> But refresh of such view takes 55 seconds:\n> \n> postgres=# refresh materialized view teller_avgs;\n> REFRESH MATERIALIZED VIEW\n> Time: 55500.381 ms (00:55.500)\n\nHmm, interesting... I would like to investigate this issue, too.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 24 Nov 2020 18:21:33 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\n\nOn 24.11.2020 12:21, Yugo NAGATA wrote:\n>\n>> I replaced it with RowExlusiveLock and ... got 1437 TPS with 10 connections.\n>> It is still about 7 times slower than performance without incremental view.\n>> But now the gap is not so dramatic. And it seems to be clear that this\n>> exclusive lock on matview is real show stopper for concurrent updates.\n>> I do not know which race conditions and anomalies we can get if replace\n>> table-level lock with row-level lock here.\n> I explained it here:\n> https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> \n> For example, suppose there is a view V = R*S that joins tables R and S,\n> and there are two concurrent transactions T1 which changes table R to R'\n> and T2 which changes S to S'. Without any lock, in READ COMMITTED mode,\n> V would be updated to R'*S in T1, and R*S' in T2, so it would cause\n> inconsistency. By locking the view V, transactions T1, T2 are processed\n> serially and this inconsistency can be avoided.\n>\n> Especially, suppose that tuple dR is inserted into R in T1, and dS is\n> inserted into S in T2, where dR and dS will be joined in according to\n> the view definition. In this situation, without any lock, the change of V is\n> computed as dV=dR*S in T1, dV=R*dS in T2, respectively, and dR*dS would not\n> be included in the results. 
This inconsistency could not be resolved by\n> row-level lock.\n>\n>> But I think that this problem should be addressed in any case: single\n>> client update mode is very rare scenario.\n> This behavior is explained in rules.sgml like this:\n>\n> +<sect2>\n> +<title>Concurrent Transactions</title>\n> +<para>\n> + Suppose an <acronym>IMMV</acronym> is defined on two base tables and each\n> + table was modified in different a concurrent transaction simultaneously.\n> + In the transaction which was committed first, <acronym>IMMV</acronym> can\n> + be updated considering only the change which happened in this transaction.\n> + On the other hand, in order to update the view correctly in the transaction\n> + which was committed later, we need to know the changes occurred in\n> + both transactions. For this reason, <literal>ExclusiveLock</literal>\n> + is held on an <acronym>IMMV</acronym> immediately after a base table is\n> + modified in <literal>READ COMMITTED</literal> mode to make sure that\n> + the <acronym>IMMV</acronym> is updated in the latter transaction after\n> + the former transaction is committed. 
In <literal>REPEATABLE READ</literal>\n> + or <literal>SERIALIZABLE</literal> mode, an error is raised immediately\n> + if lock acquisition fails because any changes which occurred in\n> + other transactions are not be visible in these modes and\n> + <acronym>IMMV</acronym> cannot be updated correctly in such situations.\n> +</para>\n> +</sect2>\n>\n> Hoever, should we describe explicitly its impact on performance here?\n> \n\nSorry, I didn't think much about this problem.\nBut I think that it is very important to try to find some solution to \nthe problem.\nThe most obvious optimization is not to use an exclusive table lock if the view \ndepends on just one table (contains no joins).\nIt looks like there aren't any anomalies in this case, are there?\n\nYes, most analytic queries contain joins (just two queries among the 22 \nTPC-H queries have no joins).\nSo maybe this optimization will not help much.\n\nI wonder if it is possible to somehow use the predicate locking mechanism of \nPostgres to avoid these anomalies without a global lock?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 24 Nov 2020 12:46:57 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 24 Nov 2020 12:46:57 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> \n> \n> On 24.11.2020 12:21, Yugo NAGATA wrote:\n> >\n> >> I replaced it with RowExlusiveLock and ... got 1437 TPS with 10 connections.\n> >> It is still about 7 times slower than performance without incremental view.\n> >> But now the gap is not so dramatic. And it seems to be clear that this\n> >> exclusive lock on matview is real show stopper for concurrent updates.\n> >> I do not know which race conditions and anomalies we can get if replace\n> >> table-level lock with row-level lock here.\n> > I explained it here:\n> > https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > \n> > For example, suppose there is a view V = R*S that joins tables R and S,\n> > and there are two concurrent transactions T1 which changes table R to R'\n> > and T2 which changes S to S'. Without any lock, in READ COMMITTED mode,\n> > V would be updated to R'*S in T1, and R*S' in T2, so it would cause\n> > inconsistency. By locking the view V, transactions T1, T2 are processed\n> > serially and this inconsistency can be avoided.\n> >\n> > Especially, suppose that tuple dR is inserted into R in T1, and dS is\n> > inserted into S in T2, where dR and dS will be joined in according to\n> > the view definition. In this situation, without any lock, the change of V is\n> > computed as dV=dR*S in T1, dV=R*dS in T2, respectively, and dR*dS would not\n> > be included in the results. 
This inconsistency could not be resolved by\n> > row-level lock.\n> >\n> >> But I think that this problem should be addressed in any case: single\n> >> client update mode is very rare scenario.\n> > This behavior is explained in rules.sgml like this:\n> >\n> > +<sect2>\n> > +<title>Concurrent Transactions</title>\n> > +<para>\n> > + Suppose an <acronym>IMMV</acronym> is defined on two base tables and each\n> > + table was modified in different a concurrent transaction simultaneously.\n> > + In the transaction which was committed first, <acronym>IMMV</acronym> can\n> > + be updated considering only the change which happened in this transaction.\n> > + On the other hand, in order to update the view correctly in the transaction\n> > + which was committed later, we need to know the changes occurred in\n> > + both transactions. For this reason, <literal>ExclusiveLock</literal>\n> > + is held on an <acronym>IMMV</acronym> immediately after a base table is\n> > + modified in <literal>READ COMMITTED</literal> mode to make sure that\n> > + the <acronym>IMMV</acronym> is updated in the latter transaction after\n> > + the former transaction is committed. In <literal>REPEATABLE READ</literal>\n> > + or <literal>SERIALIZABLE</literal> mode, an error is raised immediately\n> > + if lock acquisition fails because any changes which occurred in\n> > + other transactions are not be visible in these modes and\n> > + <acronym>IMMV</acronym> cannot be updated correctly in such situations.\n> > +</para>\n> > +</sect2>\n> >\n> > Hoever, should we describe explicitly its impact on performance here?\n> > \n> \n> Sorry, I didn't think much about this problem.\n> But I think that it is very important to try to find some solution of \n> the problem.\n> The most obvious optimization is not to use exclusive table lock if view \n> depends just on one table (contains no joins).\n> Looks like there are no any anomalies in this case, are there?\n\nThank you for your suggestion! 
That makes sense.\n \n> Yes, most analytic queries contain joins (just two queries among 22 \n> TPC-H have no joins).\n> So may be this optimization will not help much.\n\nYes, but if a user wants to incrementally maintain only aggregate views on a large\ntable, like TPC-H Q1, it will be helpful. For this optimization, we only have to\ncheck the number of RTEs in the rtable list, and it would be cheap.\n\n> I wonder if it is possible to somehow use predicate locking mechanism of \n> Postgres to avoid this anomalies without global lock?\n\nYou mean that, instead of using any table lock, if any possibility of the\nanomaly is detected using the predicate lock mechanism then we abort the transaction?\n\nI don't have a concrete idea of how to implement it, and I don't know if it is\npossible yet, but I think it is worth considering. Thanks.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 24 Nov 2020 19:11:38 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\n\nOn 24.11.2020 13:11, Yugo NAGATA wrote:\n>\n>> I wonder if it is possible to somehow use predicate locking mechanism of\n>> Postgres to avoid this anomalies without global lock?\n> You mean that, ,instead of using any table lock, if any possibility of the\n> anomaly is detected using predlock mechanism then abort the transaction?\n\nYes. If both transactions are using serializable isolation level, then \nlock is not needed, isn't it?\nSo at least you can add yet another simple optimization: if transaction \nhas serializable isolation level,\nthen exclusive lock is not required.\n\nBut I wonder if we can go further so that even if transaction is using \nread-committed or repeatable-read isolation level,\nwe still can replace exclusive table lock with predicate locks.\n\nThe main problem with this approach (from my point of view) is the \npredicate locks are able to detect conflict but not able to prevent it.\nI.e. if such conflict is detected then transaction has to be aborted.\nAnd it is not always desirable, especially because user doesn't expect \nit: how can insertion of single record with unique keys in a table cause \ntransaction conflict?\nAnd this is what will happen in your example with transactions T1 and T2 \ninserting records in R and S tables.\n\nAnd what do you think about backrgound update of materialized view?\nOn update/insert trigger will just add record to some \"delta\" table and \nthen some background worker will update view.\nCertainly in this case we loose synchronization between main table and \nmaterialized view (last one may contain slightly deteriorated data).\nBut in this case no exclusive lock is needed, isn't it?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 25 Nov 2020 15:16:05 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 25 Nov 2020 15:16:05 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> \n> \n> On 24.11.2020 13:11, Yugo NAGATA wrote:\n> >\n> >> I wonder if it is possible to somehow use predicate locking mechanism of\n> >> Postgres to avoid this anomalies without global lock?\n> > You mean that, ,instead of using any table lock, if any possibility of the\n> > anomaly is detected using predlock mechanism then abort the transaction?\n> \n> Yes. If both transactions are using serializable isolation level, then \n> lock is not needed, isn't it?\n> So at least you can add yet another simple optimization: if transaction \n> has serializable isolation level,\n> then exclusive lock is not required.\n\nAs long as we use the trigger approach, we can't handle concurrent view maintenance\nin either repeatable read or serializable isolation level. It is because one\ntransaction (R= R+dR) cannot see changes occurred in another transaction (S'= S+dS)\nin such cases, and we cannot get the incremental change on the view (dV=dR*dS). \nTherefore, in the current implementation, the transaction is aborted when the\nconcurrent view maintenance happens in repeatable read or serializable.\n \n> But I wonder if we can go further so that even if transaction is using \n> read-committed or repeatable-read isolation level,\n> we still can replace exclusive table lock with predicate locks.\n> \n> The main problem with this approach (from my point of view) is the \n> predicate locks are able to detect conflict but not able to prevent it.\n> I.e. if such conflict is detected then transaction has to be aborted.\n> And it is not always desirable, especially because user doesn't expect \n> it: how can insertion of single record with unique keys in a table cause \n> transaction conflict?\n> And this is what will happen in your example with transactions T1 and T2 \n> inserting records in R and S tables.\n\nYes. 
I suspect that either aborting the transaction or waiting on locks is unavoidable\nwhen a view is incrementally updated concurrently (at least in the immediate\nmaintenance where a view is updated in the same transaction that updates the base\ntable).\n \n> And what do you think about backrgound update of materialized view?\n> On update/insert trigger will just add record to some \"delta\" table and \n> then some background worker will update view.\n> Certainly in this case we loose synchronization between main table and \n> materialized view (last one may contain slightly deteriorated data).\n> But in this case no exclusive lock is needed, isn't it?\n\nOf course, we are considering this type of view maintenance. This is\ndeferred maintenance, where a view is updated after the transaction\nthat updates the base tables is committed. Views can be updated in the\nbackground at an appropriate time or in response to a user command.\n\nTo implement this, we need a mechanism to maintain change logs which\nrecord changes of the base tables. We think that implementing this infrastructure\nis not trivial work, so, in the first patch proposal, we decided to start from\nthe immediate approach, which needs less code. \n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 25 Nov 2020 22:06:28 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\n\nOn 25.11.2020 16:06, Yugo NAGATA wrote:\n> On Wed, 25 Nov 2020 15:16:05 +0300\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n>\n>>\n>> On 24.11.2020 13:11, Yugo NAGATA wrote:\n>>>> I wonder if it is possible to somehow use predicate locking mechanism of\n>>>> Postgres to avoid this anomalies without global lock?\n>>> You mean that, ,instead of using any table lock, if any possibility of the\n>>> anomaly is detected using predlock mechanism then abort the transaction?\n>> Yes. If both transactions are using serializable isolation level, then\n>> lock is not needed, isn't it?\n>> So at least you can add yet another simple optimization: if transaction\n>> has serializable isolation level,\n>> then exclusive lock is not required.\n> As long as we use the trigger approach, we can't handle concurrent view maintenance\n> in either repeatable read or serializable isolation level. It is because one\n> transaction (R= R+dR) cannot see changes occurred in another transaction (S'= S+dS)\n> in such cases, and we cannot get the incremental change on the view (dV=dR*dS).\n> Therefore, in the current implementation, the transaction is aborted when the\n> concurrent view maintenance happens in repeatable read or serializable.\n\nSorry, maybe I do not understand you correctly, or you do not understand me.\nLet's consider two serializable transactions (I do not use views or \ntriggers, but perform the corresponding updates manually):\n\n\n\ncreate table t(pk integer, val int);\ncreate table mat_view(gby_key integer primary key, total bigint);\ninsert into t values (1,0),(2,0);\ninsert into mat_view values (1,0),(2,0);\n\nSession 1: Session 2:\n\nbegin isolation level serializable;\nbegin isolation level serializable;\ninsert into t values (1,200); insert into t \nvalues (1,300);\nupdate mat_view set total=total+200 where gby_key=1;\nupdate mat_view set total=total+300 where gby_key=1;\n<blocked>\ncommit;\nERROR: could not serialize access due to concurrent update\n\nSo both transactions are aborted.\nThis is the expected behavior for serializable transactions.\nBut if the transactions update different records of mat_view, then they \ncan be executed concurrently:\n\nSession 1: Session 2:\n\nbegin isolation level serializable;\nbegin isolation level serializable;\ninsert into t values (1,200); insert into t \nvalues (2,300);\nupdate mat_view set total=total+200 where gby_key=1;\nupdate mat_view set total=total+300 where gby_key=2;\ncommit; commit;\n\nSo, if the transactions are using the serializable isolation level, then we can \nupdate the mat view without an exclusive lock,\nand if there is no conflict, the transactions can be executed concurrently.\n\nPlease notice that an exclusive lock doesn't prevent the conflict in the first case:\n\nSession 1: Session 2:\n\nbegin isolation level serializable;\nbegin isolation level serializable;\ninsert into t values (1,200); insert into t \nvalues (1,300);\nlock table mat_view;\nupdate mat_view set total=total+200 where gby_key=1;\nlock table mat_view;\n<blocked>\ncommit;\nupdate mat_view set total=total+300 where gby_key=1;\ncommit;\nERROR: could not serialize access due to concurrent update\n\n\nSo do you agree that there is no reason for using an explicit lock for \nserializable transactions?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 25 Nov 2020 18:00:16 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 25 Nov 2020 18:00:16 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> \n> \n> On 25.11.2020 16:06, Yugo NAGATA wrote:\n> > On Wed, 25 Nov 2020 15:16:05 +0300\n> > Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> >\n> >>\n> >> On 24.11.2020 13:11, Yugo NAGATA wrote:\n> >>>> I wonder if it is possible to somehow use predicate locking mechanism of\n> >>>> Postgres to avoid this anomalies without global lock?\n> >>> You mean that, ,instead of using any table lock, if any possibility of the\n> >>> anomaly is detected using predlock mechanism then abort the transaction?\n> >> Yes. If both transactions are using serializable isolation level, then\n> >> lock is not needed, isn't it?\n> >> So at least you can add yet another simple optimization: if transaction\n> >> has serializable isolation level,\n> >> then exclusive lock is not required.\n> > As long as we use the trigger approach, we can't handle concurrent view maintenance\n> > in either repeatable read or serializable isolation level. It is because one\n> > transaction (R= R+dR) cannot see changes occurred in another transaction (S'= S+dS)\n> > in such cases, and we cannot get the incremental change on the view (dV=dR*dS).\n> > Therefore, in the current implementation, the transaction is aborted when the\n> > concurrent view maintenance happens in repeatable read or serializable.\n> \n> Sorry, may be I do not correctly understand you or you do not understand me.\n> Lets consider two serializable transactions (I do not use view or \n> triggers, but perform correspondent updates manually):\n> \n> \n> \n> create table t(pk integer, val int);\n> create table mat_view(gby_key integer primary key, total bigint);\n> insert into t values (1,0),(2,0);\n> insert into mat_view values (1,0),(2,0);\n> \n> Session 1: Session 2:\n> \n> begin isolation level serializable;\n> begin isolation level serializable;\n> insert into t values (1,200); insert into t \n> values (1,300);\n> update mat_view set total=total+200 where gby_key=1;\n> update mat_view set total=total+300 where gby_key=1;\n> <blocked>\n> commit;\n> ERROR: could not serialize access due to concurrent update\n> \n> So both transactions are aborted.\n> It is expected behavior for serializable transactions.\n> But if transactions updating different records of mat_view, then them \n> can be executed concurrently:\n> \n> Session 1: Session 2:\n> \n> begin isolation level serializable;\n> begin isolation level serializable;\n> insert into t values (1,200); insert into t \n> values (2,300);\n> update mat_view set total=total+200 where gby_key=1;\n> update mat_view set total=total+300 where gby_key=2;\n> commit; commit;\n> \n> So, if transactions are using serializable isolation level, then we can \n> update mat view without exclusive lock\n> and if there is not conflict, this transaction can be executed concurrently.\n> \n> Please notice, that exclusive lock doesn't prevent conflict in first case:\n> \n> Session 1: Session 2:\n> \n> begin isolation level serializable;\n> begin isolation level serializable;\n> insert into t values (1,200); insert into t \n> values (1,300);\n> lock table mat_view;\n> update mat_view set total=total+200 where gby_key=1;\n> lock table mat_view;\n> <blocked>\n> commit;\n> update mat_view set total=total+300 where gby_key=1;\n> commit;\n> ERROR: could not serialize access due to concurrent update\n> \n> \n> So do you agree that there are no reasons for using explicit lock for \n> serializable transactions?\n\nYes, I agree. I said an anomaly could occur in repeatable read and serializable\nisolation level, but it was wrong. In serializable, the transaction will be\naborted in problematic cases due to predicate locks, and we don't need the lock. \n\nHowever, in repeatable read, the anomaly could still occur when the view is\ndefined on more than one base table even if we lock the view. To prevent it,\nthe only way I found is forcibly aborting the transaction in such cases for now.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 30 Nov 2020 11:52:05 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the revised patch (v20) to add support for Incremental\nMaterialized View Maintenance (IVM).\n\nIn accordance with Konstantin's suggestion, I made a few optimizations.\n\n1. Creating an index on the matview automatically\n\nWhen creating incrementally maintainable materialized views (IMMVs),\na unique index on the IMMV is created automatically if possible.\n\nIf the view definition query has a GROUP BY clause, the index is created\non the columns of GROUP BY expressions. Otherwise, if the view contains\nall primary key attributes of its base tables in the target list, the index\nis created on these attributes. Also, if the view has DISTINCT,\na unique index is created on all columns in the target list.\nIn other cases, no index is created.\n\nIn all cases, a NOTICE message is output to inform users that an index is\ncreated or that an appropriate index is necessary for efficient IVM.\n\n2. Use a weaker lock on the matview if possible\n\nIf the view has only one base table in this query, RowExclusiveLock is\nheld on the view instead of AccessExclusiveLock, because we don't\nneed to wait for other concurrent transactions' results in order to\nmaintain the view in this case. When the same row in the view is\naffected due to concurrent maintenance, a row-level lock will\nprotect it.\n\nOn Tue, 24 Nov 2020 12:46:57 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> The most obvious optimization is not to use exclusive table lock if view \n> depends just on one table (contains no joins).\n> Looks like there are no any anomalies in this case, are there?\n\nI confirmed the effect of these optimizations.\n\nFirst, when I performed pgbench (SF=100) without any materialized views,\nthe result is:\n \n pgbench test4 -T 300 -c 8 -j 4\n latency average = 6.493 ms\n tps = 1232.146229 (including connections establishing)\n\nNext, after creating a view as below, I performed the same pgbench.\n CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm2 AS\n SELECT bid, count(abalance), sum(abalance), avg(abalance)\n FROM pgbench_accounts GROUP BY bid;\n\nThe result is here:\n\n[the previous version (v19 with exclusive table lock)]\n - latency average = 77.677 ms\n - tps = 102.990159 (including connections establishing)\n\n[In the latest version (v20 with weaker lock)]\n - latency average = 17.576 ms\n - tps = 455.159644 (including connections establishing)\n\nThere is still substantial overhead, but we can see the effect\nof the optimization.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 22 Dec 2020 21:51:36 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi hackers,\n\nI heard the opinion that this patch is too big and hard to review.\nSo, I wonder whether we should downsize the patch by eliminating some\nfeatures and leaving the other basic features.\n\nIf there are more opinions that this would make it easier for reviewers to look\nat this patch, I would like to do it. If so, we plan to support only\nselection, projection, inner-join, and some aggregates in the first\nrelease and leave sub-query, outer-join, and CTE support to the\nnext release.\n\nRegards,\nYugo Nagata\n\nOn Tue, 22 Dec 2020 21:51:36 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n> Hi,\n> \n> Attached is the revised patch (v20) to add support for Incremental\n> Materialized View Maintenance (IVM).\n> \n> In according with Konstantin's suggestion, I made a few optimizations.\n> \n> 1. Creating an index on the matview automatically\n> \n> When creating incremental maintainable materialized view (IMMV)s,\n> a unique index on IMMV is created automatically if possible.\n> \n> If the view definition query has a GROUP BY clause, the index is created\n> on the columns of GROUP BY expressions. Otherwise, if the view contains\n> all primary key attributes of its base tables in the target list, the index\n> is created on these attributes. Also, if the view has DISTINCT,\n> a unique index is created on all columns in the target list.\n> In other cases, no index is created.\n> \n> In all cases, a NOTICE message is output to inform users that an index is\n> created or that an appropriate index is necessary for efficient IVM.\n> \n> 2. Use a weaker lock on the matview if possible\n> \n> If the view has only one base table in this query, RowExclusiveLock is\n> held on the view instead of AccessExclusiveLock, because we don't\n> need to wait other concurrent transaction's result in order to\n> maintain the view in this case. When the same row in the view is\n> affected due to concurrent maintenances, a row level lock will\n> protect it.\n> \n> On Tue, 24 Nov 2020 12:46:57 +0300\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> \n> > The most obvious optimization is not to use exclusive table lock if view \n> > depends just on one table (contains no joins).\n> > Looks like there are no any anomalies in this case, are there?\n> \n> I confirmed the effect of this optimizations.\n> \n> First, when I performed pgbench (SF=100) without any materialized views,\n> the results is :\n> \n> pgbench test4 -T 300 -c 8 -j 4\n> latency average = 6.493 ms\n> tps = 1232.146229 (including connections establishing)\n> \n> Next, created a view as below, I performed the same pgbench.\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm2 AS\n> SELECT bid, count(abalance), sum(abalance), avg(abalance)\n> FROM pgbench_accounts GROUP BY bid;\n> \n> The result is here:\n> \n> [the previous version (v19 with exclusive table lock)]\n> - latency average = 77.677 ms\n> - tps = 102.990159 (including connections establishing)\n> \n> [In the latest version (v20 with weaker lock)]\n> - latency average = 17.576 ms\n> - tps = 455.159644 (including connections establishing)\n> \n> There is still substantial overhead, but we can see that the effect\n> of the optimization.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 22 Dec 2020 22:24:22 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Yugo,\n\n> 1. Creating an index on the matview automatically\n\nNice.\n\n> 2. Use a weaker lock on the matview if possible\n> \n> If the view has only one base table in this query, RowExclusiveLock is\n> held on the view instead of AccessExclusiveLock, because we don't\n> need to wait other concurrent transaction's result in order to\n> maintain the view in this case. When the same row in the view is\n> affected due to concurrent maintenances, a row level lock will\n> protect it.\n> \n> On Tue, 24 Nov 2020 12:46:57 +0300\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> \n>> The most obvious optimization is not to use exclusive table lock if view \n>> depends just on one table (contains no joins).\n>> Looks like there are no any anomalies in this case, are there?\n> \n> I confirmed the effect of this optimizations.\n> \n> First, when I performed pgbench (SF=100) without any materialized views,\n> the results is :\n> \n> pgbench test4 -T 300 -c 8 -j 4\n> latency average = 6.493 ms\n> tps = 1232.146229 (including connections establishing)\n> \n> Next, created a view as below, I performed the same pgbench.\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm2 AS\n> SELECT bid, count(abalance), sum(abalance), avg(abalance)\n> FROM pgbench_accounts GROUP BY bid;\n> \n> The result is here:\n> \n> [the previous version (v19 with exclusive table lock)]\n> - latency average = 77.677 ms\n> - tps = 102.990159 (including connections establishing)\n> \n> [In the latest version (v20 with weaker lock)]\n> - latency average = 17.576 ms\n> - tps = 455.159644 (including connections establishing)\n> \n> There is still substantial overhead, but we can see that the effect\n> of the optimization.\n\nGreat improvement. Now with this patch more than 4x faster than\nprevious one!\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 24 Dec 2020 06:54:17 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is the revised patch (v21) to add support for Incremental\nMaterialized View Maintenance (IVM).\n\nIn this revision, in addition to fixing some typos from the previous\nenhancement, I fixed a check to prevent a view from containing an\nexpression including aggregates, such as sum(x)/sum(y).\n\nRegards,\nYugo Nagata\n\nOn Tue, 22 Dec 2020 22:24:22 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi hackers,\n> \n> I heard the opinion that this patch is too big and hard to review.\n> So, I wander that we should downsize the patch by eliminating some\n> features and leaving other basic features.\n> \n> If there are more opinions this makes it easer for reviewers to look\n> at this patch, I would like do it. If so, we plan to support only\n> selection, projection, inner-join, and some aggregates in the first\n> release and leave sub-queries, outer-join, and CTE supports to the\n> next release.\n> \n> Regards,\n> Yugo Nagata\n> \n> On Tue, 22 Dec 2020 21:51:36 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > Hi,\n> > \n> > Attached is the revised patch (v20) to add support for Incremental\n> > Materialized View Maintenance (IVM).\n> > \n> > In according with Konstantin's suggestion, I made a few optimizations.\n> > \n> > 1. Creating an index on the matview automatically\n> > \n> > When creating incremental maintainable materialized view (IMMV)s,\n> > a unique index on IMMV is created automatically if possible.\n> > \n> > If the view definition query has a GROUP BY clause, the index is created\n> > on the columns of GROUP BY expressions. Otherwise, if the view contains\n> > all primary key attributes of its base tables in the target list, the index\n> > is created on these attributes. Also, if the view has DISTINCT,\n> > a unique index is created on all columns in the target list.\n> > In other cases, no index is created.\n> > \n> > In all cases, a NOTICE message is output to inform users that an index is\n> > created or that an appropriate index is necessary for efficient IVM.\n> > \n> > 2. Use a weaker lock on the matview if possible\n> > \n> > If the view has only one base table in this query, RowExclusiveLock is\n> > held on the view instead of AccessExclusiveLock, because we don't\n> > need to wait other concurrent transaction's result in order to\n> > maintain the view in this case. When the same row in the view is\n> > affected due to concurrent maintenances, a row level lock will\n> > protect it.\n> > \n> > On Tue, 24 Nov 2020 12:46:57 +0300\n> > Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> > \n> > > The most obvious optimization is not to use exclusive table lock if view \n> > > depends just on one table (contains no joins).\n> > > Looks like there are no any anomalies in this case, are there?\n> > \n> > I confirmed the effect of this optimizations.\n> > \n> > First, when I performed pgbench (SF=100) without any materialized views,\n> > the results is :\n> > \n> > pgbench test4 -T 300 -c 8 -j 4\n> > latency average = 6.493 ms\n> > tps = 1232.146229 (including connections establishing)\n> > \n> > Next, created a view as below, I performed the same pgbench.\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm2 AS\n> > SELECT bid, count(abalance), sum(abalance), avg(abalance)\n> > FROM pgbench_accounts GROUP BY bid;\n> > \n> > The result is here:\n> > \n> > [the previous version (v19 with exclusive table lock)]\n> > - latency average = 77.677 ms\n> > - tps = 102.990159 (including connections establishing)\n> > \n> > [In the latest version (v20 with weaker lock)]\n> > - latency average = 17.576 ms\n> > - tps = 455.159644 (including connections establishing)\n> > \n> > There is still substantial overhead, but we can see that the effect\n> > of the optimization.\n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 12 Jan 2021 19:03:08 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is a revised patch (v22) rebased for the latest master head.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 22 Jan 2021 16:59:54 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nAttached is a rebased patch (v22a).\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 16 Feb 2021 10:31:55 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n>\n> Attached is a rebased patch (v22a).\n>\n\nThanks for the patch. Do you think posting the patch along with the latest\ncommit it applies to would be helpful? If so, when others want to review it,\nthey would know which commit to apply the patch against, usually without\nasking for a new rebase.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nOn Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:Hi,\n\nAttached is a rebased patch (v22a).Thanks for the patch. Do you think posting the patch along with the latest commit it applies to would be helpful? If so, when others want to review it, they would know which commit to apply the patch against, usually without asking for a new rebase. -- Best RegardsAndy Fan (https://www.aliyun.com/)",
"msg_date": "Thu, 18 Feb 2021 19:38:44 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 18 Feb 2021 19:38:44 +0800\nAndy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi,\n> >\n> > Attached is a rebased patch (v22a).\n> >\n> \n> Thanks for the patch. Will you think posting a patch with the latest commit\n> at that\n> time is helpful? If so, when others want to review it, they know which\n> commit to\n> apply the patch without asking for a new rebase usually.\n\nI rebased the patch because cfbot failed.\nhttp://cfbot.cputube.org/\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 19 Feb 2021 11:01:26 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\nOn 2/18/21 9:01 PM, Yugo NAGATA wrote:\n> On Thu, 18 Feb 2021 19:38:44 +0800\n> Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>>\n>>> Hi,\n>>>\n>>> Attached is a rebased patch (v22a).\n>>>\n>> Thanks for the patch. Will you think posting a patch with the latest commit\n>> at that\n>> time is helpful? If so, when others want to review it, they know which\n>> commit to\n>> apply the patch without asking for a new rebase usually.\n> I rebased the patch because cfbot failed.\n> http://cfbot.cputube.org/\n>\n\nIt's bitrotted a bit more due to commits bb437f995d and 25936fd46c\n\n\n(A useful feature of the cfbot might be to notify the authors and\nreviewers when it detects bitrot for a previously passing entry.)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 15:42:00 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 8 Mar 2021 15:42:00 -0500\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> \n> On 2/18/21 9:01 PM, Yugo NAGATA wrote:\n> > On Thu, 18 Feb 2021 19:38:44 +0800\n> > Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >> On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >>\n> >>> Hi,\n> >>>\n> >>> Attached is a rebased patch (v22a).\n> >>>\n> >> Thanks for the patch. Will you think posting a patch with the latest commit\n> >> at that\n> >> time is helpful? If so, when others want to review it, they know which\n> >> commit to\n> >> apply the patch without asking for a new rebase usually.\n> > I rebased the patch because cfbot failed.\n> > http://cfbot.cputube.org/\n> >\n> \n> It's bitrotted a bit more dues to commits bb437f995d and 25936fd46c\n\nThank you for letting me know. I'll rebase it soon.\n\n> \n> \n> (A useful feature of the cfbot might be to notify the authors and\n> reviewers when it detects bitrot for a previously passing entry.)\n\n+1\nThe feature of notifying the authors seems nice to me.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 9 Mar 2021 09:20:49 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:22 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> On Mon, 8 Mar 2021 15:42:00 -0500\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n> > (A useful feature of the cfbot might be to notify the authors and\n> > reviewers when it detects bitrot for a previously passing entry.)\n>\n> +1\n> The feature notifying it authors seems to me nice.\n\nNice idea. I was initially afraid of teaching cfbot to send email,\nfor fear of creating an out of control spam machine. Probably the\nmain thing would be the ability to interact with it to turn it on/off.\nIt's probably time to move forward with the plan of pushing the\nresults into a commitfest.postgresql.org API, and then making Magnus\net al write the email spam code with a preferences screen linked to\nyour community account :-D\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:27:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> It's probably time to move forward with the plan of pushing the\r\n> results into a commitfest.postgresql.org API, and then making Magnus\r\n> et al write the email spam code with a preferences screen linked to\r\n> your community account :-D\r\n\r\n+1\r\nI wish to see all the patch status information on the CF app.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 9 Mar 2021 00:34:42 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 9 Mar 2021 09:20:49 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 8 Mar 2021 15:42:00 -0500\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> > \n> > On 2/18/21 9:01 PM, Yugo NAGATA wrote:\n> > > On Thu, 18 Feb 2021 19:38:44 +0800\n> > > Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > >\n> > >> On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >>\n> > >>> Hi,\n> > >>>\n> > >>> Attached is a rebased patch (v22a).\n> > >>>\n> > >> Thanks for the patch. Will you think posting a patch with the latest commit\n> > >> at that\n> > >> time is helpful? If so, when others want to review it, they know which\n> > >> commit to\n> > >> apply the patch without asking for a new rebase usually.\n> > > I rebased the patch because cfbot failed.\n> > > http://cfbot.cputube.org/\n> > >\n> > \n> > It's bitrotted a bit more dues to commits bb437f995d and 25936fd46c\n> \n> Thank you for letting me konw. I'll rebase it soon.\n\nDone. Attached is a rebased patch set.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 9 Mar 2021 17:27:50 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, \n\nI rebased the patch because the cfbot failed.\n\nRegards,\nYugo Nagata\n\nOn Tue, 9 Mar 2021 17:27:50 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Tue, 9 Mar 2021 09:20:49 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > On Mon, 8 Mar 2021 15:42:00 -0500\n> > Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> > > \n> > > On 2/18/21 9:01 PM, Yugo NAGATA wrote:\n> > > > On Thu, 18 Feb 2021 19:38:44 +0800\n> > > > Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > > >\n> > > >> On Tue, Feb 16, 2021 at 9:33 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >>\n> > > >>> Hi,\n> > > >>>\n> > > >>> Attached is a rebased patch (v22a).\n> > > >>>\n> > > >> Thanks for the patch. Will you think posting a patch with the latest commit\n> > > >> at that\n> > > >> time is helpful? If so, when others want to review it, they know which\n> > > >> commit to\n> > > >> apply the patch without asking for a new rebase usually.\n> > > > I rebased the patch because cfbot failed.\n> > > > http://cfbot.cputube.org/\n> > > >\n> > > \n> > > It's bitrotted a bit more dues to commits bb437f995d and 25936fd46c\n> > \n> > Thank you for letting me konw. I'll rebase it soon.\n> \n> Done. Attached is a rebased patch set.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 7 Apr 2021 18:25:37 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "\nOn 4/7/21 5:25 AM, Yugo NAGATA wrote:\n> Hi, \n>\n> I rebased the patch because the cfbot failed.\n>\n> Regards,\n> Yugo Nagata\n\n\n\nThis patch (v22c) just crashed for me with an assertion failure on\nFedora 31. Here's the stack trace:\n\n\n[New LWP 333090]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: andrew regression [local]\nINSERT '.\nProgram terminated with signal SIGABRT, Aborted.\n#0  0x00007f8981caa625 in raise () from /lib64/libc.so.6\n#0  0x00007f8981caa625 in raise () from /lib64/libc.so.6\n#1  0x00007f8981c938d9 in abort () from /lib64/libc.so.6\n#2  0x000000000094a54a in ExceptionalCondition\n(conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\nNULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\nfileName=fileName@entry=0xa91468\n\"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\nlineNumber=lineNumber@entry=199) at\n/home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n#3  0x00000000006c0e17 in standard_ExecutorStart (queryDesc=0x226af98,\neflags=0) at /home/andrew/pgl/pg_head/src/backend/executor/execMain.c:199\n#4  0x00000000006737b2 in refresh_matview_datafill (dest=0x21cf428,\nquery=<optimized out>, queryEnv=0x2245fd0,\nresultTupleDesc=0x7ffd5e764888, queryString=0x0) at\n/home/andrew/pgl/pg_head/src/backend/commands/matview.c:719\n#5  0x0000000000678042 in calc_delta (queryEnv=0x2245fd0,\ntupdesc_new=0x7ffd5e764888, tupdesc_old=0x7ffd5e764880,\ndest_new=0x21cf428, dest_old=0x0, query=0x2246108, rte_path=0x2228a60,\ntable=<optimized out>) at\n/home/andrew/pgl/pg_head/src/backend/commands/matview.c:2907\n#6  IVM_immediate_maintenance (fcinfo=<optimized out>) at\n/home/andrew/pgl/pg_head/src/backend/commands/matview.c:1683\n#7  0x000000000069e483 in ExecCallTriggerFunc (trigdata=0x7ffd5e764bb0,\ntgindx=2, finfo=0x22345f8, instr=0x0, per_tuple_context=0x2245eb0) at\n/home/andrew/pgl/pg_head/src/backend/commands/trigger.c:2142\n#8  0x000000000069fc4c in AfterTriggerExecute (trigdesc=0x2233db8,\ntrigdesc=0x2233db8, trig_tuple_slot2=0x0, trig_tuple_slot1=0x0,\nper_tuple_context=0x2245eb0, instr=0x0, finfo=0x2234598,\nrelInfo=0x2233ba0, event=0x222d380, estate=0x2233710) at\n/home/andrew/pgl/pg_head/src/backend/commands/trigger.c:4041\n#9  afterTriggerInvokeEvents (events=0x21cece8, firing_id=1,\nestate=0x2233710, delete_ok=false) at\n/home/andrew/pgl/pg_head/src/backend/commands/trigger.c:4255\n#10 0x00000000006a4173 in AfterTriggerEndQuery\n(estate=estate@entry=0x2233710) at\n/home/andrew/pgl/pg_head/src/backend/commands/trigger.c:4632\n#11 0x00000000006c04c8 in standard_ExecutorFinish (queryDesc=0x2237300)\nat /home/andrew/pgl/pg_head/src/backend/executor/execMain.c:436\n#12 0x00000000008415d8 in ProcessQuery (plan=<optimized out>,\nsourceText=0x21490a0 \"INSERT INTO mv_base_b VALUES(5,105);\", params=0x0,\nqueryEnv=0x0, dest=0x2221010, qc=0x7ffd5e764f00) at\n/home/andrew/pgl/pg_head/src/backend/tcop/pquery.c:190\n#13 0x00000000008417f2 in PortalRunMulti (portal=portal@entry=0x21ac3c0,\nisTopLevel=isTopLevel@entry=true,\nsetHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x2221010,\naltdest=altdest@entry=0x2221010, qc=qc@entry=0x7ffd5e764f00) at\n/home/andrew/pgl/pg_head/src/backend/tcop/pquery.c:1267\n#14 0x0000000000842415 in PortalRun (portal=portal@entry=0x21ac3c0,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x2221010,\naltdest=altdest@entry=0x2221010, qc=0x7ffd5e764f00) at\n/home/andrew/pgl/pg_head/src/backend/tcop/pquery.c:779\n#15 0x000000000083e3ca in exec_simple_query (query_string=0x21490a0\n\"INSERT INTO mv_base_b VALUES(5,105);\") at\n/home/andrew/pgl/pg_head/src/backend/tcop/postgres.c:1196\n#16 0x0000000000840075 in PostgresMain (argc=argc@entry=1,\nargv=argv@entry=0x7ffd5e765450, dbname=<optimized out>,\nusername=<optimized out>) at\n/home/andrew/pgl/pg_head/src/backend/tcop/postgres.c:4458\n#17 0x00000000007b8054 in BackendRun (port=<optimized out>,\nport=<optimized out>) at\n/home/andrew/pgl/pg_head/src/backend/postmaster/postmaster.c:4488\n#18 BackendStartup (port=<optimized out>) at\n/home/andrew/pgl/pg_head/src/backend/postmaster/postmaster.c:4210\n#19 ServerLoop () at\n/home/andrew/pgl/pg_head/src/backend/postmaster/postmaster.c:1742\n#20 0x00000000007b8ebf in PostmasterMain (argc=argc@entry=8,\nargv=argv@entry=0x21435c0) at\n/home/andrew/pgl/pg_head/src/backend/postmaster/postmaster.c:1414\n#21 0x000000000050e030 in main (argc=8, argv=0x21435c0) at\n/home/andrew/pgl/pg_head/src/backend/main/main.c:209\n$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad =\n{333090, 500, 0 <repeats 26 times>}, _kill = {si_pid = 333090, si_uid =\n500}, _timer = {si_tid = 333090, si_overrun = 500, si_sigval =\n{sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 333090, si_uid = 500,\nsi_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid =\n333090, si_uid = 500, si_status = 0, si_utime = 0, si_stime = 0},\n_sigfault = {si_addr = 0x1f400051522, _addr_lsb = 0, _addr_bnd = {_lower\n= 0x0, _upper = 0x0}}, _sigpoll = {si_band = 2147483981090, si_fd = 0}}}\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:47:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> This patch (v22c) just crashed for me with an assertion failure on\n> Fedora 31. Here's the stack trace:\n\n> #2 0x000000000094a54a in ExceptionalCondition\n> (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> fileName=fileName@entry=0xa91468\n> \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> lineNumber=lineNumber@entry=199) at\n> /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n\nThat assert just got added a few days ago, so that's why the patch\nseemed OK before.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:40:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 19 Apr 2021 17:40:31 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > This patch (v22c) just crashed for me with an assertion failure on\n> > Fedora 31. Here's the stack trace:\n> \n> > #2 0x000000000094a54a in ExceptionalCondition\n> > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > fileName=fileName@entry=0xa91468\n> > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > lineNumber=lineNumber@entry=199) at\n> > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> \n> That assert just got added a few days ago, so that's why the patch\n> seemed OK before.\n\nThank you for letting me know. I'll fix it.\n\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 20 Apr 2021 09:51:34 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
    "msg_contents": "On Tue, 20 Apr 2021 09:51:34 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 19 Apr 2021 17:40:31 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > This patch (v22c) just crashed for me with an assertion failure on\n> > > Fedora 31. Here's the stack trace:\n> > \n> > > #2 0x000000000094a54a in ExceptionalCondition\n> > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > fileName=fileName@entry=0xa91468\n> > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > lineNumber=lineNumber@entry=199) at\n> > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > \n> > That assert just got added a few days ago, so that's why the patch\n> > seemed OK before.\n> \n> Thank you for letting me know. I'll fix it.\n\nAttached is the fixed patch.\n\nqueryDesc->sourceText cannot be NULL after commit 1111b2668d8, \nso now we pass an empty string \"\" for refresh_matview_datafill() instead of NULL\nwhen maintaining views incrementally.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 26 Apr 2021 15:46:21 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 26 Apr 2021 15:46:21 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Tue, 20 Apr 2021 09:51:34 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > On Mon, 19 Apr 2021 17:40:31 -0400\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > > This patch (v22c) just crashed for me with an assertion failure on\n> > > > Fedora 31. Here's the stack trace:\n> > > \n> > > > #2 0x000000000094a54a in ExceptionalCondition\n> > > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > > fileName=fileName@entry=0xa91468\n> > > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > > lineNumber=lineNumber@entry=199) at\n> > > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > > \n> > > That assert just got added a few days ago, so that's why the patch\n> > > seemed OK before.\n> > \n> > Thank you for letting me know. I'll fix it.\n> \n> Attached is the fixed patch.\n> \n> queryDesc->sourceText cannot be NULL after commit 1111b2668d8, \n> so now we pass an empty string \"\" for refresh_matview_datafill() instead NULL\n> when maintaining views incrementally.\n\nI am sorry, I forgot to include a fix for 8aba9322511.\nAttached is the fixed version.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 26 Apr 2021 16:03:48 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, 26 Apr 2021 16:03:48 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 26 Apr 2021 15:46:21 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > On Tue, 20 Apr 2021 09:51:34 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > \n> > > On Mon, 19 Apr 2021 17:40:31 -0400\n> > > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > \n> > > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > > > This patch (v22c) just crashed for me with an assertion failure on\n> > > > > Fedora 31. Here's the stack trace:\n> > > > \n> > > > > #2 0x000000000094a54a in ExceptionalCondition\n> > > > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > > > fileName=fileName@entry=0xa91468\n> > > > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > > > lineNumber=lineNumber@entry=199) at\n> > > > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > > > \n> > > > That assert just got added a few days ago, so that's why the patch\n> > > > seemed OK before.\n> > > \n> > > Thank you for letting me know. I'll fix it.\n> > \n> > Attached is the fixed patch.\n> > \n> > queryDesc->sourceText cannot be NULL after commit 1111b2668d8, \n> > so now we pass an empty string \"\" for refresh_matview_datafill() instead NULL\n> > when maintaining views incrementally.\n> \n> I am sorry, I forgot to include a fix for 8aba9322511.\n> Attached is the fixed version.\n\nAttached is the rebased patch (for 6b8d29419d).\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 7 May 2021 14:14:16 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 7 May 2021 14:14:16 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 26 Apr 2021 16:03:48 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > On Mon, 26 Apr 2021 15:46:21 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > \n> > > On Tue, 20 Apr 2021 09:51:34 +0900\n> > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > \n> > > > On Mon, 19 Apr 2021 17:40:31 -0400\n> > > > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > \n> > > > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > > > > This patch (v22c) just crashed for me with an assertion failure on\n> > > > > > Fedora 31. Here's the stack trace:\n> > > > > \n> > > > > > #2 0x000000000094a54a in ExceptionalCondition\n> > > > > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > > > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > > > > fileName=fileName@entry=0xa91468\n> > > > > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > > > > lineNumber=lineNumber@entry=199) at\n> > > > > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > > > > \n> > > > > That assert just got added a few days ago, so that's why the patch\n> > > > > seemed OK before.\n> > > > \n> > > > Thank you for letting me know. I'll fix it.\n> > > \n> > > Attached is the fixed patch.\n> > > \n> > > queryDesc->sourceText cannot be NULL after commit 1111b2668d8, \n> > > so now we pass an empty string \"\" for refresh_matview_datafill() instead NULL\n> > > when maintaining views incrementally.\n> > \n> > I am sorry, I forgot to include a fix for 8aba9322511.\n> > Attached is the fixed version.\n> \n> Attached is the rebased patch (for 6b8d29419d).\n\nI attached a rebased patch.\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 17 May 2021 13:36:46 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Mon, May 17, 2021 at 10:08 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Fri, 7 May 2021 14:14:16 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> > On Mon, 26 Apr 2021 16:03:48 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > > On Mon, 26 Apr 2021 15:46:21 +0900\n> > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > > On Tue, 20 Apr 2021 09:51:34 +0900\n> > > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > > On Mon, 19 Apr 2021 17:40:31 -0400\n> > > > > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > >\n> > > > > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > > > > > This patch (v22c) just crashed for me with an assertion failure on\n> > > > > > > Fedora 31. Here's the stack trace:\n> > > > > >\n> > > > > > > #2 0x000000000094a54a in ExceptionalCondition\n> > > > > > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > > > > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > > > > > fileName=fileName@entry=0xa91468\n> > > > > > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > > > > > lineNumber=lineNumber@entry=199) at\n> > > > > > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > > > > >\n> > > > > > That assert just got added a few days ago, so that's why the patch\n> > > > > > seemed OK before.\n> > > > >\n> > > > > Thank you for letting me know. I'll fix it.\n> > > >\n> > > > Attached is the fixed patch.\n> > > >\n> > > > queryDesc->sourceText cannot be NULL after commit 1111b2668d8,\n> > > > so now we pass an empty string \"\" for refresh_matview_datafill() instead NULL\n> > > > when maintaining views incrementally.\n> > >\n> > > I am sorry, I forgot to include a fix for 8aba9322511.\n> > > Attached is the fixed version.\n> >\n> > Attached is the rebased patch (for 6b8d29419d).\n>\n> I attached a rebased patch.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. 
I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 21:22:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 14 Jul 2021 21:22:37 +0530\nvignesh C <vignesh21@gmail.com> wrote:\n\n> On Mon, May 17, 2021 at 10:08 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Fri, 7 May 2021 14:14:16 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > > On Mon, 26 Apr 2021 16:03:48 +0900\n> > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > > On Mon, 26 Apr 2021 15:46:21 +0900\n> > > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > > On Tue, 20 Apr 2021 09:51:34 +0900\n> > > > > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > > On Mon, 19 Apr 2021 17:40:31 -0400\n> > > > > > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > > >\n> > > > > > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > > > > > > This patch (v22c) just crashed for me with an assertion failure on\n> > > > > > > > Fedora 31. Here's the stack trace:\n> > > > > > >\n> > > > > > > > #2 0x000000000094a54a in ExceptionalCondition\n> > > > > > > > (conditionName=conditionName@entry=0xa91dae \"queryDesc->sourceText !=\n> > > > > > > > NULL\", errorType=errorType@entry=0x99b468 \"FailedAssertion\",\n> > > > > > > > fileName=fileName@entry=0xa91468\n> > > > > > > > \"/home/andrew/pgl/pg_head/src/backend/executor/execMain.c\",\n> > > > > > > > lineNumber=lineNumber@entry=199) at\n> > > > > > > > /home/andrew/pgl/pg_head/src/backend/utils/error/assert.c:69\n> > > > > > >\n> > > > > > > That assert just got added a few days ago, so that's why the patch\n> > > > > > > seemed OK before.\n> > > > > >\n> > > > > > Thank you for letting me know. 
I'll fix it.\n> > > > >\n> > > > > Attached is the fixed patch.\n> > > > >\n> > > > > queryDesc->sourceText cannot be NULL after commit 1111b2668d8,\n> > > > > so now we pass an empty string \"\" for refresh_matview_datafill() instead NULL\n> > > > > when maintaining views incrementally.\n> > > >\n> > > > I am sorry, I forgot to include a fix for 8aba9322511.\n> > > > Attached is the fixed version.\n> > >\n> > > Attached is the rebased patch (for 6b8d29419d).\n> >\n> > I attached a rebased patch.\n> \n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nOk. I'll update the patch in a few days.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 19 Jul 2021 09:24:30 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
    "msg_contents": "Hi hackers,\n\nOn Mon, 19 Jul 2021 09:24:30 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Wed, 14 Jul 2021 21:22:37 +0530\n> vignesh C <vignesh21@gmail.com> wrote:\n\n> > The patch does not apply on Head anymore, could you rebase and post a\n> > patch. I'm changing the status to \"Waiting for Author\".\n> \n> Ok. I'll update the patch in a few days.\n\nAttached is the latest patch set to add support for Incremental\nMaterialized View Maintenance (IVM).\n\nThe patches are rebased to the master and also revised with some\ncode cleaning. \n\nIVM is a way to make materialized views up-to-date in which only\nincremental changes are computed and applied on views rather than\nrecomputing the contents from scratch as REFRESH MATERIALIZED VIEW\ndoes. IVM can update materialized views more efficiently\nthan recomputation when only a small part of the view needs updates.\n\nThe patch set implements a feature so that materialized views could be\nupdated automatically and immediately when a base table is modified.\n\nCurrently, our IVM implementation supports views which could contain\ntuple duplicates whose definition includes:\n\n - inner and outer joins including self-join\n - DISTINCT\n - some built-in aggregate functions (count, sum, avg, min, and max)\n - a part of subqueries\n -- simple subqueries in FROM clause\n -- EXISTS subqueries in WHERE clause\n - CTEs\n\nWe hope the IVM feature would be adopted into pg15. However, the size of the\npatch set has grown too large through supporting the above features. Therefore, \nI think it is better to consider only a part of these features for the first\nrelease. Especially, I would like to propose the following features for pg15.\n\n - inner joins including self-join\n - DISTINCT and views with tuple duplicates\n - some built-in aggregate functions (count, sum, avg, min, and max)\n\nBy omitting outer-join, sub-query, and CTE features, the patch size becomes\nless than half. 
I hope this will make it a bit easier to review the IVM patch set.\n\nHere is a list of separated patches.\n\n- 0001: Add a new syntax:\n CREATE INCREMENTAL MATERIALIZED VIEW\n- 0002: Add a new column relisivm to pg_class\n- 0003: Add new deptype option 'm' in pg_depend\n- 0004: Change trigger.c to allow prolonging the life span of tuplestores\n containing Transition Tables generated via AFTER trigger\n- 0005: Add IVM support for pg_dump\n- 0006: Add IVM support for psql\n- 0007: Add the basic IVM feature:\n This supports inner joins, DISTINCT, and tuple duplicates.\n- 0008: Add aggregates (count, sum, avg, min, max) support for IVM\n- 0009: Add regression tests for IVM\n- 0010: Add documentation for IVM\n\nWe could split the patch further if this would make reviews much easier. \nFor example, I think 0007 could be split into the more basic part and the part\nfor handling tuple duplicates. Moreover, 0008 could be split into \"min/max\"\nand other aggregates because handling min/max is a bit more complicated than\nothers.\n\nI also attached IVM_extra.tar.gz that contains patches for sub-query,\nouter-join, and CTE support, just for your information. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 2 Aug 2021 15:28:34 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Sun, Aug 1, 2021 at 11:30 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi hackers,\n>\n> On Mon, 19 Jul 2021 09:24:30 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> > On Wed, 14 Jul 2021 21:22:37 +0530\n> > vignesh C <vignesh21@gmail.com> wrote:\n>\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > Ok. I'll update the patch in a few days.\n>\n> Attached is the latest patch set to add support for Incremental\n> Materialized View Maintenance (IVM)\n>\n> The patches are rebased to the master and also revised with some\n> code cleaning.\n>\n> IVM is a way to make materialized views up-to-date in which only\n> incremental changes are computed and applied on views rather than\n> recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n> does. IVM can update materialized views more efficiently\n> than recomputation when only small part of the view need updates.\n>\n> The patch set implements a feature so that materialized views could be\n> updated automatically and immediately when a base table is modified.\n>\n> Currently, our IVM implementation supports views which could contain\n> tuple duplicates whose definition includes:\n>\n> - inner and outer joins including self-join\n> - DISTINCT\n> - some built-in aggregate functions (count, sum, agv, min, and max)\n> - a part of subqueries\n> -- simple subqueries in FROM clause\n> -- EXISTS subqueries in WHERE clause\n> - CTEs\n>\n> We hope the IVM feature would be adopted into pg15. However, the size of\n> patch set has grown too large through supporting above features.\n> Therefore,\n> I think it is better to consider only a part of these features for the\n> first\n> release. 
Especially, I would like propose the following features for pg15.\n>\n> - inner joins including self-join\n> - DISTINCT and views with tuple duplicates\n> - some built-in aggregate functions (count, sum, agv, min, and max)\n>\n> By omitting outer-join, sub-queries, and CTE features, the patch size\n> becomes\n> less than half. I hope this will make a bit easer to review the IVM patch\n> set.\n>\n> Here is a list of separated patches.\n>\n> - 0001: Add a new syntax:\n> CREATE INCREMENTAL MATERIALIZED VIEW\n> - 0002: Add a new column relisivm to pg_class\n> - 0003: Add new deptype option 'm' in pg_depend\n> - 0004: Change trigger.c to allow to prolong life span of tupestores\n> containing Transition Tables generated via AFTER trigger\n> - 0005: Add IVM supprot for pg_dump\n> - 0006: Add IVM support for psql\n> - 0007: Add the basic IVM future:\n> This supports inner joins, DISTINCT, and tuple duplicates.\n> - 0008: Add aggregates (count, sum, avg, min, max) support for IVM\n> - 0009: Add regression tests for IVM\n> - 0010: Add documentation for IVM\n>\n> We could split the patch furthermore if this would make reviews much\n> easer.\n> For example, I think 0007 could be split into the more basic part and the\n> part\n> for handling tuple duplicates. 
Moreover, 0008 could be split into \"min/max\"\n> and other aggregates because handling min/max is a bit more complicated\n> than\n> others.\n>\n> I also attached IVM_extra.tar.gz that contains patches for sub-quereis,\n> outer-join, CTE support, just for your information.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\nHi,\nFor v23-0008-Add-aggregates-support-in-IVM.patch :\n\nAs a restriction, expressions specified in GROUP BY must appear in\nthe target list because tuples to be updated in IMMV are identified\nby using this group keys.\n\nIMMV -> IMVM (Incremental Materialized View Maintenance, as said above)\nOr maybe it means 'incrementally maintainable materialized view'. It would\nbe better to use the same abbreviation.\n\nthis group keys -> this group key\n\n+ errmsg(\"GROUP BY expression not appeared in select\nlist is not supported on incrementally maintainable materialized view\")));\n\nexpression not appeared in select list -> expression not appearing in\nselect list\n\n+ * For aggregate functions except to count\n\nexcept to count -> except count\n\nCheers",
"msg_date": "Mon, 2 Aug 2021 14:33:46 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Nagata-san,\n\n\nI am interested in this patch since it is good feature.\n\nI run some simple tests.\nI found the following problems.\n\n\n(1) \nFailed to \"make world\".\nI think there are extra \"<lineitem>\" in doc/src/sgml/ref/create_materialized_view.sgml\n(line 110 and 117)\n\n\n(2)\nIn the case of partition, it seems that IVM does not work well.\nI run as follows.\n\npostgres=# create table parent (c int) partition by range (c);\nCREATE TABLE\npostgres=# create table child partition of parent for values from (1) to (100);\nCREATE TABLE\npostgres=# create incremental materialized view ivm_parent as select c from parent;\nNOTICE: could not create an index on materialized view \"ivm_parent\" automatically\nHINT: Create an index on the materialized view for efficient incremental maintenance.\nSELECT 0\npostgres=# create incremental materialized view ivm_child as select c from child;\nNOTICE: could not create an index on materialized view \"ivm_child\" automatically\nHINT: Create an index on the materialized view for efficient incremental maintenance.\nSELECT 0\npostgres=# insert into parent values (1);\nINSERT 0 1\npostgres=# insert into child values (2);\nINSERT 0 1\npostgres=# select * from parent;\n c\n---\n 1\n 2\n(2 rows)\n\npostgres=# select * from child;\n c\n---\n 1\n 2\n(2 rows)\n\npostgres=# select * from ivm_parent;\n c\n---\n 1\n(1 row)\n\npostgres=# select * from ivm_child;\n c\n---\n 2\n(1 row)\n\n\nI think ivm_parent and ivm_child should return 2 rows.\n\n\n(3)\nI think IVM does not support foreign table, but try to make IVM.\n\npostgres=# create incremental materialized view ivm_foreign as select c from foreign_table;\nNOTICE: could not create an index on materialized view \"ivm_foreign\" automatically\nHINT: Create an index on the materialized view for efficient incremental maintenance.\nERROR: \"foreign_table\" is a foreign table\nDETAIL: Triggers on foreign tables cannot have transition tables.\n\nIt finally failed to make 
IVM, but I think it should be checked earlier.\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Tue, 3 Aug 2021 10:15:42 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Zhihong Yu,\n\nOn Mon, 2 Aug 2021 14:33:46 -0700\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> On Sun, Aug 1, 2021 at 11:30 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi hackers,\n> >\n> > On Mon, 19 Jul 2021 09:24:30 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > > On Wed, 14 Jul 2021 21:22:37 +0530\n> > > vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > > The patch does not apply on Head anymore, could you rebase and post a\n> > > > patch. I'm changing the status to \"Waiting for Author\".\n> > >\n> > > Ok. I'll update the patch in a few days.\n> >\n> > Attached is the latest patch set to add support for Incremental\n> > Materialized View Maintenance (IVM)\n> >\n> > The patches are rebased to the master and also revised with some\n> > code cleaning.\n> >\n> > IVM is a way to make materialized views up-to-date in which only\n> > incremental changes are computed and applied on views rather than\n> > recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n> > does. IVM can update materialized views more efficiently\n> > than recomputation when only small part of the view need updates.\n> >\n> > The patch set implements a feature so that materialized views could be\n> > updated automatically and immediately when a base table is modified.\n> >\n> > Currently, our IVM implementation supports views which could contain\n> > tuple duplicates whose definition includes:\n> >\n> > - inner and outer joins including self-join\n> > - DISTINCT\n> > - some built-in aggregate functions (count, sum, agv, min, and max)\n> > - a part of subqueries\n> > -- simple subqueries in FROM clause\n> > -- EXISTS subqueries in WHERE clause\n> > - CTEs\n> >\n> > We hope the IVM feature would be adopted into pg15. However, the size of\n> > patch set has grown too large through supporting above features.\n> > Therefore,\n> > I think it is better to consider only a part of these features for the\n> > first\n> > release. 
Especially, I would like propose the following features for pg15.\n> >\n> > - inner joins including self-join\n> > - DISTINCT and views with tuple duplicates\n> > - some built-in aggregate functions (count, sum, agv, min, and max)\n> >\n> > By omitting outer-join, sub-queries, and CTE features, the patch size\n> > becomes\n> > less than half. I hope this will make a bit easer to review the IVM patch\n> > set.\n> >\n> > Here is a list of separated patches.\n> >\n> > - 0001: Add a new syntax:\n> > CREATE INCREMENTAL MATERIALIZED VIEW\n> > - 0002: Add a new column relisivm to pg_class\n> > - 0003: Add new deptype option 'm' in pg_depend\n> > - 0004: Change trigger.c to allow to prolong life span of tupestores\n> > containing Transition Tables generated via AFTER trigger\n> > - 0005: Add IVM supprot for pg_dump\n> > - 0006: Add IVM support for psql\n> > - 0007: Add the basic IVM future:\n> > This supports inner joins, DISTINCT, and tuple duplicates.\n> > - 0008: Add aggregates (count, sum, avg, min, max) support for IVM\n> > - 0009: Add regression tests for IVM\n> > - 0010: Add documentation for IVM\n> >\n> > We could split the patch furthermore if this would make reviews much\n> > easer.\n> > For example, I think 0007 could be split into the more basic part and the\n> > part\n> > for handling tuple duplicates. 
Moreover, 0008 could be split into \"min/max\"\n> > and other aggregates because handling min/max is a bit more complicated\n> > than\n> > others.\n> >\n> > I also attached IVM_extra.tar.gz that contains patches for sub-quereis,\n> > outer-join, CTE support, just for your information.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> >\n> >\n> > --\n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> >\n> Hi,\n> For v23-0008-Add-aggregates-support-in-IVM.patch :\n\nThank you for looking into this!\n\n> As a restriction, expressions specified in GROUP BY must appear in\n> the target list because tuples to be updated in IMMV are identified\n> by using this group keys.\n> \n> IMMV -> IMVM (Incremental Materialized View Maintenance, as said above)\n> Or maybe it means 'incrementally maintainable materialized view'. It would\n> be better to use the same abbreviation.\n\nIMMV is correct in the commit message of this patch. Rather, IMVM used\nin v23-0003-Add-new-deptype-option-m-in-pg_depend-system-cat.patch \nshould be corrected to IMMV. \n\n> this group keys -> this group key\n> \n> + errmsg(\"GROUP BY expression not appeared in select\n> list is not supported on incrementally maintainable materialized view\")));\n> \n> expression not appeared in select list -> expression not appearing in\n> select list\n> \n> + * For aggregate functions except to count\n> \n> except to count -> except count\n\nThank you for pointing out them. I'll fix.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 5 Aug 2021 12:29:59 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
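The remark above, that min/max handling is more complicated than the other aggregates, can be pictured outside PostgreSQL. The toy Python model below is our own illustration (not the patch's code or data structures): count, sum, and avg can be maintained from the insert/delete deltas alone, whereas deleting the current minimum of a group forces a re-scan of the remaining base rows.

```python
from collections import defaultdict

class GroupAgg:
    """Toy per-group aggregates over (group, value) rows."""
    def __init__(self):
        self.rows = defaultdict(list)    # base values kept per group
        self.count = defaultdict(int)    # maintained incrementally
        self.sum = defaultdict(int)      # maintained incrementally

    def insert(self, g, v):
        self.rows[g].append(v)
        self.count[g] += 1               # the delta alone is enough
        self.sum[g] += v

    def delete(self, g, v):
        self.rows[g].remove(v)
        self.count[g] -= 1               # still delta-only on deletion
        self.sum[g] -= v

    def avg(self, g):
        return self.sum[g] / self.count[g]

    def min(self, g):
        # min is not self-maintainable under deletion: if the stored
        # minimum is removed, the group's rows must be consulted again
        return min(self.rows[g]) if self.rows[g] else None

agg = GroupAgg()
agg.insert("g", 5)
agg.insert("g", 2)
agg.delete("g", 2)        # the current minimum disappears...
assert agg.min("g") == 5  # ...so it has to be recomputed from the rows
```

This is why splitting 0008 into a "min/max" part and an "other aggregates" part is plausible: the latter never needs to look back at base data, while the former sometimes does.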
{
"msg_contents": "Hello Takahashi-san,\n\nOn Tue, 3 Aug 2021 10:15:42 +0000\n\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n\n> Hi Nagata-san,\n> \n> \n> I am interested in this patch since it is good feature.\n> \n> I run some simple tests.\n> I found the following problems.\n\nThank you for your interest for this patch!\n\n> (1) \n> Failed to \"make world\".\n> I think there are extra \"<lineitem>\" in doc/src/sgml/ref/create_materialized_view.sgml\n> (line 110 and 117)\n\nOops. I'll fix it. \n\n> (2)\n> In the case of partition, it seems that IVM does not work well.\n> I run as follows.\n> \n> postgres=# create table parent (c int) partition by range (c);\n> CREATE TABLE\n> postgres=# create table child partition of parent for values from (1) to (100);\n> CREATE TABLE\n> postgres=# create incremental materialized view ivm_parent as select c from parent;\n> NOTICE: could not create an index on materialized view \"ivm_parent\" automatically\n> HINT: Create an index on the materialized view for efficient incremental maintenance.\n> SELECT 0\n> postgres=# create incremental materialized view ivm_child as select c from child;\n> NOTICE: could not create an index on materialized view \"ivm_child\" automatically\n> HINT: Create an index on the materialized view for efficient incremental maintenance.\n> SELECT 0\n> postgres=# insert into parent values (1);\n> INSERT 0 1\n> postgres=# insert into child values (2);\n> INSERT 0 1\n> postgres=# select * from parent;\n> c\n> ---\n> 1\n> 2\n> (2 rows)\n> \n> postgres=# select * from child;\n> c\n> ---\n> 1\n> 2\n> (2 rows)\n> \n> postgres=# select * from ivm_parent;\n> c\n> ---\n> 1\n> (1 row)\n> \n> postgres=# select * from ivm_child;\n> c\n> ---\n> 2\n> (1 row)\n> \n> \n> I think ivm_parent and ivm_child should return 2 rows.\n\nGood point!\nI'll investigate this more, but we may have to prohibit views on partitioned\ntable and partitions.\n\n> (3)\n> I think IVM does not support foreign table, but try to 
make IVM.\n> \n> postgres=# create incremental materialized view ivm_foreign as select c from foreign_table;\n> NOTICE: could not create an index on materialized view \"ivm_foreign\" automatically\n> HINT: Create an index on the materialized view for efficient incremental maintenance.\n> ERROR: \"foreign_table\" is a foreign table\n> DETAIL: Triggers on foreign tables cannot have transition tables.\n> \n> It finally failed to make IVM, but I think it should be checked more early.\n\nYou are right. We don't support foreign tables as long as we use triggers.\n\n I'll fix.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 5 Aug 2021 12:41:09 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
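The partition behaviour reported above (ivm_parent and ivm_child each ending up with a single row) can be sketched with a toy model. This is our own simplification, not PostgreSQL's trigger machinery, and it elides tuple routing entirely: the point is only that maintenance triggers exist on the one table named in the view definition, so each view observes only the DML aimed at that table.

```python
class Table:
    """Toy table: triggers fire only on the table named in the DML."""
    def __init__(self, name):
        self.name, self.rows, self.triggers = name, [], []

    def insert(self, row):
        self.rows.append(row)
        for fire in self.triggers:   # only this table's triggers fire
            fire(row)

parent, child = Table("parent"), Table("child")
ivm_parent, ivm_child = [], []       # stand-ins for the two matviews
parent.triggers.append(ivm_parent.append)
child.triggers.append(ivm_child.append)

parent.insert(1)   # INSERT aimed at the parent
child.insert(2)    # INSERT aimed directly at the partition

assert ivm_parent == [1] and ivm_child == [2]   # each view saw only "its" DML
```

Under this (assumed) model, fixing the anomaly would mean creating triggers recursively across the hierarchy, which is exactly where the tuple-format conversion problem mentioned later in the thread comes in.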
{
"msg_contents": "Hi Nagata-san,\n\n\nThank you for your reply.\n\n> I'll investigate this more, but we may have to prohibit views on partitioned\n> table and partitions.\n\nI think this restriction is strict.\nThis feature is useful when the base table is large and partitioning is also useful in such case.\n\n\nI have several additional comments on the patch.\n\n\n(1)\nThe following features are added to transition table.\n- Prolong lifespan of transition table\n- If table has row security policies, set them to the transition table\n- Calculate pre-state of the table\n\nAre these features only for IVM?\nIf there are other useful case, they should be separated from IVM patch and\nshould be independent patch for transition table.\n\n\n(2)\nDEPENDENCY_IMMV (m) is added to deptype of pg_depend.\nWhat is the difference compared with existing deptype such as DEPENDENCY_INTERNAL (i)?\n\n\n(3)\nConverting from normal materialized view to IVM or from IVM to normal materialized view is not implemented yet.\nIs it difficult?\n\nI think create/drop triggers and __ivm_ columns can achieve this feature.\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Thu, 5 Aug 2021 08:53:47 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Sun, Aug 1, 2021 at 11:30 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi hackers,\n>\n> On Mon, 19 Jul 2021 09:24:30 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> > On Wed, 14 Jul 2021 21:22:37 +0530\n> > vignesh C <vignesh21@gmail.com> wrote:\n>\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > Ok. I'll update the patch in a few days.\n>\n> Attached is the latest patch set to add support for Incremental\n> Materialized View Maintenance (IVM)\n>\n> The patches are rebased to the master and also revised with some\n> code cleaning.\n>\n> IVM is a way to make materialized views up-to-date in which only\n> incremental changes are computed and applied on views rather than\n> recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n> does. IVM can update materialized views more efficiently\n> than recomputation when only small part of the view need updates.\n>\n> The patch set implements a feature so that materialized views could be\n> updated automatically and immediately when a base table is modified.\n>\n> Currently, our IVM implementation supports views which could contain\n> tuple duplicates whose definition includes:\n>\n> - inner and outer joins including self-join\n> - DISTINCT\n> - some built-in aggregate functions (count, sum, agv, min, and max)\n> - a part of subqueries\n> -- simple subqueries in FROM clause\n> -- EXISTS subqueries in WHERE clause\n> - CTEs\n>\n> We hope the IVM feature would be adopted into pg15. However, the size of\n> patch set has grown too large through supporting above features.\n> Therefore,\n> I think it is better to consider only a part of these features for the\n> first\n> release. 
Especially, I would like propose the following features for pg15.\n>\n> - inner joins including self-join\n> - DISTINCT and views with tuple duplicates\n> - some built-in aggregate functions (count, sum, agv, min, and max)\n>\n> By omitting outer-join, sub-queries, and CTE features, the patch size\n> becomes\n> less than half. I hope this will make a bit easer to review the IVM patch\n> set.\n>\n> Here is a list of separated patches.\n>\n> - 0001: Add a new syntax:\n> CREATE INCREMENTAL MATERIALIZED VIEW\n> - 0002: Add a new column relisivm to pg_class\n> - 0003: Add new deptype option 'm' in pg_depend\n> - 0004: Change trigger.c to allow to prolong life span of tupestores\n> containing Transition Tables generated via AFTER trigger\n> - 0005: Add IVM supprot for pg_dump\n> - 0006: Add IVM support for psql\n> - 0007: Add the basic IVM future:\n> This supports inner joins, DISTINCT, and tuple duplicates.\n> - 0008: Add aggregates (count, sum, avg, min, max) support for IVM\n> - 0009: Add regression tests for IVM\n> - 0010: Add documentation for IVM\n>\n> We could split the patch furthermore if this would make reviews much\n> easer.\n> For example, I think 0007 could be split into the more basic part and the\n> part\n> for handling tuple duplicates. Moreover, 0008 could be split into \"min/max\"\n> and other aggregates because handling min/max is a bit more complicated\n> than\n> others.\n>\n> I also attached IVM_extra.tar.gz that contains patches for sub-quereis,\n> outer-join, CTE support, just for your information.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n\nHi,\nFor v23-0007-Add-Incremental-View-Maintenance-support.patch :\n\nbq. In this implementation, AFTER triggers are used to collecting\ntuplestores\n\n'to collecting' -> to collect\n\nbq. are contained in a old transition table.\n\n'a old' -> an old\n\nbq. 
updates of more than one base tables\n\none base tables -> one base table\n\nbq. DISTINCT and tuple duplicates in views are supported\n\nSince distinct and duplicate have opposite meanings, it would be better to\nrephrase the above sentence.\n\nbq. The value in__ivm_count__ is updated\n\nI searched the patch for in__ivm_count__ - there was no (second) match. I\nthink there should be a space between in and underscore.\n\n+static void CreateIvmTriggersOnBaseTables_recurse(Query *qry, Node *node,\nOid matviewOid, Relids *relids, bool ex_lock);\n\nnit: long line. please wrap.\n\n+ if (rewritten->distinctClause)\n+ rewritten->groupClause = transformDistinctClause(NULL,\n&rewritten->targetList, rewritten->sortClause, false);\n+\n+ /* Add count(*) for counting distinct tuples in views */\n+ if (rewritten->distinctClause)\n\nIt seems the body of the two if statements can be combined into one.\n\nMore to follow for this patch.\n\nCheers",
"msg_date": "Sat, 7 Aug 2021 00:00:44 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
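The hidden `__ivm_count__` column discussed in the review above can be illustrated with a small multiset sketch. This is our own Python illustration of the counting idea (the names and exact semantics are assumptions, not the patch's implementation): each view row carries a multiplicity, deltas from the old/new transition tables adjust it, and a row leaves the view only when its count reaches zero, which is also how duplicate tuples and DISTINCT coexist.

```python
from collections import Counter

def apply_delta(view, delta_old, delta_new):
    """Apply base-table changes to a view kept as {row: multiplicity}."""
    view.subtract(Counter(delta_old))    # rows deleted from the base table
    view.update(Counter(delta_new))      # rows inserted into the base table
    for row in [r for r, n in view.items() if n <= 0]:
        del view[row]                    # count hit zero: drop from the view
    return view

view = Counter({("x",): 2, ("y",): 1})   # duplicates survive as counts
apply_delta(view, delta_old=[("x",)], delta_new=[("z",)])
assert view[("x",)] == 1 and view[("z",)] == 1
distinct_rows = set(view)                # DISTINCT = rows whose count > 0
```

Only the deltas are touched, never the full base tables, which is the efficiency argument the quoted overview makes against plain REFRESH.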
{
"msg_contents": "On Sat, Aug 7, 2021 at 12:00 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Aug 1, 2021 at 11:30 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n>> Hi hackers,\n>>\n>> On Mon, 19 Jul 2021 09:24:30 +0900\n>> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>>\n>> > On Wed, 14 Jul 2021 21:22:37 +0530\n>> > vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> > > The patch does not apply on Head anymore, could you rebase and post a\n>> > > patch. I'm changing the status to \"Waiting for Author\".\n>> >\n>> > Ok. I'll update the patch in a few days.\n>>\n>> Attached is the latest patch set to add support for Incremental\n>> Materialized View Maintenance (IVM)\n>>\n>> The patches are rebased to the master and also revised with some\n>> code cleaning.\n>>\n>> IVM is a way to make materialized views up-to-date in which only\n>> incremental changes are computed and applied on views rather than\n>> recomputing the contents from scratch as REFRESH MATERIALIZED VIEW\n>> does. IVM can update materialized views more efficiently\n>> than recomputation when only small part of the view need updates.\n>>\n>> The patch set implements a feature so that materialized views could be\n>> updated automatically and immediately when a base table is modified.\n>>\n>> Currently, our IVM implementation supports views which could contain\n>> tuple duplicates whose definition includes:\n>>\n>> - inner and outer joins including self-join\n>> - DISTINCT\n>> - some built-in aggregate functions (count, sum, agv, min, and max)\n>> - a part of subqueries\n>> -- simple subqueries in FROM clause\n>> -- EXISTS subqueries in WHERE clause\n>> - CTEs\n>>\n>> We hope the IVM feature would be adopted into pg15. However, the size of\n>> patch set has grown too large through supporting above features.\n>> Therefore,\n>> I think it is better to consider only a part of these features for the\n>> first\n>> release. 
Especially, I would like propose the following features for pg15.\n>>\n>> - inner joins including self-join\n>> - DISTINCT and views with tuple duplicates\n>> - some built-in aggregate functions (count, sum, agv, min, and max)\n>>\n>> By omitting outer-join, sub-queries, and CTE features, the patch size\n>> becomes\n>> less than half. I hope this will make a bit easer to review the IVM patch\n>> set.\n>>\n>> Here is a list of separated patches.\n>>\n>> - 0001: Add a new syntax:\n>> CREATE INCREMENTAL MATERIALIZED VIEW\n>> - 0002: Add a new column relisivm to pg_class\n>> - 0003: Add new deptype option 'm' in pg_depend\n>> - 0004: Change trigger.c to allow to prolong life span of tupestores\n>> containing Transition Tables generated via AFTER trigger\n>> - 0005: Add IVM supprot for pg_dump\n>> - 0006: Add IVM support for psql\n>> - 0007: Add the basic IVM future:\n>> This supports inner joins, DISTINCT, and tuple duplicates.\n>> - 0008: Add aggregates (count, sum, avg, min, max) support for IVM\n>> - 0009: Add regression tests for IVM\n>> - 0010: Add documentation for IVM\n>>\n>> We could split the patch furthermore if this would make reviews much\n>> easer.\n>> For example, I think 0007 could be split into the more basic part and the\n>> part\n>> for handling tuple duplicates. Moreover, 0008 could be split into\n>> \"min/max\"\n>> and other aggregates because handling min/max is a bit more complicated\n>> than\n>> others.\n>>\n>> I also attached IVM_extra.tar.gz that contains patches for sub-quereis,\n>> outer-join, CTE support, just for your information.\n>>\n>> Regards,\n>> Yugo Nagata\n>>\n>> --\n>> Yugo NAGATA <nagata@sraoss.co.jp>\n>>\n>>\n>> --\n>> Yugo NAGATA <nagata@sraoss.co.jp>\n>>\n>\n> Hi,\n> For v23-0007-Add-Incremental-View-Maintenance-support.patch :\n>\n> bq. In this implementation, AFTER triggers are used to collecting\n> tuplestores\n>\n> 'to collecting' -> to collect\n>\n> bq. 
are contained in a old transition table.\n>\n> 'a old' -> an old\n>\n> bq. updates of more than one base tables\n>\n> one base tables -> one base table\n>\n> bq. DISTINCT and tuple duplicates in views are supported\n>\n> Since distinct and duplicate have opposite meanings, it would be better to\n> rephrase the above sentence.\n>\n> bq. The value in__ivm_count__ is updated\n>\n> I searched the patch for in__ivm_count__ - there was no (second) match. I\n> think there should be a space between in and underscore.\n>\n> +static void CreateIvmTriggersOnBaseTables_recurse(Query *qry, Node *node,\n> Oid matviewOid, Relids *relids, bool ex_lock);\n>\n> nit: long line. please wrap.\n>\n> + if (rewritten->distinctClause)\n> + rewritten->groupClause = transformDistinctClause(NULL,\n> &rewritten->targetList, rewritten->sortClause, false);\n> +\n> + /* Add count(*) for counting distinct tuples in views */\n> + if (rewritten->distinctClause)\n>\n> It seems the body of the two if statements can be combined into one.\n>\n> More to follow for this patch.\n>\n> Cheers\n>\nHi,\n\n+ CreateIvmTriggersOnBaseTables_recurse(qry, (Node *)qry, matviewOid,\n&relids, ex_lock);\n\nLooking at existing recursive functions, e.g.\n\nsrc/backend/executor/execPartition.c:find_matching_subplans_recurse(PartitionPruningData\n*prunedata,\n\nthe letters in the function name are all lower case. 
I think following the\nconvention would be nice.\n\n+ if (rte->rtekind == RTE_RELATION)\n+ {\n+ if (!bms_is_member(rte->relid, *relids))\n\nThe conditions for the two if statements can be combined (saving some\nindentation).\n\n+ check_stack_depth();\n+\n+ if (node == NULL)\n+ return false;\n\nIt seems the node check can be placed ahead of the stack depth check.\n\n+ * CreateindexOnIMMV\n\nCreateindexOnIMMV -> CreateIndexOnIMMV\n\n+ (errmsg(\"could not create an index on materialized view\n\\\"%s\\\" automatically\",\n\nIt would be nice to mention the reason is the lack of primary key.\n\n+ /* create no index, just notice that an appropriate index is\nnecessary for efficient, IVM */\n\nfor efficient -> for efficiency.\n\nCheers",
"msg_date": "Sat, 7 Aug 2021 00:52:24 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Nagata-san,\n\n\nI'm still reading the patch.\nI have additional comments.\n\n\n(1)\nIn v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, ivm member is added to IntoClause struct.\nI think it is necessary to modify _copyIntoClause() and _equalIntoClause() functions.\n\n\n(2)\nBy executing pg_dump with v23-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch,\nthe constraint which is automatically created during \"CREATE INCREMENTAL MATERIALIZED VIEW\" is also dumped.\nThis cause error during recovery as follows.\n\nivm=# create table t (c1 int, c2 int);\nCREATE TABLE\nivm=# create incremental materialized view ivm_t as select distinct c1 from t;\nNOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\nSELECT 0\n\nThen I executed pg_dump.\n\nIn the dump, the following SQLs appear.\n\nCREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n SELECT DISTINCT t.c1\n FROM public.t\n WITH NO DATA;\n\nALTER TABLE ONLY public.ivm_t\n ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n\nIf I execute psql with the result of pg_dump, following error occurs.\n\nERROR: ALTER action ADD CONSTRAINT cannot be performed on relation \"ivm_t\"\nDETAIL: This operation is not supported for materialized views.\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Mon, 6 Sep 2021 10:06:37 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Zhihong Yu,\n\nThank you for your suggestion!\n\nI am sorry for the late reply. I'll fix them and submit the\nupdated patch soon.\n\nOn Sat, 7 Aug 2021 00:52:24 -0700\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> > Hi,\n> > For v23-0007-Add-Incremental-View-Maintenance-support.patch :\n> >\n> > bq. In this implementation, AFTER triggers are used to collecting\n> > tuplestores\n> >\n> > 'to collecting' -> to collect\n> >\n> > bq. are contained in a old transition table.\n> >\n> > 'a old' -> an old\n> >\n> > bq. updates of more than one base tables\n> >\n> > one base tables -> one base table\n\nI'll fix them.\n\n> > bq. DISTINCT and tuple duplicates in views are supported\n> >\n> > Since distinct and duplicate have opposite meanings, it would be better to\n> > rephrase the above sentence.\n\nI'll rewrite it to \n\"Incrementally Maintainable Materialized Views (IMMV) can contain\nduplicated tuples. Also, DISTINCT clause is supported. \"\n\n> > bq. The value in__ivm_count__ is updated\n> >\n> > I searched the patch for in__ivm_count__ - there was no (second) match. I\n> > think there should be a space between in and underscore.\n\nYes, the space was missing.\n\n> > +static void CreateIvmTriggersOnBaseTables_recurse(Query *qry, Node *node,\n> > Oid matviewOid, Relids *relids, bool ex_lock);\n> >\n> > nit: long line. 
please wrap.\n\nOK.\n\n> >\n> > + if (rewritten->distinctClause)\n> > + rewritten->groupClause = transformDistinctClause(NULL,\n> > &rewritten->targetList, rewritten->sortClause, false);\n> > +\n> > + /* Add count(*) for counting distinct tuples in views */\n> > + if (rewritten->distinctClause)\n> >\n> > It seems the body of the two if statements can be combined into one.\n\nOk.\n\n> \n> + CreateIvmTriggersOnBaseTables_recurse(qry, (Node *)qry, matviewOid,\n> &relids, ex_lock);\n> \n> Looking at existing recursive functions, e.g.\n> \n> src/backend/executor/execPartition.c:find_matching_subplans_recurse(PartitionPruningData\n> *prunedata,\n> \n> the letters in the function name are all lower case. I think following the\n> convention would be nice.\n\nOk. I'll rename this to CreateIvmTriggersOnBaseTablesRecurse since I found\nDeadLockCheckRecurse, transformExprRecurse, and so on.\n\n> \n> + if (rte->rtekind == RTE_RELATION)\n> + {\n> + if (!bms_is_member(rte->relid, *relids))\n> \n> The conditions for the two if statements can be combined (saving some\n> indentation).\n\nYes. I'll fix.\n\n> + check_stack_depth();\n> +\n> + if (node == NULL)\n> + return false;\n> \n> It seems the node check can be placed ahead of the stack depth check.\n\nOK.\n\n> + * CreateindexOnIMMV\n> \n> CreateindexOnIMMV -> CreateIndexOnIMMV\n> \n> + (errmsg(\"could not create an index on materialized view\n> \\\"%s\\\" automatically\",\n> \n> It would be nice to mention the reason is the lack of primary key.\n> \n> + /* create no index, just notice that an appropriate index is\n> necessary for efficient, IVM */\n> \n> for efficient -> for efficiency.\n\nI'll fix them. Thanks.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 22 Sep 2021 18:23:21 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Takahashi-san,\n\nOn Thu, 5 Aug 2021 08:53:47 +0000\n\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n\n> Hi Nagata-san,\n> \n> \n> Thank you for your reply.\n> \n> > I'll investigate this more, but we may have to prohibit views on partitioned\n> > table and partitions.\n> \n> I think this restriction is strict.\n> This feature is useful when the base table is large and partitioning is also useful in such case.\n\nOne reason for this issue is the lack of triggers on partitioned tables or partitions that\nare not specified in the view definition. \n\nHowever, even if we create triggers recursively on the parents or children, we would still\nneed more consideration. This is because we would have to convert the tuple format of the\nmodified table to that of the table specified in the view in cases where the parent\nand some children have different formats. \n\nI think supporting partitioned tables can be left for the next release.\n\n> \n> I have several additional comments on the patch.\n> \n> \n> (1)\n> The following features are added to transition table.\n> - Prolong lifespan of transition table\n> - If table has row security policies, set them to the transition table\n> - Calculate pre-state of the table\n> \n> Are these features only for IVM?\n> If there are other useful case, they should be separated from IVM patch and\n> should be independent patch for transition table.\n\nMaybe. However, we don't have a good idea of use cases for them other than\nIVM for now...\n\n> \n> (2)\n> DEPENDENCY_IMMV (m) is added to deptype of pg_depend.\n> What is the difference compared with existing deptype such as DEPENDENCY_INTERNAL (i)?\n\nDEPENDENCY_IMMV was added to make it clear that a certain trigger is related to IMMV.\nWe drop the IVM trigger and its dependencies from an IMMV when REFRESH ... WITH NO DATA\nis executed. 
Without the new deptype, we may accidentally delete a dependency created\nfor a purpose other than the IVM trigger.\n\n> (3)\n> Converting from normal materialized view to IVM or from IVM to normal materialized view is not implemented yet.\n> Is it difficult?\n> \n> I think create/drop triggers and __ivm_ columns can achieve this feature.\n\nI think it is harder than you expected. When an IMMV is switched to a normal\nmaterialized view, we need to drop hidden columns (__ivm_count__ etc.), and in\nthe opposite case, we need to create them again. The former (IMMV->IVM) might be\neasier, but for the latter (IVM->IMMV) I wonder if we would need to re-create the IMMV.\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 22 Sep 2021 18:53:43 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Takahashi-san,\n\nOn Mon, 6 Sep 2021 10:06:37 +0000\n\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n\n> Hi Nagata-san,\n> \n> \n> I'm still reading the patch.\n> I have additional comments.\n\nThank you for your comments!\n\n> \n> (1)\n> In v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, ivm member is added to IntoClause struct.\n> I think it is necessary to modify _copyIntoClause() and _equalIntoClause() functions.\n\nOk. I'll fix _copyIntoClause() and _equalIntoClause() as well as _readIntoClause() and _outIntoClause().\n \n> (2)\n> By executing pg_dump with v23-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch,\n> the constraint which is automatically created during \"CREATE INCREMENTAL MATERIALIZED VIEW\" is also dumped.\n> This cause error during recovery as follows.\n> \n> ivm=# create table t (c1 int, c2 int);\n> CREATE TABLE\n> ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> SELECT 0\n> \n> Then I executed pg_dump.\n> \n> In the dump, the following SQLs appear.\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> SELECT DISTINCT t.c1\n> FROM public.t\n> WITH NO DATA;\n> \n> ALTER TABLE ONLY public.ivm_t\n> ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> \n> If I execute psql with the result of pg_dump, following error occurs.\n> \n> ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation \"ivm_t\"\n> DETAIL: This operation is not supported for materialized views.\n\nGood catch! It was my mistake creating unique constraints on IMMV in spite of\nwe cannot defined them via SQL. I'll fix it to use unique indexes instead of\nconstraints.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 22 Sep 2021 19:12:27 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi hackers,\n\nI attached the updated patch including fixes reported by \nZhihong Yu and Ryohei Takahashi.\n\nRegards,\nYugo Nagata\n\nOn Wed, 22 Sep 2021 19:12:27 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello Takahashi-san,\n> \n> On Mon, 6 Sep 2021 10:06:37 +0000\n> \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> \n> > Hi Nagata-san,\n> > \n> > \n> > I'm still reading the patch.\n> > I have additional comments.\n> \n> Thank you for your comments!\n> \n> > \n> > (1)\n> > In v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, ivm member is added to IntoClause struct.\n> > I think it is necessary to modify _copyIntoClause() and _equalIntoClause() functions.\n> \n> Ok. I'll fix _copyIntoClause() and _equalIntoClause() as well as _readIntoClause() and _outIntoClause().\n> \n> > (2)\n> > By executing pg_dump with v23-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch,\n> > the constraint which is automatically created during \"CREATE INCREMENTAL MATERIALIZED VIEW\" is also dumped.\n> > This cause error during recovery as follows.\n> > \n> > ivm=# create table t (c1 int, c2 int);\n> > CREATE TABLE\n> > ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> > NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> > SELECT 0\n> > \n> > Then I executed pg_dump.\n> > \n> > In the dump, the following SQLs appear.\n> > \n> > CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> > SELECT DISTINCT t.c1\n> > FROM public.t\n> > WITH NO DATA;\n> > \n> > ALTER TABLE ONLY public.ivm_t\n> > ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> > \n> > If I execute psql with the result of pg_dump, following error occurs.\n> > \n> > ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation \"ivm_t\"\n> > DETAIL: This operation is not supported for materialized views.\n> \n> Good catch! 
It was my mistake creating unique constraints on IMMV in spite of\n> we cannot defined them via SQL. I'll fix it to use unique indexes instead of\n> constraints.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 22 Sep 2021 19:17:12 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 22 Sep 2021 19:17:12 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi hackers,\n> \n> I attached the updated patch including fixes reported by \n> Zhihong Yu and Ryohei Takahashi.\n\nCfbot seems to fail to open the tar file, so I attached\npatch files instead of tar ball.\n\nRegards,\nYugo Nagata\n\n \n> On Wed, 22 Sep 2021 19:12:27 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hello Takahashi-san,\n> > \n> > On Mon, 6 Sep 2021 10:06:37 +0000\n> > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > \n> > > Hi Nagata-san,\n> > > \n> > > \n> > > I'm still reading the patch.\n> > > I have additional comments.\n> > \n> > Thank you for your comments!\n> > \n> > > \n> > > (1)\n> > > In v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, ivm member is added to IntoClause struct.\n> > > I think it is necessary to modify _copyIntoClause() and _equalIntoClause() functions.\n> > \n> > Ok. I'll fix _copyIntoClause() and _equalIntoClause() as well as _readIntoClause() and _outIntoClause().\n> > \n> > > (2)\n> > > By executing pg_dump with v23-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch,\n> > > the constraint which is automatically created during \"CREATE INCREMENTAL MATERIALIZED VIEW\" is also dumped.\n> > > This cause error during recovery as follows.\n> > > \n> > > ivm=# create table t (c1 int, c2 int);\n> > > CREATE TABLE\n> > > ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> > > NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> > > SELECT 0\n> > > \n> > > Then I executed pg_dump.\n> > > \n> > > In the dump, the following SQLs appear.\n> > > \n> > > CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> > > SELECT DISTINCT t.c1\n> > > FROM public.t\n> > > WITH NO DATA;\n> > > \n> > > ALTER TABLE ONLY public.ivm_t\n> > > ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> > > \n> > > If I execute psql with the result of pg_dump, following 
error occurs.\n> > > \n> > > ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation \"ivm_t\"\n> > > DETAIL: This operation is not supported for materialized views.\n> > \n> > Good catch! It was my mistake creating unique constraints on IMMV in spite of\n> > we cannot defined them via SQL. I'll fix it to use unique indexes instead of\n> > constraints.\n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Thu, 23 Sep 2021 04:57:30 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Takahashi-san,\n\nOn Wed, 22 Sep 2021 18:53:43 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hello Takahashi-san,\n> \n> On Thu, 5 Aug 2021 08:53:47 +0000\n> \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> \n> > Hi Nagata-san,\n> > \n> > \n> > Thank you for your reply.\n> > \n> > > I'll investigate this more, but we may have to prohibit views on partitioned\n> > > table and partitions.\n> > \n> > I think this restriction is strict.\n> > This feature is useful when the base table is large and partitioning is also useful in such case.\n> \n> One reason of this issue is the lack of triggers on partitioned tables or partitions that\n> are not specified in the view definition. \n> \n> However, even if we create triggers recursively on the parents or children, we would still\n> need more consideration. This is because we will have to convert the format of tuple of\n> modified table to the format of the table specified in the view for cases that the parent\n> and some children have different format. \n> \n> I think supporting partitioned tables can be left for the next release.\n> \n> > \n> > I have several additional comments on the patch.\n> > \n> > \n> > (1)\n> > The following features are added to transition table.\n> > - Prolong lifespan of transition table\n> > - If table has row security policies, set them to the transition table\n> > - Calculate pre-state of the table\n> > \n> > Are these features only for IVM?\n> > If there are other useful case, they should be separated from IVM patch and\n> > should be independent patch for transition table.\n> \n> Maybe. 
However, we don't have good idea about use cases other than IVM of\n> them for now...\n> \n> > \n> > (2)\n> > DEPENDENCY_IMMV (m) is added to deptype of pg_depend.\n> > What is the difference compared with existing deptype such as DEPENDENCY_INTERNAL (i)?\n> \n> DEPENDENCY_IMMV was added to clear that a certain trigger is related to IMMV.\n> We dropped the IVM trigger and its dependencies from IMMV when REFRESH ... WITH NO DATA\n> is executed. Without the new deptype, we may accidentally delete a dependency created\n> with an intention other than the IVM trigger.\n> \n> > (3)\n> > Converting from normal materialized view to IVM or from IVM to normal materialized view is not implemented yet.\n> > Is it difficult?\n> > \n> > I think create/drop triggers and __ivm_ columns can achieve this feature.\n> \n> I think it is harder than you expected. When an IMMV is switched to a normal\n> materialized view, we needs to drop hidden columns (__ivm_count__ etc.), and in\n> the opposite case, we need to create them again. The former (IMMV->IVM) might be\n> easer, but for the latter (IVM->IMMV) I wonder we would need to re-create IMMV.\n\nI am sorry but I found a mistake in the above description.\n\"IMMV->IVM\" and \"IVM->IMMV\" were wrong. I should have used \"IMMV->MV\" and \"MV->IMMV\",\nwhere MV means a normal materialized view.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:37:55 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi hackers,\n\nI attached the rebased patch set.\n\nRegards,\nYugo Nagata\n\nOn Thu, 23 Sep 2021 04:57:30 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Wed, 22 Sep 2021 19:17:12 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi hackers,\n> > \n> > I attached the updated patch including fixes reported by \n> > Zhihong Yu and Ryohei Takahashi.\n> \n> Cfbot seems to fail to open the tar file, so I attached\n> patch files instead of tar ball.\n> \n> Regards,\n> Yugo Nagata\n> \n> \n> > On Wed, 22 Sep 2021 19:12:27 +0900\n> > Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > \n> > > Hello Takahashi-san,\n> > > \n> > > On Mon, 6 Sep 2021 10:06:37 +0000\n> > > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > > \n> > > > Hi Nagata-san,\n> > > > \n> > > > \n> > > > I'm still reading the patch.\n> > > > I have additional comments.\n> > > \n> > > Thank you for your comments!\n> > > \n> > > > \n> > > > (1)\n> > > > In v23-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch, ivm member is added to IntoClause struct.\n> > > > I think it is necessary to modify _copyIntoClause() and _equalIntoClause() functions.\n> > > \n> > > Ok. 
I'll fix _copyIntoClause() and _equalIntoClause() as well as _readIntoClause() and _outIntoClause().\n> > > \n> > > > (2)\n> > > > By executing pg_dump with v23-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch,\n> > > > the constraint which is automatically created during \"CREATE INCREMENTAL MATERIALIZED VIEW\" is also dumped.\n> > > > This cause error during recovery as follows.\n> > > > \n> > > > ivm=# create table t (c1 int, c2 int);\n> > > > CREATE TABLE\n> > > > ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> > > > NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> > > > SELECT 0\n> > > > \n> > > > Then I executed pg_dump.\n> > > > \n> > > > In the dump, the following SQLs appear.\n> > > > \n> > > > CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> > > > SELECT DISTINCT t.c1\n> > > > FROM public.t\n> > > > WITH NO DATA;\n> > > > \n> > > > ALTER TABLE ONLY public.ivm_t\n> > > > ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> > > > \n> > > > If I execute psql with the result of pg_dump, following error occurs.\n> > > > \n> > > > ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation \"ivm_t\"\n> > > > DETAIL: This operation is not supported for materialized views.\n> > > \n> > > Good catch! It was my mistake creating unique constraints on IMMV in spite of\n> > > we cannot defined them via SQL. I'll fix it to use unique indexes instead of\n> > > constraints.\n> > > \n> > > Regards,\n> > > Yugo Nagata\n> > > \n> > > -- \n> > > Yugo NAGATA <nagata@sraoss.co.jp>\n> > > \n> > > \n> > \n> > \n> > -- \n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 29 Oct 2021 18:16:28 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Nagata-san,\n\n\nSorry for late reply.\n\n\n> However, even if we create triggers recursively on the parents or children, we would still\n> need more consideration. This is because we will have to convert the format of tuple of\n> modified table to the format of the table specified in the view for cases that the parent\n> and some children have different format.\n> \n> I think supporting partitioned tables can be left for the next release.\n\nOK. I understand.\nIn the v24-patch, creating IVM on partions or partition table is prohibited.\nIt is OK but it should be documented.\n\nPerhaps, the following statement describe this.\nIf so, I think the definition of \"simple base table\" is ambiguous for some users.\n\n+ IMMVs must be based on simple base tables. It's not supported to\n+ create them on top of views or materialized views.\n\n\n> DEPENDENCY_IMMV was added to clear that a certain trigger is related to IMMV.\n> We dropped the IVM trigger and its dependencies from IMMV when REFRESH ... WITH NO DATA\n> is executed. Without the new deptype, we may accidentally delete a dependency created\n> with an intention other than the IVM trigger.\n\nOK. I understand.\n\n> I think it is harder than you expected. When an IMMV is switched to a normal\n> materialized view, we needs to drop hidden columns (__ivm_count__ etc.), and in\n> the opposite case, we need to create them again. The former (IMMV->IVM) might be\n> easer, but for the latter (IVM->IMMV) I wonder we would need to re-create\n> IMMV.\n\nOK. I understand.\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Wed, 24 Nov 2021 04:27:13 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Nagata-san,\n\n\n> Ok. I'll fix _copyIntoClause() and _equalIntoClause() as well as _readIntoClause()\n> and _outIntoClause().\n\nOK.\n\n> > ivm=# create table t (c1 int, c2 int);\n> > CREATE TABLE\n> > ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> > NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> > SELECT 0\n> >\n> > Then I executed pg_dump.\n> >\n> > In the dump, the following SQLs appear.\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> > SELECT DISTINCT t.c1\n> > FROM public.t\n> > WITH NO DATA;\n> >\n> > ALTER TABLE ONLY public.ivm_t\n> > ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> >\n> > If I execute psql with the result of pg_dump, following error occurs.\n> >\n> > ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation\n> \"ivm_t\"\n> > DETAIL: This operation is not supported for materialized views.\n> \n> Good catch! It was my mistake creating unique constraints on IMMV in spite of\n> we cannot defined them via SQL. I'll fix it to use unique indexes instead of\n> constraints.\n\nI checked the same procedure on v24 patch.\nBut following error occurs instead of the original error.\n\nERROR: relation \"ivm_t_index\" already exists\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Wed, 24 Nov 2021 04:31:25 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Takahashi-san,\n\nOn Wed, 24 Nov 2021 04:27:13 +0000\n\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n\n> Hi Nagata-san,\n> \n> \n> Sorry for late reply.\n> \n> \n> > However, even if we create triggers recursively on the parents or children, we would still\n> > need more consideration. This is because we will have to convert the format of tuple of\n> > modified table to the format of the table specified in the view for cases that the parent\n> > and some children have different format.\n> > \n> > I think supporting partitioned tables can be left for the next release.\n> \n> OK. I understand.\n> In the v24-patch, creating IVM on partions or partition table is prohibited.\n> It is OK but it should be documented.\n> \n> Perhaps, the following statement describe this.\n> If so, I think the definition of \"simple base table\" is ambiguous for some users.\n> \n> + IMMVs must be based on simple base tables. It's not supported to\n> + create them on top of views or materialized views.\n\nOh, I forgot to fix the documentation. I'll fix it.\n\nRagards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 25 Nov 2021 15:47:17 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 24 Nov 2021 04:31:25 +0000\n\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n\n> > > ivm=# create table t (c1 int, c2 int);\n> > > CREATE TABLE\n> > > ivm=# create incremental materialized view ivm_t as select distinct c1 from t;\n> > > NOTICE: created index \"ivm_t_index\" on materialized view \"ivm_t\"\n> > > SELECT 0\n> > >\n> > > Then I executed pg_dump.\n> > >\n> > > In the dump, the following SQLs appear.\n> > >\n> > > CREATE INCREMENTAL MATERIALIZED VIEW public.ivm_t AS\n> > > SELECT DISTINCT t.c1\n> > > FROM public.t\n> > > WITH NO DATA;\n> > >\n> > > ALTER TABLE ONLY public.ivm_t\n> > > ADD CONSTRAINT ivm_t_index UNIQUE (c1);\n> > >\n> > > If I execute psql with the result of pg_dump, following error occurs.\n> > >\n> > > ERROR: ALTER action ADD CONSTRAINT cannot be performed on relation\n> > \"ivm_t\"\n> > > DETAIL: This operation is not supported for materialized views.\n> > \n> > Good catch! It was my mistake creating unique constraints on IMMV in spite of\n> > we cannot defined them via SQL. I'll fix it to use unique indexes instead of\n> > constraints.\n> \n> I checked the same procedure on v24 patch.\n> But following error occurs instead of the original error.\n> \n> ERROR: relation \"ivm_t_index\" already exists\n\nThank you for pointing out it!\n\nHmmm, an index is created when IMMV is defined, so CREAE INDEX called\nafter this would fail... Maybe, we should not create any index automatically\nif IMMV is created WITH NO DATA.\n\nI'll fix it after some investigation.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 25 Nov 2021 16:37:10 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi hackers,\n\nThis is a response to a comment in \"Commitfest 2021-11 Patch Triage - Part 1\" [1].\n\n> 2138: Incremental Materialized View Maintenance\n> ===============================================\n> There seems to be concensus on the thread that this is a feature that we want,\n> and after initial design discussions there seems to be no disagreements with\n> the approach taken. The patch was marked ready for committer almost a year\n> ago, but have since been needs review (which seems correct). The size of the\n> patchset and the length of the thread make it hard to gauge just far away it\n> is, maybe the author or a review can summarize the current state and outline\n> what is left for it to be committable.\n\n[1] https://www.postgresql.org/message-id/6EDAAF93-1663-41D0-9148-76739104943E%40yesql.se\n\nI'll describe recent discussions and current status of this thread. \n\n* Recent Discussions and Current status\n\n1.\nPreviously, we proposed a patchset that supports outer-joins, some sub-queries\nand CTEs. However, aiming to reduce the size of the patchset, I proposed to omit\nthese features from the first version of the patch in my post at 2021-08-02 [2]. \n\nCurrently, we are proposing Incremental View Maintenance feature for PostgreSQL 15\nthat supports following queries in the view definition query.\n\n - inner joins including self-join\n - DISTINCT and views with tuple duplicates\n - some built-in aggregate functions (count, sum, agv, min, and max)\n\nIs it OK? Although there has been no opposite opinion, we want to confirm it.\n\n[2] https://www.postgresql.org/message-id/20210802152834.ecbaba6e17d1957547c3a55d%40sraoss.co.jp\n\n2.\nRecently, There was a suggestion that we should support partitioned tables from\nRyohei Takahashi, but I decided to not support it in the first release of IVM. 
\nTakahashi-san agreed with it, and the documentation will be fixed soon [3].\n\n[3] https://www.postgresql.org/message-id/20211125154717.777e9d35ddde5f2e0d5d8355%40sraoss.co.jp\n\n3.\nTakahashi-san pointed out that restoring pg_dump results causes an error. I am fixing\nit now [4].\n\n[4] https://www.postgresql.org/message-id/20211125163710.2f32ae3d4be5d5f9ade020b6%40sraoss.co.jp\n\n\nThe rest is a summary of our proposal of the IVM feature, its design, and past discussions.\n\n---------------------------------------------------------------------------------------\n* Features \n\nIncremental View Maintenance (IVM) is a way to make materialized views\nup-to-date by computing only incremental changes and applying them to\nviews. IVM is more efficient than REFRESH MATERIALIZED VIEW when\nonly small parts of the view are changed.\n\nThis patchset provides a feature that allows materialized views to be\nupdated automatically and incrementally just after an underlying table\nis modified. \n\nYou can create an incrementally maintainable materialized view (IMMV)\nby using the CREATE INCREMENTAL MATERIALIZED VIEW command.\n\nThe following are supported in view definition queries:\n- SELECT ... FROM ... 
WHERE ..., joins (inner joins, self-joins)\n- some built-in aggregate functions (count, sum, avg, min, max)\n- GROUP BY clause\n- DISTINCT clause\n\nViews can contain multiple tuples with the same content (duplicate tuples).\n\nThe following are not supported in a view definition:\n- Outer joins\n- Aggregates other than the above, window functions, HAVING\n- Sub-queries, CTEs\n- Set operations (UNION, INTERSECT, EXCEPT)\n- DISTINCT ON, ORDER BY, LIMIT, OFFSET\n\nAlso, a view definition query cannot contain other views, materialized views,\nforeign tables, partitioned tables, partitions, VALUES, non-immutable functions,\nsystem columns, or expressions that contain aggregates.\n\n---------------------------------------------------------------------------------------\n* Design\n\nAn IMMV is maintained using statement-level AFTER triggers. When an IMMV is\ncreated, triggers are automatically created on all base tables contained in the\nview definition query. \n\nWhen a table is modified, the changes that occurred in the table are extracted\nas transition tables in the AFTER triggers. Then, the changes that will occur in\nthe view are calculated by a rewritten view definition query in which the modified table\nis replaced with the transition table. For example, if the view is defined as\n\"SELECT * FROM R, S\", and tuples inserted into R are stored in a transition table\ndR, the tuples that will be inserted into the view are calculated as the result\nof \"SELECT * FROM dR, S\".\n\n** Multiple Tables Modification\n\nMultiple tables can be modified in a statement when using triggers, foreign key\nconstraints, or modifying CTEs. When multiple tables are modified, we need\nthe state of the tables before the modification. 
For example, when some tuples,\ndR and dS, are inserted into R and S respectively, the tuples that will be\ninserted into the view are calculated by the following two queries:\n\n  \"SELECT * FROM dR, S_pre\"\n  \"SELECT * FROM R, dS\"\n\nwhere S_pre is the table before the modification, and R is the current state of the\ntable, that is, after the modification. This pre-update state of a table\nis calculated by filtering inserted tuples using the cmin/xmin system columns, \nand appending deleted tuples which are contained in the old transition table.\nThis is implemented in get_prestate_rte(). \n\nTransition tables for each modification are collected in each AFTER trigger\nfunction call. Then, the view maintenance is performed in the last call of\nthe trigger. \n\nIn the original PostgreSQL, tuplestores of transition tables are freed at the\nend of each nested query. However, their lifespan needs to be prolonged to\nthe end of the outermost query in order to maintain the view in the last AFTER\ntrigger. For this purpose, SetTransitionTablePreserved is added in trigger.c. \n\n** Duplicate Tuples\n\nWhen calculating the changes that will occur in the view (= delta tables),\nthe multiplicity of tuples is calculated by using count(*). \n\nWhen deleting tuples from the view, the tuples to be deleted are identified by\njoining the delta table with the view, and the tuples are deleted up to the\nspecified multiplicity after being numbered using the row_number() function. \nThis is implemented in apply_old_delta().\n\nWhen inserting tuples into the view, tuples are duplicated to the specified\nmultiplicity using the generate_series() function. This is implemented in\napply_new_delta().\n\n** DISTINCT clause\n\nWhen DISTINCT is used, the view has a hidden column __ivm_count__ that\nstores the multiplicity of tuples. When tuples are deleted from or inserted into\nthe view, the value of the __ivm_count__ column is decreased or increased by the\nspecified multiplicity. 
Eventually, when the value becomes zero, the\ncorresponding tuple is deleted from the view. This is implemented in\napply_old_delta_with_count() and apply_new_delta_with_count().\n\n** Aggregates\n\nThe built-in count, sum, avg, min, and max are supported. Whether a given\naggregate function can be used or not is checked by using its OID in\ncheck_aggregate_supports_ivm().\n\nWhen creating a materialized view containing aggregates, in addition\nto __ivm_count__, one or more hidden columns for each aggregate are\nadded to the target list. For example, columns for storing sum(x) and\ncount(x) are added if we have avg(x). When the view is maintained,\naggregated values are updated using these hidden columns, and the hidden\ncolumns are updated at the same time.\n\nThe maintenance of an aggregated view is performed in\napply_old_delta_with_count() and apply_new_delta_with_count(). The SET\nclauses for updating columns are generated by append_set_clause_*(). \n\nIf the view has min(x) or max(x) and the minimum or maximum value is\ndeleted from a table, we need to update the value to the new min/max\nrecalculated from the tables rather than by incremental computation. This\nis performed in recalc_and_set_values().\n\n---------------------------------------------------------------------------------------\n* Discussion\n\n** Aggregate support\n\nThere were a few suggestions that general aggregate functions should be\nsupported [5][6], which may be possible by extending the pg_aggregate catalog.\nHowever, we decided to leave supporting general aggregates to future work [7]\nbecause it would need substantial work and make the patch more complex and\nbigger. 
There has been no opposing opinion on this.\n\n[5] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n[6] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n[7] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n\n** Hidden columns\n\nColumns starting with \"__ivm_\" are hidden columns that don't appear when a\nview is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\nfixed. There was a proposal to enable hidden columns by adding a new flag to\npg_attribute [8], but this thread is no longer active, so we decided to check\nthe hidden column by its name [9].\n\n[8] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n[9] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n\n** Concurrent Transactions\n\nWhen the view definition has more than one table, we acquire an exclusive\nlock before the view maintenance in order to avoid inconsistent results.\nThis behavior was explained in [10]. The lock was improved to use a weaker lock\nwhen the view has only one table, based on a suggestion from Konstantin Knizhnik [11].\n\n[10] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n[11] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n\n** Automatic Index Creation\n\nWhen a view is created, a unique index is automatically created if\npossible, that is, if the view definition query has a GROUP BY or\nDISTINCT, or if the view contains all primary key attributes of\nits base tables in the target list. This is necessary for efficient\nview maintenance. 
This feature is based on a suggestion from\nKonstantin Knizhnik [12].\n\n[12] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n\n** Others\n\nThere are some other changes in core for the IVM implementation. \nThere have been no opposing opinions on any of them so far.\n\n- syntax \n\nThe command to create an incrementally maintainable materialized\nview (IMMV) is \"CREATE INCREMENTAL MATERIALIZED VIEW\". The new\nkeyword \"INCREMENTAL\" is added.\n\n- pg_class\n\nA new attribute \"relisivm\" is added to pg_class to indicate\nthat the relation is an IMMV.\n\n- deptype\n\nDEPENDENCY_IMMV (m) was added to pg_depend as a new deptype. This is necessary\nto make it clear that a certain trigger is related to an IMMV, especially since we drop\nIVM triggers from the view when REFRESH ... WITH NO DATA is executed [13]. \n\n[13] https://www.postgresql.org/message-id/20210922185343.548883e81b8baef14a0193c5%40sraoss.co.jp\n\n---------------------------------------------------------------------------------------\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 29 Nov 2021 14:48:26 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nOn Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> On Wed, 24 Nov 2021 04:31:25 +0000\n> \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> \n> > \n> > I checked the same procedure on v24 patch.\n> > But following error occurs instead of the original error.\n> > \n> > ERROR: relation \"ivm_t_index\" already exists\n> \n> Thank you for pointing out it!\n> \n> Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> after this would fail... Maybe, we should not create any index automatically\n> if IMMV is created WITH NO DATA.\n> \n> I'll fix it after some investigation.\n\nAre you still investigating on that problem? Also, the patchset doesn't apply\nanymore:\nhttp://cfbot.cputube.org/patch_36_2138.log\n=== Applying patches on top of PostgreSQL commit ID a18b6d2dc288dfa6e7905ede1d4462edd6a8af47 ===\n[...]\n=== applying patch ./v24-0005-Add-Incremental-View-Maintenance-support-to-pg_d.patch\npatching file src/bin/pg_dump/pg_dump.c\nHunk #1 FAILED at 6393.\nHunk #2 FAILED at 6596.\nHunk #3 FAILED at 6719.\nHunk #4 FAILED at 6796.\nHunk #5 succeeded at 14953 (offset -915 lines).\n4 out of 5 hunks FAILED -- saving rejects to file src/bin/pg_dump/pg_dump.c.rej\n\nThere isn't any answer to your following email summarizing the feature yet, so\nI'm not sure what should be the status of this patch, as there's no ideal\ncategory for that. For now I'll change the patch to Waiting on Author on the\ncf app, feel free to switch it back to Needs Review if you think it's more\nsuitable, at least for the design discussion need.\n\n\n",
"msg_date": "Thu, 13 Jan 2022 18:23:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nOn Thu, 13 Jan 2022 18:23:42 +0800\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n> \n> On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > On Wed, 24 Nov 2021 04:31:25 +0000\n> > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > \n> > > \n> > > I checked the same procedure on v24 patch.\n> > > But following error occurs instead of the original error.\n> > > \n> > > ERROR: relation \"ivm_t_index\" already exists\n> > \n> > Thank you for pointing out it!\n> > \n> > Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> > after this would fail... Maybe, we should not create any index automatically\n> > if IMMV is created WITH NO DATA.\n> > \n> > I'll fix it after some investigation.\n> \n> Are you still investigating on that problem? Also, the patchset doesn't apply\n> anymore:\n\nI attached the updated and rebased patch set.\n\nI fixed to not create a unique index when an IMMV is created WITH NO DATA.\nInstead, the index is created by REFRESH WITH DATA only when the same one\nis not created yet.\n\nAlso, I fixed the documentation to describe that foreign tables and partitioned\ntables are not supported according with Takahashi-san's suggestion. \n \n> There isn't any answer to your following email summarizing the feature yet, so\n> I'm not sure what should be the status of this patch, as there's no ideal\n> category for that. For now I'll change the patch to Waiting on Author on the\n> cf app, feel free to switch it back to Needs Review if you think it's more\n> suitable, at least for the design discussion need.\n\nI changed the status to Needs Review.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 4 Feb 2022 01:25:48 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 8:28 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n>\n> On Thu, 13 Jan 2022 18:23:42 +0800\n> Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > Hi,\n> >\n> > On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > > On Wed, 24 Nov 2021 04:31:25 +0000\n> > > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > >\n> > > >\n> > > > I checked the same procedure on v24 patch.\n> > > > But following error occurs instead of the original error.\n> > > >\n> > > > ERROR: relation \"ivm_t_index\" already exists\n> > >\n> > > Thank you for pointing out it!\n> > >\n> > > Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> > > after this would fail... Maybe, we should not create any index\n> automatically\n> > > if IMMV is created WITH NO DATA.\n> > >\n> > > I'll fix it after some investigation.\n> >\n> > Are you still investigating on that problem? Also, the patchset doesn't\n> apply\n> > anymore:\n>\n> I attached the updated and rebased patch set.\n>\n> I fixed to not create a unique index when an IMMV is created WITH NO DATA.\n> Instead, the index is created by REFRESH WITH DATA only when the same one\n> is not created yet.\n>\n> Also, I fixed the documentation to describe that foreign tables and\n> partitioned\n> tables are not supported according with Takahashi-san's suggestion.\n>\n> > There isn't any answer to your following email summarizing the feature\n> yet, so\n> > I'm not sure what should be the status of this patch, as there's no ideal\n> > category for that. For now I'll change the patch to Waiting on Author\n> on the\n> > cf app, feel free to switch it back to Needs Review if you think it's\n> more\n> > suitable, at least for the design discussion need.\n>\n> I changed the status to Needs Review.\n>\n>\n> Hi,\nDid you intend to attach updated patch ?\n\nI don't seem to find any.\n\nFYI\n\nOn Thu, Feb 3, 2022 at 8:28 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:Hi,\n\nOn Thu, 13 Jan 2022 18:23:42 +0800\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n> \n> On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > On Wed, 24 Nov 2021 04:31:25 +0000\n> > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > \n> > > \n> > > I checked the same procedure on v24 patch.\n> > > But following error occurs instead of the original error.\n> > > \n> > > ERROR: relation \"ivm_t_index\" already exists\n> > \n> > Thank you for pointing out it!\n> > \n> > Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> > after this would fail... Maybe, we should not create any index automatically\n> > if IMMV is created WITH NO DATA.\n> > \n> > I'll fix it after some investigation.\n> \n> Are you still investigating on that problem? Also, the patchset doesn't apply\n> anymore:\n\nI attached the updated and rebased patch set.\n\nI fixed to not create a unique index when an IMMV is created WITH NO DATA.\nInstead, the index is created by REFRESH WITH DATA only when the same one\nis not created yet.\n\nAlso, I fixed the documentation to describe that foreign tables and partitioned\ntables are not supported according with Takahashi-san's suggestion. \n\n> There isn't any answer to your following email summarizing the feature yet, so\n> I'm not sure what should be the status of this patch, as there's no ideal\n> category for that. For now I'll change the patch to Waiting on Author on the\n> cf app, feel free to switch it back to Needs Review if you think it's more\n> suitable, at least for the design discussion need.\n\nI changed the status to Needs Review.\nHi,Did you intend to attach updated patch ?I don't seem to find any.FYI",
"msg_date": "Thu, 3 Feb 2022 08:48:00 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, 3 Feb 2022 08:48:00 -0800\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> On Thu, Feb 3, 2022 at 8:28 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi,\n> >\n> > On Thu, 13 Jan 2022 18:23:42 +0800\n> > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > Hi,\n> > >\n> > > On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > > > On Wed, 24 Nov 2021 04:31:25 +0000\n> > > > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > > >\n> > > > >\n> > > > > I checked the same procedure on v24 patch.\n> > > > > But following error occurs instead of the original error.\n> > > > >\n> > > > > ERROR: relation \"ivm_t_index\" already exists\n> > > >\n> > > > Thank you for pointing out it!\n> > > >\n> > > > Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> > > > after this would fail... Maybe, we should not create any index\n> > automatically\n> > > > if IMMV is created WITH NO DATA.\n> > > >\n> > > > I'll fix it after some investigation.\n> > >\n> > > Are you still investigating on that problem? Also, the patchset doesn't\n> > apply\n> > > anymore:\n> >\n> > I attached the updated and rebased patch set.\n> >\n> > I fixed to not create a unique index when an IMMV is created WITH NO DATA.\n> > Instead, the index is created by REFRESH WITH DATA only when the same one\n> > is not created yet.\n> >\n> > Also, I fixed the documentation to describe that foreign tables and\n> > partitioned\n> > tables are not supported according with Takahashi-san's suggestion.\n> >\n> > > There isn't any answer to your following email summarizing the feature\n> > yet, so\n> > > I'm not sure what should be the status of this patch, as there's no ideal\n> > > category for that. For now I'll change the patch to Waiting on Author\n> > on the\n> > > cf app, feel free to switch it back to Needs Review if you think it's\n> > more\n> > > suitable, at least for the design discussion need.\n> >\n> > I changed the status to Needs Review.\n> >\n> >\n> > Hi,\n> Did you intend to attach updated patch ?\n> \n> I don't seem to find any.\n\nOops, I attached. Thanks!\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 4 Feb 2022 01:48:06 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 8:50 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Thu, 3 Feb 2022 08:48:00 -0800\n> Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> > On Thu, Feb 3, 2022 at 8:28 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > > Hi,\n> > >\n> > > On Thu, 13 Jan 2022 18:23:42 +0800\n> > > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > > > > On Wed, 24 Nov 2021 04:31:25 +0000\n> > > > > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > > > >\n> > > > > >\n> > > > > > I checked the same procedure on v24 patch.\n> > > > > > But following error occurs instead of the original error.\n> > > > > >\n> > > > > > ERROR: relation \"ivm_t_index\" already exists\n> > > > >\n> > > > > Thank you for pointing out it!\n> > > > >\n> > > > > Hmmm, an index is created when IMMV is defined, so CREAE INDEX\n> called\n> > > > > after this would fail... Maybe, we should not create any index\n> > > automatically\n> > > > > if IMMV is created WITH NO DATA.\n> > > > >\n> > > > > I'll fix it after some investigation.\n> > > >\n> > > > Are you still investigating on that problem? Also, the patchset\n> doesn't\n> > > apply\n> > > > anymore:\n> > >\n> > > I attached the updated and rebased patch set.\n> > >\n> > > I fixed to not create a unique index when an IMMV is created WITH NO\n> DATA.\n> > > Instead, the index is created by REFRESH WITH DATA only when the same\n> one\n> > > is not created yet.\n> > >\n> > > Also, I fixed the documentation to describe that foreign tables and\n> > > partitioned\n> > > tables are not supported according with Takahashi-san's suggestion.\n> > >\n> > > > There isn't any answer to your following email summarizing the\n> feature\n> > > yet, so\n> > > > I'm not sure what should be the status of this patch, as there's no\n> ideal\n> > > > category for that. For now I'll change the patch to Waiting on\n> Author\n> > > on the\n> > > > cf app, feel free to switch it back to Needs Review if you think it's\n> > > more\n> > > > suitable, at least for the design discussion need.\n> > >\n> > > I changed the status to Needs Review.\n> > >\n> > >\n> > > Hi,\n> > Did you intend to attach updated patch ?\n> >\n> > I don't seem to find any.\n>\n> Oops, I attached. Thanks!\n>\n> Hi,\nFor CreateIndexOnIMMV():\n\n+ ereport(NOTICE,\n+ (errmsg(\"could not create an index on materialized view\n\\\"%s\\\" automatically\",\n...\n+ return;\n+ }\n\nShould the return type be changed to bool so that the caller knows whether\nthe index creation succeeds ?\nIf index creation is unsuccessful, should the call\nto CreateIvmTriggersOnBaseTables() be skipped ?\n\nFor check_ivm_restriction_walker():\n\n+ break;\n+ expression_tree_walker(node, check_ivm_restriction_walker,\nNULL);\n+ break;\n\nSomething is missing between the break and expression_tree_walker().\n\nCheers\n\nOn Thu, Feb 3, 2022 at 8:50 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:On Thu, 3 Feb 2022 08:48:00 -0800\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> On Thu, Feb 3, 2022 at 8:28 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi,\n> >\n> > On Thu, 13 Jan 2022 18:23:42 +0800\n> > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > Hi,\n> > >\n> > > On Thu, Nov 25, 2021 at 04:37:10PM +0900, Yugo NAGATA wrote:\n> > > > On Wed, 24 Nov 2021 04:31:25 +0000\n> > > > \"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com> wrote:\n> > > >\n> > > > >\n> > > > > I checked the same procedure on v24 patch.\n> > > > > But following error occurs instead of the original error.\n> > > > >\n> > > > > ERROR: relation \"ivm_t_index\" already exists\n> > > >\n> > > > Thank you for pointing out it!\n> > > >\n> > > > Hmmm, an index is created when IMMV is defined, so CREAE INDEX called\n> > > > after this would fail... Maybe, we should not create any index\n> > automatically\n> > > > if IMMV is created WITH NO DATA.\n> > > >\n> > > > I'll fix it after some investigation.\n> > >\n> > > Are you still investigating on that problem? Also, the patchset doesn't\n> > apply\n> > > anymore:\n> >\n> > I attached the updated and rebased patch set.\n> >\n> > I fixed to not create a unique index when an IMMV is created WITH NO DATA.\n> > Instead, the index is created by REFRESH WITH DATA only when the same one\n> > is not created yet.\n> >\n> > Also, I fixed the documentation to describe that foreign tables and\n> > partitioned\n> > tables are not supported according with Takahashi-san's suggestion.\n> >\n> > > There isn't any answer to your following email summarizing the feature\n> > yet, so\n> > > I'm not sure what should be the status of this patch, as there's no ideal\n> > > category for that. For now I'll change the patch to Waiting on Author\n> > on the\n> > > cf app, feel free to switch it back to Needs Review if you think it's\n> > more\n> > > suitable, at least for the design discussion need.\n> >\n> > I changed the status to Needs Review.\n> >\n> >\n> > Hi,\n> Did you intend to attach updated patch ?\n> \n> I don't seem to find any.\n\nOops, I attached. Thanks!Hi,For CreateIndexOnIMMV():+ ereport(NOTICE,+ (errmsg(\"could not create an index on materialized view \\\"%s\\\" automatically\",...+ return;+ } Should the return type be changed to bool so that the caller knows whether the index creation succeeds ?If index creation is unsuccessful, should the call to CreateIvmTriggersOnBaseTables() be skipped ?For check_ivm_restriction_walker():+ break;+ expression_tree_walker(node, check_ivm_restriction_walker, NULL);+ break;Something is missing between the break and expression_tree_walker().Cheers",
"msg_date": "Thu, 3 Feb 2022 09:51:52 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, Nagata-san\nI am very interested in IMMV and read your patch but have some comments in v25-0007-Add-Incremental-View-Maintenance-support.patch and want to discuss with you.\n\n+\t\t/* For IMMV, we need to rewrite matview query */\n+\t\tquery = rewriteQueryForIMMV(query, into->colNames);\n+\t\tquery_immv = copyObject(query);\n\n/* Create triggers on incremental maintainable materialized view */\n+\t\t\t\tAssert(query_immv != NULL);\n+\t\t\t\tCreateIvmTriggersOnBaseTables(query_immv, matviewOid, true);\n1. Do we need copy query?Is it okay that CreateIvmTriggersOnBaseTables directly use (Query *) into->viewQuery instead of query_immv like CreateIndexOnIMMV? It seems only planner may change query, but it shouldn't affect us finding the correct base table in CreateIvmTriggersOnBaseTables .\n\n+void\n+CreateIndexOnIMMV(Query *query, Relation matviewRel)\n+{\n+\tQuery *qry = (Query *) copyObject(query);\n2. Also, is it okay to not copy query in CreateIndexOnIMMV? It seems we only read query in CreateIndexOnIMMV.\n\nRegards,\n\n\n\n\nHi, Nagata-sanI am very interested in IMMV and read your patch but have some comments in v25-0007-Add-Incremental-View-Maintenance-support.patch and want to discuss with you.+\t\t/* For IMMV, we need to rewrite matview query */\n+\t\tquery = rewriteQueryForIMMV(query, into->colNames);\n+\t\tquery_immv = copyObject(query);/* Create triggers on incremental maintainable materialized view */\n+\t\t\t\tAssert(query_immv != NULL);\n+\t\t\t\tCreateIvmTriggersOnBaseTables(query_immv, matviewOid, true);1. Do we need copy query?Is it okay that CreateIvmTriggersOnBaseTables directly use (Query *) into->viewQuery instead of query_immv like CreateIndexOnIMMV? It seems only planner may change query, but it shouldn't affect us finding the correct base table in CreateIvmTriggersOnBaseTables .+void+CreateIndexOnIMMV(Query *query, Relation matviewRel)+{+ Query *qry = (Query *) copyObject(query);2. Also, is it okay to not copy query in CreateIndexOnIMMV? It seems we only read query in CreateIndexOnIMMV.Regards,",
"msg_date": "Wed, 16 Feb 2022 22:34:18 +0800",
"msg_from": "huyajun <hu_yajun@qq.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Wed, 16 Feb 2022 22:34:18 +0800\nhuyajun <hu_yajun@qq.com> wrote:\n\n> Hi, Nagata-san\n> I am very interested in IMMV and read your patch but have some comments in v25-0007-Add-Incremental-View-Maintenance-support.patch and want to discuss with you.\n\nThank you for your review!\n\n> \n> +\t\t/* For IMMV, we need to rewrite matview query */\n> +\t\tquery = rewriteQueryForIMMV(query, into->colNames);\n> +\t\tquery_immv = copyObject(query);\n> \n> /* Create triggers on incremental maintainable materialized view */\n> +\t\t\t\tAssert(query_immv != NULL);\n> +\t\t\t\tCreateIvmTriggersOnBaseTables(query_immv, matviewOid, true);\n> 1. Do we need copy query?Is it okay that CreateIvmTriggersOnBaseTables directly use (Query *) into->viewQuery instead of query_immv like CreateIndexOnIMMV? It seems only planner may change query, but it shouldn't affect us finding the correct base table in CreateIvmTriggersOnBaseTables .\n\nThe copy to query_immv was necessary for supporting sub-queries in the view\ndefinition. However, we excluded the feature from the current patch to reduce\nthe patch size, so it would be unnecessary. I'll fix it. \n\n> \n> +void\n> +CreateIndexOnIMMV(Query *query, Relation matviewRel)\n> +{\n> +\tQuery *qry = (Query *) copyObject(query);\n> 2. Also, is it okay to not copy query in CreateIndexOnIMMV? It seems we only read query in CreateIndexOnIMMV.\n\nThis was also necessary for supporting CTEs, but unnecessary in the current\npatch, so I'll fix it, too.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 2 Mar 2022 03:55:01 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nI attached the updated patch-set (v26).\n\n> On Wed, 16 Feb 2022 22:34:18 +0800\n> huyajun <hu_yajun@qq.com> wrote:\n> \n> > Hi, Nagata-san\n> > I am very interested in IMMV and read your patch but have some comments in v25-0007-Add-Incremental-View-Maintenance-support.patch and want to discuss with you.\n> \n> Thank you for your review!\n> \n> > \n> > +\t\t/* For IMMV, we need to rewrite matview query */\n> > +\t\tquery = rewriteQueryForIMMV(query, into->colNames);\n> > +\t\tquery_immv = copyObject(query);\n> > \n> > /* Create triggers on incremental maintainable materialized view */\n> > +\t\t\t\tAssert(query_immv != NULL);\n> > +\t\t\t\tCreateIvmTriggersOnBaseTables(query_immv, matviewOid, true);\n> > 1. Do we need copy query?Is it okay that CreateIvmTriggersOnBaseTables directly use (Query *) into->viewQuery instead of query_immv like CreateIndexOnIMMV? It seems only planner may change query, but it shouldn't affect us finding the correct base table in CreateIvmTriggersOnBaseTables .\n> \n> The copy to query_immv was necessary for supporting sub-queries in the view\n> definition. However, we excluded the fueature from the current patch to reduce\n> the patch size, so it would be unnecessary. I'll fix it. \n> \n> > \n> > +void\n> > +CreateIndexOnIMMV(Query *query, Relation matviewRel)\n> > +{\n> > +\tQuery *qry = (Query *) copyObject(query);\n> > 2. Also, is it okay to not copy query in CreateIndexOnIMMV? It seems we only read query in CreateIndexOnIMMV.\n> \n> This was also necessary for supporting CTEs, but unnecessary in the current\n> patch, so I'll fix it, too.\n\nI removed unnecessary copies of Query in according with the suggestions\nfrom huyajun, and fix wrong codes in a \"switch\" statement pointed out\nby Zhihong Yu.\n\nIn addition, I made the following fixes:\n- Fix psql tab-completion code according with master branch\n- Fix auto-index-creation that didn't work well in REFRESH command\n- Add documentation description about the automatic index creation\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 14 Mar 2022 19:12:17 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Zhihong Yu,\n\nI already replied to your comments before, but I forgot to include\nthe list to CC, so I resend the same again. Sorry for the duplicate\nemails.\n\nOn Thu, 3 Feb 2022 09:51:52 -0800\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> For CreateIndexOnIMMV():\n> \n> + ereport(NOTICE,\n> + (errmsg(\"could not create an index on materialized view\n> \\\"%s\\\" automatically\",\n> ...\n> + return;\n> + }\n> \n> Should the return type be changed to bool so that the caller knows whether\n> the index creation succeeds ?\n> If index creation is unsuccessful, should the call\n> to CreateIvmTriggersOnBaseTables() be skipped ?\n\nCreateIvmTriggersOnBaseTables() has to be called regardless\nof whether an index is created successfully or not, so I think\nCreateIndexOnIMMV() doesn't have to return the result for now.\n\n> For check_ivm_restriction_walker():\n> \n> + break;\n> + expression_tree_walker(node, check_ivm_restriction_walker,\n> NULL);\n> + break;\n> \n> Something is missing between the break and expression_tree_walker().\n\nYes, it's my mistake during making the patch-set. I fixed it in the\nupdated patch I attached in the other post.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 14 Mar 2022 19:26:16 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "This patch has bitrotted due to some other patch affecting trigger.c.\n\nCould you post a rebase?\n\nThis is the last week of the CF before feature freeze so time is of the essence.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 11:09:16 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi,\n\nOn Fri, 1 Apr 2022 11:09:16 -0400\nGreg Stark <stark@mit.edu> wrote:\n\n> This patch has bitrotted due to some other patch affecting trigger.c.\n> \n> Could you post a rebase?\n> \n> This is the last week of the CF before feature freeze so time is of the essence.\n\nI attached a rebased patch-set.\n\nAlso, I made the following changes from the previous.\n\n1. Fix to not use a new deptype\n\nIn the previous patch, we introduced a new deptype 'm' into pg_depend.\nThis deptype was used for looking for IVM triggers to be removed at\nREFRESH WITH NO DATA. However, we decided to not use it for reducing\nunnecessary change in the core code. Currently, the trigger name and\ndependent objclass are used at that time instead of it.\n\nAs a result, the number of patches is reduced to nine from ten.\n\n2. Bump the version numbers in psql and pg_dump\n\nThis feature's target is PG 16 now.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 22 Apr 2022 11:29:39 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, 22 Apr 2022 11:29:39 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> On Fri, 1 Apr 2022 11:09:16 -0400\n> Greg Stark <stark@mit.edu> wrote:\n> \n> > This patch has bitrotted due to some other patch affecting trigger.c.\n> > \n> > Could you post a rebase?\n> > \n> > This is the last week of the CF before feature freeze so time is of the essence.\n> \n> I attached a rebased patch-set.\n> \n> Also, I made the folowing changes from the previous.\n> \n> 1. Fix to not use a new deptye\n> \n> In the previous patch, we introduced a new deptye 'm' into pg_depend.\n> This deptype was used for looking for IVM triggers to be removed at\n> REFRESH WITH NO DATA. However, we decided to not use it for reducing\n> unnecessary change in the core code. Currently, the trigger name and\n> dependent objclass are used at that time instead of it.\n> \n> As a result, the number of patches are reduced to nine from ten.\n\n\n> 2. Bump the version numbers in psql and pg_dump\n> \n> This feature's target is PG 16 now.\n\nSorry, I revert this change. It was too early to bump up the\nversion number.\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 22 Apr 2022 14:58:01 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "I'm trying to figure out how to get this feature more attention. Everyone\nagrees it would be a huge help but it's a scary patch to review.\n\nI wonder if it would be helpful to have a kind of \"readers guide\"\nexplanation of the patches to help a reviewer understand what the point of\neach patch is and how the whole system works? I think Andres and Robert\nhave both taken that approach before with big patches and it really helped\nimho.\n\n\n\nOn Fri., Apr. 22, 2022, 08:01 Yugo NAGATA, <nagata@sraoss.co.jp> wrote:\n\n> On Fri, 22 Apr 2022 11:29:39 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> > Hi,\n> >\n> > On Fri, 1 Apr 2022 11:09:16 -0400\n> > Greg Stark <stark@mit.edu> wrote:\n> >\n> > > This patch has bitrotted due to some other patch affecting trigger.c.\n> > >\n> > > Could you post a rebase?\n> > >\n> > > This is the last week of the CF before feature freeze so time is of\n> the essence.\n> >\n> > I attached a rebased patch-set.\n> >\n> > Also, I made the folowing changes from the previous.\n> >\n> > 1. Fix to not use a new deptye\n> >\n> > In the previous patch, we introduced a new deptye 'm' into pg_depend.\n> > This deptype was used for looking for IVM triggers to be removed at\n> > REFRESH WITH NO DATA. However, we decided to not use it for reducing\n> > unnecessary change in the core code. Currently, the trigger name and\n> > dependent objclass are used at that time instead of it.\n> >\n> > As a result, the number of patches are reduced to nine from ten.\n>\n>\n> > 2. Bump the version numbers in psql and pg_dump\n> >\n> > This feature's target is PG 16 now.\n>\n> Sorry, I revert this change. It was too early to bump up the\n> version number.\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n\nI'm trying to figure out how to get this feature more attention. Everyone agrees it would be a huge help but it's a scary patch to review.I wonder if it would be helpful to have a kind of \"readers guide\" explanation of the patches to help a reviewer understand what the point of each patch is and how the whole system works? I think Andres and Robert have both taken that approach before with big patches and it really helped imho.On Fri., Apr. 22, 2022, 08:01 Yugo NAGATA, <nagata@sraoss.co.jp> wrote:On Fri, 22 Apr 2022 11:29:39 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> On Fri, 1 Apr 2022 11:09:16 -0400\n> Greg Stark <stark@mit.edu> wrote:\n> \n> > This patch has bitrotted due to some other patch affecting trigger.c.\n> > \n> > Could you post a rebase?\n> > \n> > This is the last week of the CF before feature freeze so time is of the essence.\n> \n> I attached a rebased patch-set.\n> \n> Also, I made the folowing changes from the previous.\n> \n> 1. Fix to not use a new deptye\n> \n> In the previous patch, we introduced a new deptye 'm' into pg_depend.\n> This deptype was used for looking for IVM triggers to be removed at\n> REFRESH WITH NO DATA. However, we decided to not use it for reducing\n> unnecessary change in the core code. Currently, the trigger name and\n> dependent objclass are used at that time instead of it.\n> \n> As a result, the number of patches are reduced to nine from ten.\n\n\n> 2. Bump the version numbers in psql and pg_dump\n> \n> This feature's target is PG 16 now.\n\nSorry, I revert this change. It was too early to bump up the\nversion number.\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Sat, 23 Apr 2022 08:18:01 +0200",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello Greg,\n\nOn Sat, 23 Apr 2022 08:18:01 +0200\nGreg Stark <stark@mit.edu> wrote:\n\n> I'm trying to figure out how to get this feature more attention. Everyone\n> agrees it would be a huge help but it's a scary patch to review.\n> \n> I wonder if it would be helpful to have a kind of \"readers guide\"\n> explanation of the patches to help a reviewer understand what the point of\n> each patch is and how the whole system works? I think Andres and Robert\n> have both taken that approach before with big patches and it really helped\n> imho.\n\nThank you very much for your suggestion!\n\nFollowing your advice, I am going to write a readers guide referring to the past\nposts of Andres and Robert. \n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 28 Apr 2022 15:40:11 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> 2022年4月22日 下午1:58,Yugo NAGATA <nagata@sraoss.co.jp> 写道:\n> \n> On Fri, 22 Apr 2022 11:29:39 +0900\n> Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n>> Hi,\n>> \n>> On Fri, 1 Apr 2022 11:09:16 -0400\n>> Greg Stark <stark@mit.edu> wrote:\n>> \n>>> This patch has bitrotted due to some other patch affecting trigger.c.\n>>> \n>>> Could you post a rebase?\n>>> \n>>> This is the last week of the CF before feature freeze so time is of the essence.\n>> \n>> I attached a rebased patch-set.\n>> \n>> Also, I made the folowing changes from the previous.\n>> \n>> 1. Fix to not use a new deptye\n>> \n>> In the previous patch, we introduced a new deptye 'm' into pg_depend.\n>> This deptype was used for looking for IVM triggers to be removed at\n>> REFRESH WITH NO DATA. However, we decided to not use it for reducing\n>> unnecessary change in the core code. Currently, the trigger name and\n>> dependent objclass are used at that time instead of it.\n>> \n>> As a result, the number of patches are reduced to nine from ten.\n> \n> \n>> 2. Bump the version numbers in psql and pg_dump\n>> \n>> This feature's target is PG 16 now.\n> \n> Sorry, I revert this change. It was too early to bump up the\n> version number.\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> <v27-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch><v27-0002-Add-relisivm-column-to-pg_class-system-catalog.patch><v27-0003-Allow-to-prolong-life-span-of-transition-tables-.patch><v27-0004-Add-Incremental-View-Maintenance-support-to-pg_d.patch><v27-0005-Add-Incremental-View-Maintenance-support-to-psql.patch><v27-0006-Add-Incremental-View-Maintenance-support.patch><v27-0007-Add-aggregates-support-in-IVM.patch><v27-0008-Add-regression-tests-for-Incremental-View-Mainte.patch><v27-0009-Add-documentations-about-Incremental-View-Mainte.patch>\n\nHi, Nagata-san\nI read your patch with v27 version and has some new comments,I want to discuss with you.\n\n1. 
How about use DEPENDENCY_INTERNAL instead of DEPENDENCY_AUTO\n when record dependence on trigger created by IMV.( related code is in the end of CreateIvmTrigger)\nOtherwise, User can use sql to drop trigger and corrupt IVM, DEPENDENCY_INTERNAL is also semantically more correct\nCrash case like:\n create table t( a int);\n create incremental materialized view s as select * from t;\n drop trigger \"IVM_trigger_XXXX”;\n Insert into t values(1);\n\n2. In get_matching_condition_string, Considering NULL values, we can not use simple = operator.\nBut how about 'record = record', record_eq treat NULL = NULL\nit should fast than current implementation for only one comparation\nBelow is my simple implementation with this, Variables are named arbitrarily..\nI test some cases it’s ok\n\nstatic char *\nget_matching_condition_string(List *keys)\n{\n StringInfoData match_cond;\n ListCell *lc;\n\n /* If there is no key columns, the condition is always true. */\n if (keys == NIL)\n return \"true\";\n else\n {\n StringInfoData s1;\n StringInfoData s2;\n initStringInfo(&match_cond);\n initStringInfo(&s1);\n initStringInfo(&s2);\n /* Considering NULL values, we can not use simple = operator. */\n appendStringInfo(&s1, \"ROW(\");\n appendStringInfo(&s2, \"ROW(\");\n foreach (lc, keys)\n {\n Form_pg_attribute attr = (Form_pg_attribute) lfirst(lc);\n char *resname = NameStr(attr->attname);\n char *mv_resname = quote_qualified_identifier(\"mv\", resname);\n char *diff_resname = quote_qualified_identifier(\"diff\", resname);\n \n appendStringInfo(&s1, \"%s\", mv_resname);\n appendStringInfo(&s2, \"%s\", diff_resname);\n\n if (lnext(lc))\n {\n appendStringInfo(&s1, \", \");\n appendStringInfo(&s2, \", \");\n }\n }\n appendStringInfo(&s1, \")::record\");\n appendStringInfo(&s2, \")::record\");\n appendStringInfo(&match_cond, \"%s operator(pg_catalog.=) %s\", s1.data, s2.data);\n return match_cond.data;\n }\n}\n\n3. 
Consider truncate base tables, IVM will not refresh, maybe raise an error will be better\n\n4. In IVM_immediate_before,I know Lock base table with ExclusiveLock is \n for concurrent updates to the IVM correctly, But how about to Lock it when actually \n need to maintain MV which in IVM_immediate_maintenance\n In this way you don't have to lock multiple times.\n\n5. Why we need CreateIndexOnIMMV, is it a optimize? \n It seems like when maintenance MV,\n the index may not be used because of our match conditions can’t use simple = operator\n\nLooking forward to your early reply to answer my above doubts, thank you a lot!\nRegards,\nYajun Hu",
"msg_date": "Wed, 29 Jun 2022 17:56:39 +0800",
"msg_from": "huyajun <hu_yajun@qq.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi huyajun, \n\nThank you for your comments!\n\nOn Wed, 29 Jun 2022 17:56:39 +0800\nhuyajun <hu_yajun@qq.com> wrote:\n\n\n> Hi, Nagata-san\n> I read your patch with v27 version and has some new comments,I want to discuss with you.\n> \n> 1. How about use DEPENDENCY_INTERNAL instead of DEPENDENCY_AUTO\n> when record dependence on trigger created by IMV.( related code is in the end of CreateIvmTrigger)\n> Otherwise, User can use sql to drop trigger and corrupt IVM, DEPENDENCY_INTERNAL is also semantically more correct\n> Crash case like:\n> create table t( a int);\n> create incremental materialized view s as select * from t;\n> drop trigger \"IVM_trigger_XXXX”;\n> Insert into t values(1);\n\nWe use DEPENDENCY_AUTO because we want to delete the triggers when\nREFRESH ... WITH NO DATA is performed on the materialized view in order\nto disable IVM. Triggers created with DEPENDENCY_INTERNAL cannot be dropped.\nSuch triggers are re-created when REFRESH ... [WITH DATA] is performed.\n\nWe can use DEPENDENCY_INTERNAL if we disable/enable such triggers instead of\ndropping/re-creating them, although users also can disable triggers using\nALTER TRIGGER.\n\n> 2. In get_matching_condition_string, Considering NULL values, we can not use simple = operator.\n> But how about 'record = record', record_eq treat NULL = NULL\n> it should fast than current implementation for only one comparation\n> Below is my simple implementation with this, Variables are named arbitrarily..\n> I test some cases it’s ok\n> \n> static char *\n> get_matching_condition_string(List *keys)\n> {\n> StringInfoData match_cond;\n> ListCell *lc;\n> \n> /* If there is no key columns, the condition is always true. */\n> if (keys == NIL)\n> return \"true\";\n> else\n> {\n> StringInfoData s1;\n> StringInfoData s2;\n> initStringInfo(&match_cond);\n> initStringInfo(&s1);\n> initStringInfo(&s2);\n> /* Considering NULL values, we can not use simple = operator. 
*/\n> appendStringInfo(&s1, \"ROW(\");\n> appendStringInfo(&s2, \"ROW(\");\n> foreach (lc, keys)\n> {\n> Form_pg_attribute attr = (Form_pg_attribute) lfirst(lc);\n> char *resname = NameStr(attr->attname);\n> char *mv_resname = quote_qualified_identifier(\"mv\", resname);\n> char *diff_resname = quote_qualified_identifier(\"diff\", resname);\n> \n> appendStringInfo(&s1, \"%s\", mv_resname);\n> appendStringInfo(&s2, \"%s\", diff_resname);\n> \n> if (lnext(lc))\n> {\n> appendStringInfo(&s1, \", \");\n> appendStringInfo(&s2, \", \");\n> }\n> }\n> appendStringInfo(&s1, \")::record\");\n> appendStringInfo(&s2, \")::record\");\n> appendStringInfo(&match_cond, \"%s operator(pg_catalog.=) %s\", s1.data, s2.data);\n> return match_cond.data;\n> }\n> }\n\nAs you say, we don't have to use IS NULL if we use ROW(...)::record, but we\ncannot use an index in this case and it makes IVM ineffecient. As showed\nbellow (#5), an index works even when we use simple = operations together\nwith together \"IS NULL\" operations.\n \n> 3. Consider truncate base tables, IVM will not refresh, maybe raise an error will be better\n\nI fixed to support TRUNCATE on base tables in our repository.\nhttps://github.com/sraoss/pgsql-ivm/commit/a1365ed69f34e1adbd160f2ce8fd1e80e032392f\n\nWhen a base table is truncated, the view content will be empty if the\nview definition query does not contain an aggregate without a GROUP clause.\nTherefore, such views can be truncated. \n\nAggregate views without a GROUP clause always have one row. Therefore,\nif a base table is truncated, the view will not be empty and will contain\na row with NULL value (or 0 for count()). So, in this case, we refresh the\nview instead of truncating it.\n\nThe next version of the patch-set will include this change. \n \n> 4. 
In IVM_immediate_before,I know Lock base table with ExclusiveLock is \n> for concurrent updates to the IVM correctly, But how about to Lock it when actually \n> need to maintain MV which in IVM_immediate_maintenance\n> In this way you don't have to lock multiple times.\n\nYes, as you say, we don't have to lock the view multiple times.\nI'll investigate better locking ways including the way that you suggest.\n \n> 5. Why we need CreateIndexOnIMMV, is it a optimize? \n> It seems like when maintenance MV,\n> the index may not be used because of our match conditions can’t use simple = operator\n\nNo, the index works even when we use simple = operator together with \"IS NULL\".\nFor example:\n\npostgres=# \\d mv\n Materialized view \"public.mv\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n id | integer | | | \n v1 | integer | | | \n v2 | integer | | | \nIndexes:\n \"mv_index\" UNIQUE, btree (id) NULLS NOT DISTINCT\n\npostgres=# EXPLAIN ANALYZE\n WITH diff(id, v1, v2) AS MATERIALIZED ((VALUES(42, 420, NULL::int)))\n SELECT mv.* FROM mv, diff \n WHERE (mv.id = diff.id OR (mv.id IS NULL AND diff.id IS NULL)) AND \n (mv.v1 = diff.v1 OR (mv.v1 IS NULL AND diff.v1 IS NULL)) AND \n (mv.v2 = diff.v2 OR (mv.v2 IS NULL AND diff.v2 IS NULL));\n\n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------\n Nested Loop (cost=133.87..137.92 rows=1 width=12) (actual time=0.180..0.191 rows=1 loops=1)\n CTE diff\n -> Result (cost=0.00..0.01 rows=1 width=12) (actual time=0.027..0.028 rows=1 loops=1)\n -> CTE Scan on diff (cost=0.00..0.02 rows=1 width=12) (actual time=0.037..0.040 rows=1 loops=1)\n -> Bitmap Heap Scan on mv (cost=133.86..137.88 rows=1 width=12) (actual time=0.127..0.132 rows=1 loops=1)\n Recheck Cond: ((id = diff.id) OR (id IS NULL))\n Filter: (((id = 
diff.id) OR ((id IS NULL) AND (diff.id IS NULL))) AND ((v1 = diff.v1) OR ((v1 IS NULL) AND (diff.v1 IS NULL))) AND ((v2 = diff.v2) OR ((v2 IS NULL) AND (diff.v2\n IS NULL))))\n Heap Blocks: exact=1\n -> BitmapOr (cost=133.86..133.86 rows=1 width=0) (actual time=0.091..0.093 rows=0 loops=1)\n -> Bitmap Index Scan on mv_index (cost=0.00..4.43 rows=1 width=0) (actual time=0.065..0.065 rows=1 loops=1)\n Index Cond: (id = diff.id)\n -> Bitmap Index Scan on mv_index (cost=0.00..4.43 rows=1 width=0) (actual time=0.021..0.021 rows=0 loops=1)\n Index Cond: (id IS NULL)\n Planning Time: 0.666 ms\n Execution Time: 0.399 ms\n(15 rows)\n\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 8 Jul 2022 19:22:11 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi, Nagata-san\n\t\nThank you for your answer, I agree with your opinion, and found some new problems to discuss with you\n> \n>> 3. Consider truncate base tables, IVM will not refresh, maybe raise an error will be better\n> \n> I fixed to support TRUNCATE on base tables in our repository.\n> https://github.com/sraoss/pgsql-ivm/commit/a1365ed69f34e1adbd160f2ce8fd1e80e032392f\n> \n> When a base table is truncated, the view content will be empty if the\n> view definition query does not contain an aggregate without a GROUP clause.\n> Therefore, such views can be truncated. \n> \n> Aggregate views without a GROUP clause always have one row. Therefore,\n> if a base table is truncated, the view will not be empty and will contain\n> a row with NULL value (or 0 for count()). So, in this case, we refresh the\n> view instead of truncating it.\n> \n> The next version of the patch-set will include this change. \n> \nI read your patch and think this processing is greet, but there is a risk of deadlock. \nAlthough I have not thought of a suitable processing method for the time being, \nit is also acceptable for truncate scenarios.The deadlock scene is as follows:\n\nMv define is: select * from base_a,base_b;\nS1: truncate base_a; — only AccessExclusiveLock base_a and not run into after trigger\nS2: insert into base_b; — The update has been completed and the incremental refresh is started in the after trigger,RowExclusive on base_b and ExclusiveLock on mv\nS1: continue truncate mv, wait for AccessExclusiveLock on mv, wait for S2\nS2: continue refresh mv, wait for AccessShardLock on base_a, wait for S1\nSo deadlock occurred\n\nI also found some new issues that I would like to discuss with you\n1. 
Concurrent DML causes imv data error, case like below\nSetup:\nCreate table t( a int);\nInsert into t select 1 from generate_series(1,3);\ncreate incremental materialized view s as select count(*) from t;\n\nS1: begin;delete from t where ctid in (select ctid from t limit 1);\nS2: begin;delete from t where ctid in (select ctid from t limit 1 offset 1);\nS1: commit;\nS2: commit;\n\nAfter this, The count data of s becomes 2 but correct data is 1.\nI found out that the problem is probably because to our use of ctid update\nConsider user behavior unrelated to imv:\n\nCreate table t( a int);\nInsert into t select 1;\ns1: BEGIN\ns1: update t set a = 2 where ctid in (select ctid from t); -- UPDATE 1\ns2: BEGIN\ns2: update t set a = 3 where ctid in (select ctid from t); -- wait row lock\ns1: COMMIT\ns2: -- UPDATE 0 -- ctid change so can't UPDATE one rows\nSo we lost the s2 update\n\n2. Sometimes it will crash when the columns of the created materialized view do not match\nCreate table t( a int);\ncreate incremental materialized view s(z) as select sum(1) as a, sum(1) as b from t;\n\nThe problem should be that colNames in rewriteQueryForIMMV does not consider this situation\n\n3. Sometimes no error when the columns of the created materialized view do not match\n Create table t( a int);\n create incremental materialized view s(y,z) as select count(1) as b from t;\n\nBut the hidden column of IMV is overwritten to z which will cause refresh failed.\n\nThe problem should be that checkRuleResultList we should only skip imv hidden columns check\n\n4. 
A unique index should not be created in the case of a Cartesian product\n\ncreate table base_a (i int primary key, j varchar);\ncreate table base_b (i int primary key, k varchar);\nINSERT INTO base_a VALUES\n(1,10),\n(2,20),\n(3,30),\n(4,40),\n(5,50);\nINSERT INTO base_b VALUES\n(1,101),\n(2,102),\n(3,103),\n(4,104);\nCREATE incremental MATERIALIZED VIEW s as\nselect base_a.i,base_a.j from base_a,base_b; — create error because of unique index\n\n5. Besides, I would like to ask you if you have considered implementing an IMV with delayed refresh?\nThe advantage of delayed refresh is that it will not have much impact on write performance\nI probably have some ideas about it now, do you think it works?\n1. After the base table is updated, the delayed IMV's after trigger is used to record the delta\n information in another table similar to the incremental log of the base table\n2. When incremental refresh, use the data in the log instead of the data in the trasient table\nof the after trigger\n3. 
We need to merge the incremental information in advance to ensure that the base_table\nafter transaction filtering UNION ALL old_delta is the state before the base table is updated\nCase like below:\nCreate table t( a int);\n—begin to record log\nInsert into t select 1; — newlog: 1 oldlog: empty\nDelete from t; —newlog:1, oldlog:1\n— begin to incremental refresh\nSelect * from t where xmin < xid or (xmin = xid and cmin < cid); — empty\nSo this union all oldlog is not equal to before the base table is updated\nWe need merge the incremental log in advance to make newlog: empty, oldlog: empty\n\nIf implemented, incremental refresh must still be serialized, but the DML of the base table \ncan not be blocked, that is to say, the base table can still record logs during incremental refresh,\nas long as we use same snapshot when incrementally updating.\n\ndo you think there will be any problems with this solution?\n\nLooking forward to your reply to answer my above doubts, thank you a lot!\nRegards,\nYajun Hu",
"msg_date": "Tue, 26 Jul 2022 12:00:26 +0800",
"msg_from": "huyajun <hu_yajun@qq.com>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hello huyajun, \n\nI'm sorry for the delay in my response.\n\nOn Tue, 26 Jul 2022 12:00:26 +0800\nhuyajun <hu_yajun@qq.com> wrote:\n\n> I read your patch and think this processing is greet, but there is a risk of deadlock. \n> Although I have not thought of a suitable processing method for the time being, \n> it is also acceptable for truncate scenarios.The deadlock scene is as follows:\n> \n> Mv define is: select * from base_a,base_b;\n> S1: truncate base_a; ― only AccessExclusiveLock base_a and not run into after trigger\n> S2: insert into base_b; ― The update has been completed and the incremental refresh is started in the after trigger,RowExclusive on base_b and ExclusiveLock on mv\n> S1: continue truncate mv, wait for AccessExclusiveLock on mv, wait for S2\n> S2: continue refresh mv, wait for AccessShardLock on base_a, wait for S1\n> So deadlock occurred\n\nHmm, this deadlock scenario is possible, indeed.\n\nOne idea to resolve it is to acquire RowExclusive locks on all base tables\nin the BEFORE trigger. If so, S2 can not progress its process because it\nwaits for a RowExclusive lock on base_b, and it can not acquire ExclusiveLock\non mv before S1 finishes.\n\n> I also found some new issues that I would like to discuss with you\n\nThank you so much for your massive bug reports!\n\n> 1. 
Concurrent DML causes imv data error, case like below\n> Setup:\n> Create table t( a int);\n> Insert into t select 1 from generate_series(1,3);\n> create incremental materialized view s as select count(*) from t;\n> \n> S1: begin;delete from t where ctid in (select ctid from t limit 1);\n> S2: begin;delete from t where ctid in (select ctid from t limit 1 offset 1);\n> S1: commit;\n> S2: commit;\n> \n> After this, The count data of s becomes 2 but correct data is 1.\n> I found out that the problem is probably because to our use of ctid update\n> Consider user behavior unrelated to imv:\n> \n> Create table t( a int);\n> Insert into t select 1;\n> s1: BEGIN\n> s1: update t set a = 2 where ctid in (select ctid from t); -- UPDATE 1\n> s2: BEGIN\n> s2: update t set a = 3 where ctid in (select ctid from t); -- wait row lock\n> s1: COMMIT\n> s2: -- UPDATE 0 -- ctid change so can't UPDATE one rows\n> So we lost the s2 update\n> \n> 2. Sometimes it will crash when the columns of the created materialized view do not match\n> Create table t( a int);\n> create incremental materialized view s(z) as select sum(1) as a, sum(1) as b from t;\n> \n> The problem should be that colNames in rewriteQueryForIMMV does not consider this situation\n> \n> 3. Sometimes no error when the columns of the created materialized view do not match\n> Create table t( a int);\n> create incremental materialized view s(y,z) as select count(1) as b from t;\n> \n> But the hidden column of IMV is overwritten to z which will cause refresh failed.\n> \n> The problem should be that checkRuleResultList we should only skip imv hidden columns check\n> \n> 4. 
A unique index should not be created in the case of a Cartesian product\n> \n> create table base_a (i int primary key, j varchar);\n> create table base_b (i int primary key, k varchar);\n> INSERT INTO base_a VALUES\n> (1,10),\n> (2,20),\n> (3,30),\n> (4,40),\n> (5,50);\n> INSERT INTO base_b VALUES\n> (1,101),\n> (2,102),\n> (3,103),\n> (4,104);\n> CREATE incremental MATERIALIZED VIEW s as\n> select base_a.i,base_a.j from base_a,base_b; ― create error because of unique index\n\nI am working on above issues (#1-#4) now, and I'll respond on each later.\n\n> 5. Besides, I would like to ask you if you have considered implementing an IMV with delayed refresh?\n> The advantage of delayed refresh is that it will not have much impact on write performance\n\nYes, I've been thinking to implement deferred maintenance since the beginning of\nthis IVM project. However, we've decided to start from immediate maintenance, and\nwill plan to propose deferred maintenance to the core after the current patch is\naccepted. (I plan to implement this feature in pg_ivm extension module first,\nthough.)\n\n> I probably have some ideas about it now, do you think it works?\n> 1. After the base table is updated, the delayed IMV's after trigger is used to record the delta\n> information in another table similar to the incremental log of the base table\n> 2. When incremental refresh, use the data in the log instead of the data in the trasient table\n> of the after trigger\n> 3. 
We need to merge the incremental information in advance to ensure that the base_table\n> after transaction filtering UNION ALL old_delta is the state before the base table is updated\n> Case like below:\n> Create table t( a int);\n> ―begin to record log\n> Insert into t select 1; ― newlog: 1 oldlog: empty\n> Delete from t; ―newlog:1, oldlog:1\n> ― begin to incremental refresh\n> Select * from t where xmin < xid or (xmin = xid and cmin < cid); ― empty\n> So this union all oldlog is not equal to before the base table is updated\n> We need merge the incremental log in advance to make newlog: empty, oldlog: empty\n> \n> If implemented, incremental refresh must still be serialized, but the DML of the base table \n> can not be blocked, that is to say, the base table can still record logs during incremental refresh,\n> as long as we use same snapshot when incrementally updating.\n> \n> do you think there will be any problems with this solution?\n\nI guess the deferred maintenance process would be basically similar to the\nabove. Especially, as you say, we need to merge incremental information\nin some way before calculating deltas on the view. I investigated some\nresearch papers, but I'll review again before working on deferred approach\ndesign.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 9 Sep 2022 20:10:32 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 08:10:32PM +0900, Yugo NAGATA wrote:\n> I am working on above issues (#1-#4) now, and I'll respond on each later.\n\nOkay, well. There has been some feedback sent lately and no update\nfor one month, so I am marking it as RwF for now. As a whole the\npatch has been around for three years and it does not seem that a lot\nhas happened in terms of design discussion (now the thread is long so\nI would easily miss something).\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 16:53:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
}
] |
[
{
"msg_contents": "I just noticed that, since I retired pademelon in August, we have\nexactly no buildfarm coverage of --disable-strong-random code paths.\nWhat's more, because the vast majority of the buildfarm enables\n--with-openssl, we're mostly just testing the punt-to-OpenSSL\nvariant of pg_strong_random. Checking the buildfarm database,\nthe last builds that did anything different from that are\n\n protosciurus | 2018-08-15 13:37:08 | checking which random number source to use... /dev/urandom\n pademelon | 2018-08-16 18:47:07 | checking which random number source to use... weak builtin PRNG\n castoroides | 2018-09-26 12:03:07 | checking which random number source to use... /dev/urandom\n locust | 2018-12-14 01:14:35 | checking which random number source to use... /dev/urandom\n frogfish | 2018-12-22 18:39:35 | checking which random number source to use... /dev/urandom\n anole | 2018-12-27 10:30:33 | checking which random number source to use... /dev/urandom\n gharial | 2018-12-27 13:30:46 | checking which random number source to use... /dev/urandom\n jacana | 2018-12-27 13:45:14 | checking which random number source to use... Windows native\n\nDo we need more coverage of the \"Windows native\" alternative?\n\nMore urgently, what about the lack of --disable-strong-random\ncoverage? I feel like we should either have a buildfarm\ncritter or two testing that code, or decide that it's no longer\ninteresting and rip it out. backend_random.c, to name just\none place, has a complex enough !HAVE_STRONG_RANDOM code path\nthat I don't feel comfortable letting it go totally untested.\n\nThere's certainly a reasonable argument to be made that everybody\nshould have /dev/urandom these days, or else be willing to\ninstall OpenSSL and let it figure out what to do. (Even my hoary\nold HPUX 10.20 box does have OpenSSL and a working entropy daemon\nto feed it; I was just intentionally not using that in the\npademelon configuration.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 27 Dec 2018 15:56:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "On Thu, Dec 27, 2018 at 03:56:52PM -0500, Tom Lane wrote:\n> More urgently, what about the lack of --disable-strong-random\n> coverage? I feel like we should either have a buildfarm\n> critter or two testing that code, or decide that it's no longer\n> interesting and rip it out. backend_random.c, to name just\n> one place, has a complex enough !HAVE_STRONG_RANDOM code path\n> that I don't feel comfortable letting it go totally untested.\n\nIf that proves to not be useful, just dropping the switch sounds like\na good option to me. I would be curious to hear Heikki on the matter\nas he has introduced the switch in the v10 time-frame.\n--\nMichael",
"msg_date": "Fri, 28 Dec 2018 08:00:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Dec 27, 2018 at 03:56:52PM -0500, Tom Lane wrote:\n>> More urgently, what about the lack of --disable-strong-random\n>> coverage? I feel like we should either have a buildfarm\n>> critter or two testing that code, or decide that it's no longer\n>> interesting and rip it out. backend_random.c, to name just\n>> one place, has a complex enough !HAVE_STRONG_RANDOM code path\n>> that I don't feel comfortable letting it go totally untested.\n\n> If that proves to not be useful, just dropping the switch sounds like\n> a good option to me. I would be curious to hear Heikki on the matter\n> as he has introduced the switch in the v10 time-frame.\n\nI might be misremembering, but I think it was me that pressed to have\nthat switch in the first place :-). The reason my feelings have changed\non the matter is mainly that we recently moved the compiler goalposts\nto C99. That reduces to about nil the chances of people being able to\nbuild PG on pre-turn-of-the-century platforms, at least without a lot\nof add-on software. So the idea that we should be able to cope with\nplatforms lacking /dev/urandom has correspondingly dropped in value.\nJudging by our buildfarm sample, nothing released in this century\nlacks /dev/urandom.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 27 Dec 2018 18:14:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "On 28/12/2018 01:14, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Thu, Dec 27, 2018 at 03:56:52PM -0500, Tom Lane wrote:\n>>> More urgently, what about the lack of --disable-strong-random\n>>> coverage? I feel like we should either have a buildfarm\n>>> critter or two testing that code, or decide that it's no longer\n>>> interesting and rip it out. backend_random.c, to name just\n>>> one place, has a complex enough !HAVE_STRONG_RANDOM code path\n>>> that I don't feel comfortable letting it go totally untested.\n> \n>> If that proves to not be useful, just dropping the switch sounds like\n>> a good option to me. I would be curious to hear Heikki on the matter\n>> as he has introduced the switch in the v10 time-frame.\n> \n> I might be misremembering, but I think it was me that pressed to have\n> that switch in the first place :-). The reason my feelings have changed\n> on the matter is mainly that we recently moved the compiler goalposts\n> to C99. That reduces to about nil the chances of people being able to\n> build PG on pre-turn-of-the-century platforms, at least without a lot\n> of add-on software. So the idea that we should be able to cope with\n> platforms lacking /dev/urandom has correspondingly dropped in value.\n> Judging by our buildfarm sample, nothing released in this century\n> lacks /dev/urandom.\n\nYeah, there probably isn't anyone needing --disable-strong-random in \npractice. The situation is slightly different between the frontend and \nbackend, though. It's more likely that someone might need to build libpq \non a very ancient system, but not the server. And libpq only needs \npg_strong_random() for SCRAM support. It'd be kind of nice to still be \nable to build libpq without pg_strong_random(), with SCRAM disabled. But \nthat's awkward to arrange with autoconf, there is no \"--libpq-only\" \nflag. 
Perhaps replace the backend !HAVE_STRONG_RANDOM code with #error.\n\n+1 for just ripping it out, nevertheless. If someone needs libpq on an \nancient system, they can build an older version of libpq as a last resort.\n\n- Heikki\n\n",
"msg_date": "Fri, 28 Dec 2018 15:27:58 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Yeah, there probably isn't anyone needing --disable-strong-random in \n> practice. The situation is slightly different between the frontend and \n> backend, though. It's more likely that someone might need to build libpq \n> on a very ancient system, but not the server. And libpq only needs \n> pg_strong_random() for SCRAM support. It'd be kind of nice to still be \n> able to build libpq without pg_strong_random(), with SCRAM disabled. But \n> that's awkward to arrange with autoconf, there is no \"--libpq-only\" \n> flag. Perhaps replace the backend !HAVE_STRONG_RANDOM code with #error.\n\n> +1 for just ripping it out, nevertheless. If someone needs libpq on an \n> ancient system, they can build an older version of libpq as a last resort.\n\nThe other workaround that remains available is to build --with-openssl.\nSo the arguments for keeping !HAVE_STRONG_RANDOM seem pretty weak from\nhere.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 28 Dec 2018 10:16:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "Further to this ... I was just doing some measurements to see how much\nit'd add to backend startup time if we start using pg_strong_random()\nto set the initial random seed. The answer, at least on my slightly\nlong-in-the-tooth RHEL6 box, is \"about 25 usec using /dev/urandom,\nor about 80 usec using OpenSSL\". So I'm wondering why configure is\ncoded to prefer OpenSSL.\n\nI'm going to go do some timing checks on some other platforms, but\nthis result suggests that we may need to question that choice.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 11:39:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "I wrote:\n> Further to this ... I was just doing some measurements to see how much\n> it'd add to backend startup time if we start using pg_strong_random()\n> to set the initial random seed. The answer, at least on my slightly\n> long-in-the-tooth RHEL6 box, is \"about 25 usec using /dev/urandom,\n> or about 80 usec using OpenSSL\". So I'm wondering why configure is\n> coded to prefer OpenSSL.\n> I'm going to go do some timing checks on some other platforms, but\n> this result suggests that we may need to question that choice.\n\nFurther testing (on Fedora, macOS, FreeBSD, and NetBSD) has confirmed\nthat the OpenSSL code path is 2x to 3x slower than the /dev/urandom\ncode path for fetching half a dozen random bytes. So I'm still\nwondering why the current preference order. My mental model of\nthis is that on platforms with /dev/*random, OpenSSL's RAND_bytes\nisn't doing much more than wrapping /dev/*random --- so is it\nreally doing anything we need?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 18:56:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
},
{
"msg_contents": "On Fri, Dec 28, 2018 at 03:27:58PM +0200, Heikki Linnakangas wrote:\n> +1 for just ripping it out, nevertheless. If someone needs libpq on\n> an ancient system, they can build an older version of libpq as a\n> last resort. \n\nOkay, let's do the cleanup then. I am just going to create a thread\non the matter.\n--\nMichael",
"msg_date": "Sun, 30 Dec 2018 15:22:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Poor buildfarm coverage of strong-random alternatives"
}
] |
[
{
"msg_contents": "Hello,\n\nI've investigated a crash report of PG-Strom for a few days, then I doubt\nadd_partial_path() can unexpectedly release dominated old partial path\nbut still referenced by other Gather node, and it leads unexpected system\ncrash.\n\nPlease check at the gpuscan.c:373\nhttps://github.com/heterodb/pg-strom/blob/master/src/gpuscan.c#L373\n\nThe create_gpuscan_path() constructs a partial custom-path, then it is\nadded to the partial_pathlist of the baserel.\nIf both of old and new partial paths have no pathkeys and old-path has\nlarger cost, add_partial_path detaches the old path from the list, then\ncalls pfree() to release old_path itself.\n\nOn the other hands, the baserel may have GatherPath which references\nthe partial-path on its pathlist. Here is no check whether the old partial-\npaths are referenced by others, or not.\n\nTo ensure my assumption, I tried to inject elog() before/after the call of\nadd_partial_path() and just before the pfree(old_path) in add_partial_path().\n\n----------------------------------------------------------------\ndbt3=# explain select\n sum(l_extendedprice * l_discount) as revenue\nfrom\n lineitem\nwhere\n l_shipdate >= date '1994-01-01'\n and l_shipdate < cast(date '1994-01-01' + interval '1 year' as date)\n and l_discount between 0.08 - 0.01 and 0.08 + 0.01\n and l_quantity < 24 limit 1;\nINFO: GpuScan:389 GATHER(0x28f3c30), SUBPATH(0x28f3f88): {GATHERPATH\n:pathtype 44 :parent_relids (b 1) :required_outer (b) :parallel_aware\nfalse :parallel_safe false :parallel_workers 0 :rows 151810\n:startup_cost 1000.00 :total_cost 341760.73 :pathkeys <> :subpath\n{PATH :pathtype 18 :parent_relids (b 1) :required_outer (b)\n:parallel_aware true :parallel_safe true :parallel_workers 2 :rows\n63254 :startup_cost 0.00 :total_cost 325579.73 :pathkeys <>}\n:single_copy false :num_workers 2}\nINFO: add_partial_path:830 old_path(0x28f3f88) is removed\nWARNING: could not dump unrecognized node type: 2139062143\nINFO: 
GpuScan:401 GATHER(0x28f3c30), SUBPATH(0x28f3f88): {GATHERPATH\n:pathtype 44 :parent_relids (b 1) :required_outer (b) :parallel_aware\nfalse :parallel_safe false :parallel_workers 0 :rows 151810\n:startup_cost 1000.00 :total_cost 341760.73 :pathkeys <> :subpath\n{(HOGE)} :single_copy false :num_workers 2}\n----------------------------------------------------------------\n\nAt the L389, GatherPath in the baresel->pathlist is healthy. Its\nsubpath (0x28f3f88) is\na valid T_Scan path node.\nThen, gpuscan.c adds a cheaper path-node so add_partial_path()\nconsiders the above\nsubpath (0x28f3f88) is dominated by the new custom-path, and release it.\nSo, elog() at L401 says subpath has unrecognized node type: 2139062143\n== 0x7f7f7f7f\nthat implies the memory region was already released by pfree().\n\nReference counter or other mechanism to tack referenced paths may be an idea\nto avoid unintentional release of path-node.\nOn the other hands, it seems to me the pfree() at add_path /\nadd_partial_path is not\na serious memory management because other objects referenced by the path-node\nare not released here.\nIt is sufficient if we detach dominated path-node from the pathlist /\npartial_pathlist.\n\nHow about your opinions?\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Fri, 28 Dec 2018 13:21:30 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> I've investigated a crash report of PG-Strom for a few days, then I doubt\n> add_partial_path() can unexpectedly release dominated old partial path\n> but still referenced by other Gather node, and it leads unexpected system\n> crash.\n\nHm. This seems comparable to the special case in plain add_path, where it\ndoesn't attempt to free IndexPaths because of the risk that they're still\nreferenced. So maybe we should just drop the pfree here.\n\nHowever, first I'd like to know why this situation is arising in the first\nplace. To have the situation you're describing, we'd have to have\nattempted to make some Gather paths before we have all the partial paths\nfor the relation they're for. Why is that a good thing to do? It seems\nlike such Gathers are necessarily being made with incomplete information,\nand we'd be better off to fix things so that none are made till later.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 28 Dec 2018 11:44:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > I've investigated a crash report of PG-Strom for a few days, then I doubt\n> > add_partial_path() can unexpectedly release dominated old partial path\n> > but still referenced by other Gather node, and it leads unexpected system\n> > crash.\n>\n> Hm. This seems comparable to the special case in plain add_path, where it\n> doesn't attempt to free IndexPaths because of the risk that they're still\n> referenced. So maybe we should just drop the pfree here.\n>\n> However, first I'd like to know why this situation is arising in the first\n> place. To have the situation you're describing, we'd have to have\n> attempted to make some Gather paths before we have all the partial paths\n> for the relation they're for. Why is that a good thing to do? It seems\n> like such Gathers are necessarily being made with incomplete information,\n> and we'd be better off to fix things so that none are made till later.\n>\nBecause of the hook location, Gather-node shall be constructed with built-in\nand foreign partial scan node first, then extension gets a chance to add its\ncustom paths (partial and full).\nAt the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\ngenerate_gather_paths(). Even if extension adds some partial paths later,\nthe first generate_gather_paths() has to generate Gather node based on\nincomplete information.\nIf we could ensure Gather node shall be made after all the partial nodes\nare added, it may be a solution for the problem.\n\nOf course, relocation of the hook may have a side-effect. Anyone may\nexpect the pathlist contains all the path-node including Gather node, for\neditorialization of the pathlist.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Sat, 29 Dec 2018 10:05:57 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n>> However, first I'd like to know why this situation is arising in the first\n>> place. To have the situation you're describing, we'd have to have\n>> attempted to make some Gather paths before we have all the partial paths\n>> for the relation they're for. Why is that a good thing to do? It seems\n>> like such Gathers are necessarily being made with incomplete information,\n>> and we'd be better off to fix things so that none are made till later.\n\n> Because of the hook location, Gather-node shall be constructed with built-in\n> and foreign partial scan node first, then extension gets a chance to add its\n> custom paths (partial and full).\n> At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> generate_gather_paths().\n\nHmm. I'm inclined to think that we should have a separate hook\nin which extensions are allowed to add partial paths, and that\nset_rel_pathlist_hook should only be allowed to add regular paths.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 14:12:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> >> However, first I'd like to know why this situation is arising in the first\n> >> place. To have the situation you're describing, we'd have to have\n> >> attempted to make some Gather paths before we have all the partial paths\n> >> for the relation they're for. Why is that a good thing to do? It seems\n> >> like such Gathers are necessarily being made with incomplete information,\n> >> and we'd be better off to fix things so that none are made till later.\n>\n> > Because of the hook location, Gather-node shall be constructed with built-in\n> > and foreign partial scan node first, then extension gets a chance to add its\n> > custom paths (partial and full).\n> > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > generate_gather_paths().\n>\n> Hmm. I'm inclined to think that we should have a separate hook\n> in which extensions are allowed to add partial paths, and that\n> set_rel_pathlist_hook should only be allowed to add regular paths.\n>\nI have almost same opinion, but the first hook does not need to be\ndedicated for partial paths. As like set_foreign_pathlist() doing, we can\nadd both of partial and regular paths here, then generate_gather_paths()\nmay generate a Gather-path on top of the best partial-path.\n\nOn the other hands, the later hook must be dedicated to add regular paths,\nand also provides a chance for extensions to manipulate pre-built path-list\nincluding Gather-path.\nAs long as I know, pg_hint_plan uses the set_rel_pathlist_hook to enforce\na particular path-node, including Gather-node, by manipulation of the cost\nvalue. 
Horiguchi-san, is it right?\nLikely, this kind of extension needs to use the later hook.\n\nI expect these hooks are located as follows:\n\nset_rel_pathlist(...)\n{\n :\n <snip>\n :\n /* for partial / regular paths */\n if (set_rel_pathlist_hook)\n (*set_rel_pathlist_hook) (root, rel, rti, rte);\n /* generate Gather-node */\n if (rel->reloptkind == RELOPT_BASEREL)\n generate_gather_paths(root, rel);\n /* for regular paths and manipulation */\n if (post_rel_pathlist_hook)\n (*post_rel_pathlist_hook) (root, rel, rti, rte);\n\n set_cheapest();\n}\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Sun, 30 Dec 2018 12:31:22 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 9:01 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> >\n> > Kohei KaiGai <kaigai@heterodb.com> writes:\n> > > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> > >> However, first I'd like to know why this situation is arising in the first\n> > >> place. To have the situation you're describing, we'd have to have\n> > >> attempted to make some Gather paths before we have all the partial paths\n> > >> for the relation they're for. Why is that a good thing to do? It seems\n> > >> like such Gathers are necessarily being made with incomplete information,\n> > >> and we'd be better off to fix things so that none are made till later.\n> >\n> > > Because of the hook location, Gather-node shall be constructed with built-in\n> > > and foreign partial scan node first, then extension gets a chance to add its\n> > > custom paths (partial and full).\n> > > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > > generate_gather_paths().\n> >\n> > Hmm. I'm inclined to think that we should have a separate hook\n> > in which extensions are allowed to add partial paths, and that\n> > set_rel_pathlist_hook should only be allowed to add regular paths.\n\n+1. This idea sounds sensible to me.\n\n> >\n> I have almost same opinion, but the first hook does not need to be\n> dedicated for partial paths. As like set_foreign_pathlist() doing, we can\n> add both of partial and regular paths here, then generate_gather_paths()\n> may generate a Gather-path on top of the best partial-path.\n>\n\nWon't it be confusing for users if we allow both partial and full\npaths in first hook and only full paths in the second hook?\nBasically, in many cases, the second hook won't be of much use. What\nadvantage you are seeing in allowing both partial and full paths in\nthe first hook?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 31 Dec 2018 09:39:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2018年12月31日(月) 13:10 Amit Kapila <amit.kapila16@gmail.com>:\n>\n> On Sun, Dec 30, 2018 at 9:01 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> > >\n> > > Kohei KaiGai <kaigai@heterodb.com> writes:\n> > > > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > >> However, first I'd like to know why this situation is arising in the first\n> > > >> place. To have the situation you're describing, we'd have to have\n> > > >> attempted to make some Gather paths before we have all the partial paths\n> > > >> for the relation they're for. Why is that a good thing to do? It seems\n> > > >> like such Gathers are necessarily being made with incomplete information,\n> > > >> and we'd be better off to fix things so that none are made till later.\n> > >\n> > > > Because of the hook location, Gather-node shall be constructed with built-in\n> > > > and foreign partial scan node first, then extension gets a chance to add its\n> > > > custom paths (partial and full).\n> > > > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > > > generate_gather_paths().\n> > >\n> > > Hmm. I'm inclined to think that we should have a separate hook\n> > > in which extensions are allowed to add partial paths, and that\n> > > set_rel_pathlist_hook should only be allowed to add regular paths.\n>\n> +1. This idea sounds sensible to me.\n>\n> > >\n> > I have almost same opinion, but the first hook does not need to be\n> > dedicated for partial paths. As like set_foreign_pathlist() doing, we can\n> > add both of partial and regular paths here, then generate_gather_paths()\n> > may generate a Gather-path on top of the best partial-path.\n> >\n>\n> Won't it be confusing for users if we allow both partial and full\n> paths in first hook and only full paths in the second hook?\n> Basically, in many cases, the second hook won't be of much use. 
What\n> advantage you are seeing in allowing both partial and full paths in\n> the first hook?\n>\nTwo advantages. The first one is, it follows same manner of\nset_foreign_pathlist()\nwhich allows to add both of full and partial path if FDW supports parallel-scan.\nThe second one is practical. During the path construction, extension needs to\ncheck availability to run (e.g, whether operators in WHERE-clause is supported\non GPU device...), calculate its estimated cost and so on. Not a small\nportion of\nthem are common for both of full and partial path. So, if the first\nhook accepts to\nadd both kind of paths at once, extension can share the common properties.\n\nProbably, the second hook is only used for a few corner case where an extension\nwants to manipulate path-list already built, like pg_hint_plan.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Mon, 31 Dec 2018 21:17:54 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 5:48 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> 2018年12月31日(月) 13:10 Amit Kapila <amit.kapila16@gmail.com>:\n> >\n> > On Sun, Dec 30, 2018 at 9:01 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > >\n> > > > Kohei KaiGai <kaigai@heterodb.com> writes:\n> > > > > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > > >> However, first I'd like to know why this situation is arising in the first\n> > > > >> place. To have the situation you're describing, we'd have to have\n> > > > >> attempted to make some Gather paths before we have all the partial paths\n> > > > >> for the relation they're for. Why is that a good thing to do? It seems\n> > > > >> like such Gathers are necessarily being made with incomplete information,\n> > > > >> and we'd be better off to fix things so that none are made till later.\n> > > >\n> > > > > Because of the hook location, Gather-node shall be constructed with built-in\n> > > > > and foreign partial scan node first, then extension gets a chance to add its\n> > > > > custom paths (partial and full).\n> > > > > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > > > > generate_gather_paths().\n> > > >\n> > > > Hmm. I'm inclined to think that we should have a separate hook\n> > > > in which extensions are allowed to add partial paths, and that\n> > > > set_rel_pathlist_hook should only be allowed to add regular paths.\n> >\n> > +1. This idea sounds sensible to me.\n> >\n> > > >\n> > > I have almost same opinion, but the first hook does not need to be\n> > > dedicated for partial paths. 
As like set_foreign_pathlist() doing, we can\n> > > add both of partial and regular paths here, then generate_gather_paths()\n> > > may generate a Gather-path on top of the best partial-path.\n> > >\n> >\n> > Won't it be confusing for users if we allow both partial and full\n> > paths in first hook and only full paths in the second hook?\n> > Basically, in many cases, the second hook won't be of much use. What\n> > advantage you are seeing in allowing both partial and full paths in\n> > the first hook?\n> >\n> Two advantages. The first one is, it follows same manner of\n> set_foreign_pathlist()\n> which allows to add both of full and partial path if FDW supports parallel-scan.\n> The second one is practical. During the path construction, extension needs to\n> check availability to run (e.g, whether operators in WHERE-clause is supported\n> on GPU device...), calculate its estimated cost and so on. Not a small\n> portion of\n> them are common for both of full and partial path. So, if the first\n> hook accepts to\n> add both kind of paths at once, extension can share the common properties.\n>\n\nYou have a point, though I am not sure how much difference it can\ncreate for cost computation as ideally, both will have different\ncosting model. I understand there are some savings by avoiding some\ncommon work, is there any way to cache the required information?\n\n> Probably, the second hook is only used for a few corner case where an extension\n> wants to manipulate path-list already built, like pg_hint_plan.\n>\n\nOkay, but it could be some work for extension authors who are using\nthe current hook, not sure they would like to divide the work between\nfirst and second hook.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 31 Dec 2018 18:55:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2018年12月31日(月) 22:25 Amit Kapila <amit.kapila16@gmail.com>:\n>\n> On Mon, Dec 31, 2018 at 5:48 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >\n> > 2018年12月31日(月) 13:10 Amit Kapila <amit.kapila16@gmail.com>:\n> > >\n> > > On Sun, Dec 30, 2018 at 9:01 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > > >\n> > > > > Kohei KaiGai <kaigai@heterodb.com> writes:\n> > > > > > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > > > >> However, first I'd like to know why this situation is arising in the first\n> > > > > >> place. To have the situation you're describing, we'd have to have\n> > > > > >> attempted to make some Gather paths before we have all the partial paths\n> > > > > >> for the relation they're for. Why is that a good thing to do? It seems\n> > > > > >> like such Gathers are necessarily being made with incomplete information,\n> > > > > >> and we'd be better off to fix things so that none are made till later.\n> > > > >\n> > > > > > Because of the hook location, Gather-node shall be constructed with built-in\n> > > > > > and foreign partial scan node first, then extension gets a chance to add its\n> > > > > > custom paths (partial and full).\n> > > > > > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > > > > > generate_gather_paths().\n> > > > >\n> > > > > Hmm. I'm inclined to think that we should have a separate hook\n> > > > > in which extensions are allowed to add partial paths, and that\n> > > > > set_rel_pathlist_hook should only be allowed to add regular paths.\n> > >\n> > > +1. This idea sounds sensible to me.\n> > >\n> > > > >\n> > > > I have almost same opinion, but the first hook does not need to be\n> > > > dedicated for partial paths. 
As like set_foreign_pathlist() doing, we can\n> > > > add both of partial and regular paths here, then generate_gather_paths()\n> > > > may generate a Gather-path on top of the best partial-path.\n> > > >\n> > >\n> > > Won't it be confusing for users if we allow both partial and full\n> > > paths in first hook and only full paths in the second hook?\n> > > Basically, in many cases, the second hook won't be of much use. What\n> > > advantage you are seeing in allowing both partial and full paths in\n> > > the first hook?\n> > >\n> > Two advantages. The first one is, it follows same manner of\n> > set_foreign_pathlist()\n> > which allows to add both of full and partial path if FDW supports parallel-scan.\n> > The second one is practical. During the path construction, extension needs to\n> > check availability to run (e.g, whether operators in WHERE-clause is supported\n> > on GPU device...), calculate its estimated cost and so on. Not a small\n> > portion of\n> > them are common for both of full and partial path. So, if the first\n> > hook accepts to\n> > add both kind of paths at once, extension can share the common properties.\n> >\n>\n> You have a point, though I am not sure how much difference it can\n> create for cost computation as ideally, both will have different\n> costing model. 
I understand there are some savings by avoiding some\n> common work, is there any way to cache the required information?\n>\nI have no idea for the clean way.\nWe may be able to have an opaque pointer for extension usage, however,\nit may be problematic if multiple extension uses the hook.\n\n> > Probably, the second hook is only used for a few corner case where an extension\n> > wants to manipulate path-list already built, like pg_hint_plan.\n> >\n>\n> Okay, but it could be some work for extension authors who are using\n> the current hook, not sure they would like to divide the work between\n> first and second hook.\n>\nI guess they don't divide their code, but choose either of them.\nIn case of PG-Strom, even if there are two hooks around the point, it will use\nthe first hook only, unless it does not prohibit to call add_path() here.\nHowever, some adjustments are required. Its current implementation makes\nGatherPath node with partial CustomScanPath because set_rel_pathlist_hook()\nis called after the generate_gather_paths().\nOnce we could choose the first hook, no need to make a GatherPath by itself,\nbecause PostgreSQL-core will make the path if partial custom-path is enough\nreasonable cost. Likely, this adjustment is more preferable one.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Wed, 2 Jan 2019 22:34:04 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "I tried to make a patch to have dual hooks at set_rel_pathlist(), and\nadjusted PG-Strom for the new design. It stopped to create GatherPath\nby itself, just added a partial path for the base relation.\nIt successfully made a plan using parallel custom-scan node, without\nsystem crash.\n\nAs I mentioned above, it does not use the new \"post_rel_pathlist_hook\"\nbecause we can add both of partial/regular path-node at the first hook\nwith no particular problems.\n\nThanks,\n\ndbt3=# explain select\n sum(l_extendedprice * l_discount) as revenue\nfrom\n lineitem\nwhere\n l_shipdate >= date '1994-01-01'\n and l_shipdate < cast(date '1994-01-01' + interval '1 year' as date)\n and l_discount between 0.08 - 0.01 and 0.08 + 0.01\n and l_quantity < 24 limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Limit (cost=144332.62..144332.63 rows=1 width=4)\n -> Aggregate (cost=144332.62..144332.63 rows=1 width=4)\n -> Gather (cost=144285.70..144329.56 rows=408 width=4)\n Workers Planned: 2\n -> Parallel Custom Scan (GpuPreAgg) on lineitem\n(cost=143285.70..143288.76 rows=204 width=4)\n Reduction: NoGroup\n Outer Scan: lineitem (cost=1666.67..143246.16\nrows=63254 width=8)\n Outer Scan Filter: ((l_discount >= '0.07'::double\nprecision) AND\n (l_discount <=\n'0.09'::double precision) AND\n (l_quantity <\n'24'::double precision) AND\n (l_shipdate <\n'1995-01-01'::date) AND\n (l_shipdate >=\n'1994-01-01'::date))\n(8 rows)\n\nThanks,\n\n2019年1月2日(水) 22:34 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> 2018年12月31日(月) 22:25 Amit Kapila <amit.kapila16@gmail.com>:\n> >\n> > On Mon, Dec 31, 2018 at 5:48 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > >\n> > > 2018年12月31日(月) 13:10 Amit Kapila <amit.kapila16@gmail.com>:\n> > > >\n> > > > On Sun, Dec 30, 2018 at 9:01 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > > 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > > > >\n> > > > > 
> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > > > > > > 2018年12月29日(土) 1:44 Tom Lane <tgl@sss.pgh.pa.us>:\n> > > > > > >> However, first I'd like to know why this situation is arising in the first\n> > > > > > >> place. To have the situation you're describing, we'd have to have\n> > > > > > >> attempted to make some Gather paths before we have all the partial paths\n> > > > > > >> for the relation they're for. Why is that a good thing to do? It seems\n> > > > > > >> like such Gathers are necessarily being made with incomplete information,\n> > > > > > >> and we'd be better off to fix things so that none are made till later.\n> > > > > >\n> > > > > > > Because of the hook location, Gather-node shall be constructed with built-in\n> > > > > > > and foreign partial scan node first, then extension gets a chance to add its\n> > > > > > > custom paths (partial and full).\n> > > > > > > At the set_rel_pathlist(), set_rel_pathlist_hook() is invoked next to the\n> > > > > > > generate_gather_paths().\n> > > > > >\n> > > > > > Hmm. I'm inclined to think that we should have a separate hook\n> > > > > > in which extensions are allowed to add partial paths, and that\n> > > > > > set_rel_pathlist_hook should only be allowed to add regular paths.\n> > > >\n> > > > +1. This idea sounds sensible to me.\n> > > >\n> > > > > >\n> > > > > I have almost same opinion, but the first hook does not need to be\n> > > > > dedicated for partial paths. As like set_foreign_pathlist() doing, we can\n> > > > > add both of partial and regular paths here, then generate_gather_paths()\n> > > > > may generate a Gather-path on top of the best partial-path.\n> > > > >\n> > > >\n> > > > Won't it be confusing for users if we allow both partial and full\n> > > > paths in first hook and only full paths in the second hook?\n> > > > Basically, in many cases, the second hook won't be of much use. 
What\n> > > > advantage you are seeing in allowing both partial and full paths in\n> > > > the first hook?\n> > > >\n> > > Two advantages. The first one is, it follows same manner of\n> > > set_foreign_pathlist()\n> > > which allows to add both of full and partial path if FDW supports parallel-scan.\n> > > The second one is practical. During the path construction, extension needs to\n> > > check availability to run (e.g, whether operators in WHERE-clause is supported\n> > > on GPU device...), calculate its estimated cost and so on. Not a small\n> > > portion of\n> > > them are common for both of full and partial path. So, if the first\n> > > hook accepts to\n> > > add both kind of paths at once, extension can share the common properties.\n> > >\n> >\n> > You have a point, though I am not sure how much difference it can\n> > create for cost computation as ideally, both will have different\n> > costing model. I understand there are some savings by avoiding some\n> > common work, is there any way to cache the required information?\n> >\n> I have no idea for the clean way.\n> We may be able to have an opaque pointer for extension usage, however,\n> it may be problematic if multiple extension uses the hook.\n>\n> > > Probably, the second hook is only used for a few corner case where an extension\n> > > wants to manipulate path-list already built, like pg_hint_plan.\n> > >\n> >\n> > Okay, but it could be some work for extension authors who are using\n> > the current hook, not sure they would like to divide the work between\n> > first and second hook.\n> >\n> I guess they don't divide their code, but choose either of them.\n> In case of PG-Strom, even if there are two hooks around the point, it will use\n> the first hook only, unless it does not prohibit to call add_path() here.\n> However, some adjustments are required. 
Its current implementation makes\n> GatherPath node with partial CustomScanPath because set_rel_pathlist_hook()\n> is called after the generate_gather_paths().\n> Once we could choose the first hook, no need to make a GatherPath by itself,\n> because PostgreSQL-core will make the path if partial custom-path is enough\n> reasonable cost. Likely, this adjustment is more preferable one.\n>\n> Thanks,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Fri, 4 Jan 2019 13:46:27 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "At Sun, 30 Dec 2018 12:31:22 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in <CAOP8fzY1Oqf-LGdrZT+TAu+JajwPGn1uYnpWWUPL=2LiattjYA@mail.gmail.com>\n> 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> On the other hands, the later hook must be dedicated to add regular paths,\n> and also provides a chance for extensions to manipulate pre-built path-list\n> including Gather-path.\n> As long as I know, pg_hint_plan uses the set_rel_pathlist_hook to enforce\n> a particular path-node, including Gather-node, by manipulation of the cost\n> value. Horiguchi-san, is it right?\n\nMmm. I hadn't expected it to be mentioned here.\n\nActually, in the hook, it changes enable_* planner variables, or\ndirectly manipulates path costs, or can even clear and\nregenerate the path list and gather paths for the parallel\ncase. It would be nice if we had a chance to manipulate partial\npaths before generating gather paths.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 09 Jan 2019 13:18:03 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2019年1月9日(水) 13:18 Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>:\n>\n> At Sun, 30 Dec 2018 12:31:22 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in <CAOP8fzY1Oqf-LGdrZT+TAu+JajwPGn1uYnpWWUPL=2LiattjYA@mail.gmail.com>\n> > 2018年12月30日(日) 4:12 Tom Lane <tgl@sss.pgh.pa.us>:\n> > On the other hands, the later hook must be dedicated to add regular paths,\n> > and also provides a chance for extensions to manipulate pre-built path-list\n> > including Gather-path.\n> > As long as I know, pg_hint_plan uses the set_rel_pathlist_hook to enforce\n> > a particular path-node, including Gather-node, by manipulation of the cost\n> > value. Horiguchi-san, is it right?\n>\n> Mmm. I haven't expected that it is mentioned here.\n>\n> Actually in the hook, it changes enable_* planner variables, or\n> directly manipulates path costs or even can clear and\n> regenerate the path list and gather paths for the parallel\n> case. It will be happy if we had a chance to manipulate partial\n> paths before generating gather paths.\n>\nSo, is it sufficient if set_rel_pathlist_hook is just relocated in\nfront of generate_gather_paths?\nIf we have no use case for the second hook, there is little need\nto have the post_rel_pathlist_hook() here.\n(At least, PG-Strom will use the first hook only.)\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Wed, 9 Jan 2019 14:44:15 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> front of the generate_gather_paths?\n> If we have no use case for the second hook, here is little necessity\n> to have the post_rel_pathlist_hook() here.\n> (At least, PG-Strom will use the first hook only.)\n\n+1. That seems like the best way to be consistent with the principle\nthat we need to have all the partial paths before generating any\nGather paths.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 10 Jan 2019 15:52:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "2019年1月11日(金) 5:52 Robert Haas <robertmhaas@gmail.com>:\n>\n> On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> > front of the generate_gather_paths?\n> > If we have no use case for the second hook, here is little necessity\n> > to have the post_rel_pathlist_hook() here.\n> > (At least, PG-Strom will use the first hook only.)\n>\n> +1. That seems like the best way to be consistent with the principle\n> that we need to have all the partial paths before generating any\n> Gather paths.\n>\nPatch was updated, just for relocation of the set_rel_pathlist_hook.\nPlease check it.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Fri, 11 Jan 2019 11:09:59 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 9:10 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> 2019年1月11日(金) 5:52 Robert Haas <robertmhaas@gmail.com>:\n> > On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> > > front of the generate_gather_paths?\n> > > If we have no use case for the second hook, here is little necessity\n> > > to have the post_rel_pathlist_hook() here.\n> > > (At least, PG-Strom will use the first hook only.)\n> >\n> > +1. That seems like the best way to be consistent with the principle\n> > that we need to have all the partial paths before generating any\n> > Gather paths.\n> >\n> Patch was updated, just for relocation of the set_rel_pathlist_hook.\n> Please check it.\n\nSeems reasonable to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 11:36:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Hello, sorry for the absence.\n\nAt Fri, 11 Jan 2019 11:36:43 -0500, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmoYyxBgkfN_APBdxdutFMukb=P-EgGNY-NbauRcL7mGnmA@mail.gmail.com>\n> On Thu, Jan 10, 2019 at 9:10 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > 2019年1月11日(金) 5:52 Robert Haas <robertmhaas@gmail.com>:\n> > > On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> > > > front of the generate_gather_paths?\n> > > > If we have no use case for the second hook, here is little necessity\n> > > > to have the post_rel_pathlist_hook() here.\n> > > > (At least, PG-Strom will use the first hook only.)\n> > >\n> > > +1. That seems like the best way to be consistent with the principle\n> > > that we need to have all the partial paths before generating any\n> > > Gather paths.\n> > >\n> > Patch was updated, just for relocation of the set_rel_pathlist_hook.\n> > Please check it.\n> \n> Seems reasonable to me.\n\nAlso seems reasonable to me. The extension can call\ngenerate_gather_paths redundantly as is, but that does almost no\nharm, so it is acceptable even in a minor release.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Jan 2019 18:28:48 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Let me bump this thread.\nIf there are no more comments, objections, or better ideas, please commit this fix.\n\nThanks,\n\n2019年1月17日(木) 18:29 Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>:\n>\n> Hello, sorry for the absence.\n>\n> At Fri, 11 Jan 2019 11:36:43 -0500, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmoYyxBgkfN_APBdxdutFMukb=P-EgGNY-NbauRcL7mGnmA@mail.gmail.com>\n> > On Thu, Jan 10, 2019 at 9:10 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > 2019年1月11日(金) 5:52 Robert Haas <robertmhaas@gmail.com>:\n> > > > On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > > So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> > > > > front of the generate_gather_paths?\n> > > > > If we have no use case for the second hook, here is little necessity\n> > > > > to have the post_rel_pathlist_hook() here.\n> > > > > (At least, PG-Strom will use the first hook only.)\n> > > >\n> > > > +1. That seems like the best way to be consistent with the principle\n> > > > that we need to have all the partial paths before generating any\n> > > > Gather paths.\n> > > >\n> > > Patch was updated, just for relocation of the set_rel_pathlist_hook.\n> > > Please check it.\n> >\n> > Seems reasonable to me.\n>\n> Also seems reasonable to me. The extension can call\n> generate_gather_paths redundantly as is but it almost doesn't\n> harm, so it is acceptable even in a minor release.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n",
"msg_date": "Tue, 22 Jan 2019 20:50:31 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Hello,\n\nLet me bump this thread again.\nI've been waiting for the fix to be committed for a month...\n\n2019年1月22日(火) 20:50 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> Let me remind the thread.\n> If no more comments, objections, or better ideas, please commit this fix.\n>\n> Thanks,\n>\n> 2019年1月17日(木) 18:29 Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>:\n> >\n> > Hello, sorry for the absence.\n> >\n> > At Fri, 11 Jan 2019 11:36:43 -0500, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmoYyxBgkfN_APBdxdutFMukb=P-EgGNY-NbauRcL7mGnmA@mail.gmail.com>\n> > > On Thu, Jan 10, 2019 at 9:10 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > 2019年1月11日(金) 5:52 Robert Haas <robertmhaas@gmail.com>:\n> > > > > On Wed, Jan 9, 2019 at 12:44 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > > > > > So, is it sufficient if set_rel_pathlist_hook is just relocated in\n> > > > > > front of the generate_gather_paths?\n> > > > > > If we have no use case for the second hook, here is little necessity\n> > > > > > to have the post_rel_pathlist_hook() here.\n> > > > > > (At least, PG-Strom will use the first hook only.)\n> > > > >\n> > > > > +1. That seems like the best way to be consistent with the principle\n> > > > > that we need to have all the partial paths before generating any\n> > > > > Gather paths.\n> > > > >\n> > > > Patch was updated, just for relocation of the set_rel_pathlist_hook.\n> > > > Please check it.\n> > >\n> > > Seems reasonable to me.\n> >\n> > Also seems reasonable to me. The extension can call\n> > generate_gather_paths redundantly as is but it almost doesn't\n> > harm, so it is acceptable even in a minor release.\n> >\n> > regards.\n> >\n> > --\n> > Kyotaro Horiguchi\n> > NTT Open Source Software Center\n> >\n>\n>\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Wed, 6 Feb 2019 14:05:05 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "On Wed, Feb 6, 2019 at 10:35 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> Hello,\n> Let me remind the thread again.\n> I'm waiting for the fix getting committed for a month...\n>\n\nIt seems you would also like to see this back-patched. I am not sure\nif that is a good idea as there is some risk of breaking existing\nusage. Tom, do you have any opinion on this patch? It seems to me\nyou were thinking of having a separate hook for partial paths, but the\npatch has solved the problem by moving the hook location. I think,\nwhatever the case, we should try to reach some consensus and move\nforward with this patch, as KaiGai-san has been waiting for quite some time.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 9 Feb 2019 18:37:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> It seems you would also like to see this back-patched. I am not sure\n> if that is a good idea as there is some risk of breaking existing\n> usage. Tom, do you have any opinion on this patch? It seems to me\n> you were thinking to have a separate hook for partial paths, but the\n> patch has solved the problem by moving the hook location.\n\nI was expecting Haas to take point on this, but since he doesn't seem\nto be doing so, I'll push it. I don't think there's any material\nrisk of breaking things --- the only functionality lost is the ability to\nremove or modify baserel Gather paths, which I doubt anybody is interested\nin doing. Certainly that's way less useful than the ability to add\npartial paths and have them be included in Gather-building.\n\nIn a green field I'd rather have had a separate hook for adding partial\npaths, but it's not clear that that really buys much of anything except\nlogical cleanliness ... against which it adds cost since the using\nextension(s) have to figure out what's going on twice.\n\nAlso this way does have the advantage that it retroactively fixes things\nfor extensions that may be trying to make partial paths today.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 09 Feb 2019 10:27:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add_partial_path() may remove dominated path but still in use"
}
] |
[
{
"msg_contents": "Hi,\n\nPlease find attached a patch to enable support for temporary tables in\nprepared transactions when ON COMMIT DROP has been specified.\n\nThe comment in the existing code around this idea reads:\n\n\t * Don't allow PREPARE TRANSACTION if we've accessed a temporary table in\n\t * this transaction.\n [ ... ]\n\t * XXX In principle this could be relaxed to allow some useful special\n\t * cases, such as a temp table created and dropped all within the\n\t * transaction. That seems to require much more bookkeeping though.\n\nIn the attached patch I have added this paragraph, and of course the\nimplementation of it:\n\n\t * A special case of this situation is using ON COMMIT DROP, where the\n\t * call to PreCommit_on_commit_actions() is then responsible for\n\t * performing the DROP table within the transaction and before we get\n\t * here.\n\nRegards,\n-- \ndim",
"msg_date": "Fri, 28 Dec 2018 12:46:13 +0100",
"msg_from": "Dimitri Fontaine <dimitri@citusdata.com>",
"msg_from_op": true,
"msg_subject": "Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On 28/12/2018 12:46, Dimitri Fontaine wrote:\n> Hi,\n> \n> Please find attached a patch to enable support for temporary tables in\n> prepared transactions when ON COMMIT DROP has been specified.\n\nThe comments I made on IRC have been addressed in this version of the\npatch, so it looks good to me.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n",
"msg_date": "Fri, 28 Dec 2018 12:57:37 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "Hi Dimitri\n\nOn 2018-Dec-28, Dimitri Fontaine wrote:\n\n> Please find attached a patch to enable support for temporary tables in\n> prepared transactions when ON COMMIT DROP has been specified.\n\nGlad to see you submitting patches again.\n\nI suggest adding to your regression tests a case where the prepared\ntransaction commits, ensuring that the temp table is gone from the\ncatalogs.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 28 Dec 2018 15:06:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Glad to see you submitting patches again.\n\nThanks!\n\n> I suggest to add in your regression tests a case where the prepared\n> transaction commits, and ensuring that the temp table is gone from\n> catalogs.\n\nPlease find such a revision attached.\n\nRegards,\n-- \ndim",
"msg_date": "Fri, 28 Dec 2018 20:32:15 +0100",
"msg_from": "Dimitri Fontaine <dimitri@citusdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Fri, Dec 28, 2018 at 08:32:15PM +0100, Dimitri Fontaine wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I suggest to add in your regression tests a case where the prepared\n>> transaction commits, and ensuring that the temp table is gone from\n>> catalogs.\n> \n> Please find such a revision attached.\n\nBeing able to relax the case a bit is better than nothing, and it's\nnice to see incremental improvements. Thanks, Dimitri.\n\nI just had a very quick glance, so this is far from a detailed\nreview, but would it be possible to add test cases involving\ninheritance trees and/or partitions if that makes sense? The ON\nCOMMIT action handling is designed to make such cases work properly.\n--\nMichael",
"msg_date": "Sat, 29 Dec 2018 08:32:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Being able to relax a bit the case is better than nothing, so that's\n> nice to see incremental improvements. Thanks Dimitri.\n\nI'm afraid that this patch is likely to be, if not completely broken,\nat least very much less useful than one could wish by the time we get\ndone closing the holes discussed in this other thread:\n\nhttps://www.postgresql.org/message-id/flat/5d910e2e-0db8-ec06-dd5f-baec420513c3%40imap.cc\n\nFor instance, if we're going to have to reject the case where the\nsession's temporary schema was created during the current transaction,\nthen that puts a very weird constraint on whether this case works.\n\nAlso, even without worrying about new problems that that discussion\nmay lead to, I don't think that the patch works as-is. The function\nevery_on_commit_is_on_commit_drop() does what it says, but that is\nNOT sufficient to conclude that every temp table the transaction has\ntouched is on-commit-drop. This logic will successfully reject cases\nwith on-commit-delete-rows temp tables, but not cases where the temp\ntable(s) lack any ON COMMIT spec at all.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 13 Jan 2019 15:34:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "Hi,\n\nTom Lane <tgl@sss.pgh.pa.us> writes:\n> I'm afraid that this patch is likely to be, if not completely broken,\n> at least very much less useful than one could wish by the time we get\n> done closing the holes discussed in this other thread:\n>\n> https://www.postgresql.org/message-id/flat/5d910e2e-0db8-ec06-dd5f-baec420513c3%40imap.cc\n\nThanks for the review here, Tom, and for linking to the other\ndiscussion (Álvaro did that too, thanks!). I've been reviewing it too.\n\nI didn't think about the pg_temp_NN namespaces in my approach, and I\nthink it might be possible to make it work, but it's getting quite\ninvolved now.\n\nOne idea would be that if every temp table in the session belongs to the\ntransaction, and their namespace too (we could check through pg_depend\nthat the namespace doesn't contain anything else besides the\ntransaction's tables), then we could dispose of the temp schema and\non-commit-drop tables at PREPARE TRANSACTION time.\n\nOtherwise, as before, prevent the transaction from being a 2PC one.\n\n> For instance, if we're going to have to reject the case where the\n> session's temporary schema was created during the current transaction,\n> then that puts a very weird constraint on whether this case works.\n\nYeah. The goal of my approach is to transparently get back temp table\nsupport in 2PC when that makes sense, which covers most use cases I've been\nconfronted with. We use 2PC in Citus, and it would be nice to be able to\nuse transaction-local temp tables on worker nodes when doing data\ningestion and roll-ups.\n\n> Also, even without worrying about new problems that that discussion\n> may lead to, I don't think that the patch works as-is. The function\n> every_on_commit_is_on_commit_drop() does what it says, but that is\n> NOT sufficient to conclude that every temp table the transaction has\n> touched is on-commit-drop. 
This logic will successfully reject cases\n> with on-commit-delete-rows temp tables, but not cases where the temp\n> table(s) lack any ON COMMIT spec at all.\n\nThanks! I missed that the lack of ON COMMIT spec would have that impact\nin the code. We could add tracking of that I suppose, and will have a\nlook at how to implement it provided that the other points find an\nacceptable solution.\n\nRegards,\n-- \ndim\n\n",
"msg_date": "Mon, 14 Jan 2019 19:41:18 +0100",
"msg_from": "Dimitri Fontaine <dimitri@citusdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 07:41:18PM +0100, Dimitri Fontaine wrote:\n> Thanks for the review here Tom, and for linking with the other\n> discussion (Álvaro did that too, thanks!). I've been reviewing it\n> too.\n\nIf you can look at the patch, reviews are welcome. There are quite a\nfew patterns I spotted on the way.\n\n> I didn't think about the pg_temp_NN namespaces in my approach, and I\n> think it might be possible to make it work, but it's getting quite\n> involved now.\n>\n> One idea would be that if every temp table in the session belongs to the\n> transaction, and their namespace too (we could check through pg_depend\n> that the namespace doesn't contain anything else beside the\n> transaction's tables), then we could dispose of the temp schema and\n> on-commit-drop tables at PREPARE COMMIT time.\n\nHm. A strong assumption that we rely on in the code is that the\ntemporary namespace drop only happens when the session ends, so you\nwould need to complicate the logic so that the namespace is created in a\ngiven transaction, which is something that can be done (at least\nthat's what my patch on the other thread adds control for), and that\nno objects other than ON COMMIT tables are created, which is more\ntricky to track (still, things would get weird with a LOCK on ON COMMIT\nDROP tables?). The root of the problem is that the objects' previous\nversions would still be around between the PREPARE TRANSACTION and\nCOMMIT PREPARED, and that both queries can be run perfectly\ntransparently across multiple sessions.\n\nBack in the day, one thing that we did in Postgres-XC was to enforce\nthat 2PC not be used and use a direct commit instead of failing, which\nwas utterly wrong. Postgres-XL may be reusing some of that :(\n\n> Yeah. The goal of my approach is to transparently get back temp table\n> support in 2PC when that makes sense, which is most use cases I've been\n> confronted to. 
We use 2PC in Citus, and it would be nice to be able to\n> use transaction local temp tables on worker nodes when doing data\n> ingestion and roll-ups.\n\nYou have not considered the case of inherited tables and partitioned tables\nmixing ON COMMIT actions of different types as well. For inherited\ntables this does not matter much, I think; perhaps for partitions it\ndoes (see tests in 52ea6a8, which you would need to mix with 2PC).\n--\nMichael",
"msg_date": "Tue, 15 Jan 2019 11:41:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 1:41 PM Dimitri Fontaine <dimitri@citusdata.com> wrote:\n> One idea would be that if every temp table in the session belongs to the\n> transaction, and their namespace too (we could check through pg_depend\n> that the namespace doesn't contain anything else beside the\n> transaction's tables), then we could dispose of the temp schema and\n> on-commit-drop tables at PREPARE COMMIT time.\n\nWhy not just drop any on-commit-drop tables at PREPARE TRANSACTION\ntime and leave the schema alone? If there are any temp tables touched\nby the transaction which are not on-commit-drop then we'd have to\nfail, but as long as all the tables we've got are on-commit-drop then\nit seems fine to just nuke them at PREPARE time. Such tables must've\nbeen created in the current transaction, because otherwise the\ncreating transaction aborted and they're gone for that reason, or it\ncommitted and they're gone because they're on-commit-drop. And\nregardless of whether the transaction we are preparing goes on to\ncommit or abort, those tables will be gone afterwards for the same\nreasons. So there doesn't in this case seem to be any reason to keep\nthem around until the transaction's fate is known.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 16 Jan 2019 11:44:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On 16/01/2019 17:44, Robert Haas wrote:\n> On Mon, Jan 14, 2019 at 1:41 PM Dimitri Fontaine <dimitri@citusdata.com> wrote:\n>> One idea would be that if every temp table in the session belongs to the\n>> transaction, and their namespace too (we could check through pg_depend\n>> that the namespace doesn't contain anything else beside the\n>> transaction's tables), then we could dispose of the temp schema and\n>> on-commit-drop tables at PREPARE COMMIT time.\n> \n> Why not just drop any on-commit-drop tables at PREPARE TRANSACTION\n> time and leave the schema alone? If there are any temp tables touched\n> by the transaction which are not on-commit-drop then we'd have to\n> fail, but as long as all the tables we've got are on-commit-drop then\n> it seems fine to just nuke them at PREPARE time. Such tables must've\n> been created in the current transaction, because otherwise the\n> creating transaction aborted and they're gone for that reason, or it\n> committed and they're gone because they're on-commit-drop. And\n> regardless of whether the transaction we are preparing goes on to\n> commit or abort, those tables will be gone afterwards for the same\n> reasons. So there doesn't in this case seem to be any reason to keep\n> them around until the transaction's fate is known.\n\nIsn't that what happens already? PrepareTransaction() calls\nPreCommit_on_commit_actions() from what I can tell.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n",
"msg_date": "Fri, 18 Jan 2019 10:50:29 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 4:50 AM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> Isn't that what happens already? PrepareTransaction() calls\n> PreCommit_on_commit_actions() from what I can tell.\n\nHuh. Well, in that case, I'm not sure I understand we really need to\ndo beyond removing the error checks for the case where all tables are\non-commit-drop.\n\nIt could be useful to do something about the issue with pg_temp\ncreation that Tom linked to in the other thread. But even if you\ndidn't do that, it'd be pretty easy to work around this in application\ncode -- just issue a dummy CREATE TEMP TABLE .. ON COMMIT DROP\nstatement the first time you use a connection, so that the temp schema\ndefinitely exists. So I'm not sure I'd view that as a blocker for\nthis patch, even though it's kind of a sucky limitation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 18 Jan 2019 10:39:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 10:39:46AM -0500, Robert Haas wrote:\n> Huh. Well, in that case, I'm not sure I understand we really need to\n> do beyond removing the error checks for the case where all tables are\n> on-commit-drop.\n\nI have not looked at the patch in details, but we should really be\ncareful that if we do that the namespace does not remain behind when\nperforming such transactions so as it cannot be dropped. On my very\nrecent lookups of this class of problems you can easily finish by\nblocking a backend from shutting down when dropping its temporary\nschema, with the client, say psql, already able to disconnect. So as\nlong as the 2PC transaction is not COMMIT PREPARED the backend-side\nwait will not be able to complete, blocking a backend slot in shared\nmemory. PREPARE TRANSACTION is very close to a simple commit in terms\nof its semantics, while COMMIT PREPARED is just here to finish\nreleasing resources.\n\n> It could be useful to do something about the issue with pg_temp\n> creation that Tom linked to in the other thread. But even if you\n> didn't do that, it'd be pretty easy to work around this in application\n> code -- just issue a dummy CREATE TEMP TABLE .. ON COMMIT DROP\n> statement the first time you use a connection, so that the temp schema\n> definitely exists. So I'm not sure I'd view that as a blocker for\n> this patch, even though it's kind of a sucky limitation.\n\nThat's not really user-friendly, still workable. Or you could just\ncall current_schema() ;)\n--\nMichael",
"msg_date": "Sat, 19 Jan 2019 10:39:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 10:39:43AM +0900, Michael Paquier wrote:\n> I have not looked at the patch in details, but we should really be\n> careful that if we do that the namespace does not remain behind when\n> performing such transactions so as it cannot be dropped. On my very\n> recent lookups of this class of problems you can easily finish by\n> blocking a backend from shutting down when dropping its temporary\n> schema, with the client, say psql, already able to disconnect. So as\n> long as the 2PC transaction is not COMMIT PREPARED the backend-side\n> wait will not be able to complete, blocking a backend slot in shared\n> memory. PREPARE TRANSACTION is very close to a simple commit in terms\n> of its semantics, while COMMIT PREPARED is just here to finish\n> releasing resources.\n\nI have been looking at this patch, which conflicts on HEAD by the way\n(Sorry!) still it is easy enough to get rid of the conflict, and from\nwhat I can see it does not completely do its job. Simply take the\nfollowing example:\n=# begin;\nBEGIN\n=# create temp table aa (a int ) on commit drop;\nCREATE TABLE\n=# prepare transaction 'ad';\nPREPARE TRANSACTION\n=# \\q\n\nThis causes the client to think that the session is finished, but if\nyou look closer at the backend it is still pending to close until the\ntransaction is COMMIT PREPARED:\nmichael 22126 0.0 0.0 218172 15788 ? 
Ss 15:59 0:00\npostgres: michael michael [local] idle waiting\n\nHere is a backtrace:\n#7 0x00005616900d0462 in LockAcquireExtended (locktag=0x7ffdd6bd5390,\nlockmode=8, sessionLock=false, dontWait=false,\nreportMemoryError=true, locallockp=0x0)\nat lock.c:1050\n#8 0x00005616900cf9ab in LockAcquire (locktag=0x7ffdd6bd5390,\nlockmode=8, sessionLock=false, dontWait=false) at lock.c:713\n#9 0x00005616900ced07 in LockDatabaseObject (classid=2615,\nobjid=16385, objsubid=0, lockmode=8) at lmgr.c:934\n#10 0x000056168fd8cace in AcquireDeletionLock (object=0x7ffdd6bd5414,\nflags=0) at dependency.c:1389\n#11 0x000056168fd8b398 in performDeletion (object=0x7ffdd6bd5414,\nbehavior=DROP_CASCADE, flags=29) at dependency.c:325\n#12 0x000056168fda103a in RemoveTempRelations (tempNamespaceId=16385)\nat namespace.c:4142\n#13 0x000056168fda106d in RemoveTempRelationsCallback (code=0, arg=0)\nat namespace.c:4161\n\nIf you really intend to drop any trace of the objects at PREPARE\nphase, that does not seem completely impossible to me, still you would\nalso need handling for the case where the temp table created also\ncreates the temporary schema for the session.\n--\nMichael",
"msg_date": "Mon, 28 Jan 2019 16:06:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
},
{
"msg_contents": "On Mon, Jan 28, 2019 at 04:06:11PM +0900, Michael Paquier wrote:\n> If you really intend to drop any trace of the objects at PREPARE\n> phase, that does not seem completely impossible to me, still you would\n> also need handling for the case where the temp table created also\n> creates the temporary schema for the session.\n\nMore work needs to be done, and the patch has problems, so I am\nmarking this patch as returned with feedback.\n--\nMichael",
"msg_date": "Fri, 1 Feb 2019 10:10:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepare Transaction support for ON COMMIT DROP temporary tables"
}
]
[
{
"msg_contents": "Hi!\n\nHow can I define regression tests which should use multiple client\nsessions to test interaction between them?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 28 Dec 2018 12:49:08 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regression tests using multiple sessions"
},
{
"msg_contents": "Hi,\n\nOn 2018-Dec-28, Mitar wrote:\n\n> How can I define regression tests which should use multiple client\n> sessions to test interaction between them?\n\nSee src/test/isolation/README.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 28 Dec 2018 18:01:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests using multiple sessions"
},
{
"msg_contents": "Hi!\n\nThanks.\n\n\nMitar\n\nOn Fri, Dec 28, 2018 at 1:01 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> On 2018-Dec-28, Mitar wrote:\n>\n> > How can I define regression tests which should use multiple client\n> > sessions to test interaction between them?\n>\n> See src/test/isolation/README.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Fri, 28 Dec 2018 13:05:50 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Regression tests using multiple sessions"
},
{
"msg_contents": "Hi,\n\nOn 2018-Dec-28, Mitar wrote:\n\n> Hi!\n> \n> Thanks.\n\nYou're welcome. Please don't top-post.\n\nRegards\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 28 Dec 2018 18:21:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests using multiple sessions"
}
]
[
{
"msg_contents": "\"env EXTRA_REGRESS_OPTS=--temp-config=SOMEFILE make check\" appends the\ncontents of SOMEFILE to the test cluster's postgresql.conf. I want a similar\nfeature for TAP suites and other non-pg_regress suites. (My immediate use\ncase is to raise authentication_timeout and wal_sender_timeout on my buildfarm\nanimals, which sometimes fail at the defaults.) I'm thinking to do this by\nrecognizing the PG_TEST_TEMP_CONFIG environment variable as a\nwhitespace-separated list of file names for appending to postgresql.conf.\n\n",
"msg_date": "Fri, 28 Dec 2018 18:19:50 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Augment every test postgresql.conf"
},
{
"msg_contents": "On Fri, Dec 28, 2018 at 06:19:50PM -0800, Noah Misch wrote:\n> \"env EXTRA_REGRESS_OPTS=--temp-config=SOMEFILE make check\" appends the\n> contents of SOMEFILE to the test cluster's postgresql.conf. I want a similar\n> feature for TAP suites and other non-pg_regress suites. (My immediate use\n> case is to raise authentication_timeout and wal_sender_timeout on my buildfarm\n> animals, which sometimes fail at the defaults.) I'm thinking to do this by\n> recognizing the PG_TEST_TEMP_CONFIG environment variable as a\n> whitespace-separated list of file names for appending to postgresql.conf.\n\nLooking more closely, we already have the TEMP_CONFIG variable and apply it to\neverything except TAP suites. Closing that gap, as attached, is enough. The\nbuildfarm client uses TEMP_CONFIG to implement its extra_config setting, so\nthis will cause extra_config to start applying to TAP suites.",
"msg_date": "Sat, 29 Dec 2018 21:40:14 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Looking more closely, we already have the TEMP_CONFIG variable and apply it to\n> everything except TAP suites. Closing that gap, as attached, is enough. The\n> buildfarm client uses TEMP_CONFIG to implement its extra_config setting, so\n> this will cause extra_config to start applying to TAP suites.\n\nSeems reasonable, but it might be a good idea to warn the buildfarm-owners\nlist before committing. (Although I guess it wouldn't be hard to check\nthe buildfarm database to see if anyone is putting anything interesting\ninto their critters' TEMP_CONFIG.)\n\nAlso, if we're to do this, it seems like applying it to back branches\nwould be helpful --- but will it work in all the back branches?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 22:46:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "On Sat, Dec 29, 2018 at 10:46:31PM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Looking more closely, we already have the TEMP_CONFIG variable and apply it to\n> > everything except TAP suites. Closing that gap, as attached, is enough. The\n> > buildfarm client uses TEMP_CONFIG to implement its extra_config setting, so\n> > this will cause extra_config to start applying to TAP suites.\n> \n> Seems reasonable, but it might be a good idea to warn the buildfarm-owners\n> list before committing. (Although I guess it wouldn't be hard to check\n> the buildfarm database to see if anyone is putting anything interesting\n> into their critters' TEMP_CONFIG.)\n\nGood idea. Here are the extra_config entries seen since 2018-12-01:\n\n archive_mode = off\n force_parallel_mode = regress\n fsync = off\n fsync = on\n jit=1\n jit = 1\n jit_above_cost=0\n jit = on\n jit_optimize_above_cost=1000\n log_checkpoints = 'true'\n log_connections = 'true'\n log_disconnections = 'true'\n log_line_prefix = '[%c:%l] '\n log_line_prefix = '%m [%c:%l] '\n log_line_prefix = '%m [%c:%l] %q%a '\n log_line_prefix = '%m [%p:%l] '\n log_line_prefix = '%m [%p:%l] %q%a '\n log_line_prefix = '%m [%s %p:%l] %q%a '\n log_statement = 'all'\n max_parallel_workers_per_gather = 2\n max_parallel_workers_per_gather = 5\n max_wal_senders = 0\n shared_buffers = 10MB\n stats_temp_directory = '/cygdrive/w/lorikeet/HEAD'\n stats_temp_directory = '/cygdrive/w/lorikeet/REL_10_STABLE'\n stats_temp_directory = '/cygdrive/w/lorikeet/REL_11_STABLE'\n stats_temp_directory = '/cygdrive/w/lorikeet/REL9_4_STABLE'\n stats_temp_directory = '/cygdrive/w/lorikeet/REL9_5_STABLE'\n stats_temp_directory = '/cygdrive/w/lorikeet/REL9_6_STABLE'\n stats_temp_directory= '/home/buildfarm/data/stats_temp'\n wal_level = 'minimal'\n\nProblems:\n\n1. max_wal_senders=0 and wal_level=minimal break a number of suites,\n e.g. pg_basebackup.\n2. 
stats_temp_directory is incompatible with TAP suites that start more than\n one node simultaneously.\n\nProblem (1) goes away if I inject the TEMP_CONFIG settings earlier in the\nfile, which seems defensible:\n\n--- a/src/test/perl/PostgresNode.pm\n+++ b/src/test/perl/PostgresNode.pm\n@@ -447,10 +447,13 @@ sub init\n \tprint $conf \"log_statement = all\\n\";\n \tprint $conf \"log_replication_commands = on\\n\";\n \tprint $conf \"wal_retrieve_retry_interval = '500ms'\\n\";\n \tprint $conf \"port = $port\\n\";\n \n+\tprint $conf TestLib::slurp_file($ENV{TEMP_CONFIG})\n+\t if defined $ENV{TEMP_CONFIG};\n+\n \tif ($params{allows_streaming})\n \t{\n \t\tif ($params{allows_streaming} eq \"logical\")\n \t\t{\n \t\t\tprint $conf \"wal_level = logical\\n\";\n\nProblem (2) remains. It's already a problem for \"make -j check-world\". I'll\ngive that one more thought.\n\n> Also, if we're to do this, it seems like applying it to back branches\n> would be helpful --- but will it work in all the back branches?\n\nYes. TEMP_CONFIG exists in all supported branches, and the back-patch to 9.4\nis no more complex. Before 9.6 (commit 87cc6b5), TEMP_CONFIG affected \"make\ncheck\" and the pg_upgrade test suite, but it did not affect other pg_regress\nsuites like contrib/* and src/pl/*. We could back-patch commit 87cc6b5, if\nthere's demand. I don't personally need it, because the tests I want to\ninfluence are all TAP tests.\n\n",
"msg_date": "Sun, 30 Dec 2018 00:53:46 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "\nOn 12/30/18 12:53 AM, Noah Misch wrote:\n> On Sat, Dec 29, 2018 at 10:46:31PM -0500, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> Looking more closely, we already have the TEMP_CONFIG variable and apply it to\n>>> everything except TAP suites. Closing that gap, as attached, is enough. The\n>>> buildfarm client uses TEMP_CONFIG to implement its extra_config setting, so\n>>> this will cause extra_config to start applying to TAP suites.\n>> Seems reasonable, but it might be a good idea to warn the buildfarm-owners\n>> list before committing. (Although I guess it wouldn't be hard to check\n>> the buildfarm database to see if anyone is putting anything interesting\n>> into their critters' TEMP_CONFIG.)\n> Good idea. Here are the extra_config entries seen since 2018-12-01:\n>\n> archive_mode = off\n> force_parallel_mode = regress\n> fsync = off\n> fsync = on\n> jit=1\n> jit = 1\n> jit_above_cost=0\n> jit = on\n> jit_optimize_above_cost=1000\n> log_checkpoints = 'true'\n> log_connections = 'true'\n> log_disconnections = 'true'\n> log_line_prefix = '[%c:%l] '\n> log_line_prefix = '%m [%c:%l] '\n> log_line_prefix = '%m [%c:%l] %q%a '\n> log_line_prefix = '%m [%p:%l] '\n> log_line_prefix = '%m [%p:%l] %q%a '\n> log_line_prefix = '%m [%s %p:%l] %q%a '\n> log_statement = 'all'\n> max_parallel_workers_per_gather = 2\n> max_parallel_workers_per_gather = 5\n> max_wal_senders = 0\n> shared_buffers = 10MB\n> stats_temp_directory = '/cygdrive/w/lorikeet/HEAD'\n> stats_temp_directory = '/cygdrive/w/lorikeet/REL_10_STABLE'\n> stats_temp_directory = '/cygdrive/w/lorikeet/REL_11_STABLE'\n> stats_temp_directory = '/cygdrive/w/lorikeet/REL9_4_STABLE'\n> stats_temp_directory = '/cygdrive/w/lorikeet/REL9_5_STABLE'\n> stats_temp_directory = '/cygdrive/w/lorikeet/REL9_6_STABLE'\n> stats_temp_directory= '/home/buildfarm/data/stats_temp'\n> wal_level = 'minimal'\n>\n> Problems:\n>\n> 1. 
max_wal_senders=0 and wal_level=minimal break a number of suites,\n> e.g. pg_basebackup.\n> 2. stats_temp_directory is incompatible with TAP suites that start more than\n> one node simultaneously.\n>\n> Problem (1) goes away if I inject the TEMP_CONFIG settings earlier in the\n> file, which seems defensible:\n>\n> --- a/src/test/perl/PostgresNode.pm\n> +++ b/src/test/perl/PostgresNode.pm\n> @@ -447,10 +447,13 @@ sub init\n> \tprint $conf \"log_statement = all\\n\";\n> \tprint $conf \"log_replication_commands = on\\n\";\n> \tprint $conf \"wal_retrieve_retry_interval = '500ms'\\n\";\n> \tprint $conf \"port = $port\\n\";\n> \n> +\tprint $conf TestLib::slurp_file($ENV{TEMP_CONFIG})\n> +\t if defined $ENV{TEMP_CONFIG};\n> +\n> \tif ($params{allows_streaming})\n> \t{\n> \t\tif ($params{allows_streaming} eq \"logical\")\n> \t\t{\n> \t\t\tprint $conf \"wal_level = logical\\n\";\n>\n> Problem (2) remains. It's already a problem for \"make -j check-world\". I'll\n> give that one more thought.\n>\n\n\nlorikeet is putting the stats_temp directory on a ramdisk. This is worth\ntesting in any case, but in lorikeet's case was done to help speed up\nthe tests. When I had a Raspberry Pi instance I did something similar,\nfor the same reason.\n\n\nThe obvious quick fix would be to have PostgresNode.pm set this to the\ndefault after inserting the TEMP_CONFIG file.\n\n\nThere are a couple of problems here that bear further consideration.\nFirst, that the stats_temp_directory has to exist, and second that there\nis no convenient way to make it unique. It would be nice if a) the\ndirectory could be created if it didn't exist and b) some place-holder\nin the name could be replaced by a unique identifier such as the node\nid. If there is interest I'll work on these. One problem I foresee is\nthat it might lead to a plethora of stats temp directories being left\naround. Still thinking about how we should deal with that. 
In the\nbuildfarm client I'd be tempted to create a directory to hold all the\nrun's stats_temp_directories and then clean it up at the end of the run.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Dec 2018 10:32:31 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 10:32:31AM -0500, Andrew Dunstan wrote:\n> On 12/30/18 12:53 AM, Noah Misch wrote:\n> > stats_temp_directory = '/cygdrive/w/lorikeet/HEAD'\n\n> > 2. stats_temp_directory is incompatible with TAP suites that start more than\n> > one node simultaneously.\n\n> > It's already a problem for \"make -j check-world\".\n\n> lorikeet is putting the stats_temp directory on a ramdisk. This is worth\n> testing in any case, but in lorikeet's case was done to help speed up\n> the tests. When I had a Raspberry Pi instance I did something similar,\n> for the same reason.\n\nI, too, value the ability to override stats_temp_directory for test runs. (I\nget stats.sql failures at high load, even on a high-performance machine.\nUsing stats_temp_directory may fix that.)\n\n> The obvious quick fix would be to have PostgresNode.pm set this to the\n> default after inserting the TEMP_CONFIG file.\n\nTrue.\n\n> There are a couple of problems here that bear further consideration.\n> First, that the stats_temp_directory has to exist, and second that there\n> is no convenient way to make it unique. It would be nice if a) the\n> directory could be created if it didn't exist and b) some place-holder\n> in the name could be replaced by a unique identifier such as the node\n> id. If there is interest I'll work on these. One problem I foresee is\n> that it might lead to a plethora of stats temp directories being left\n> around. Still thinking about how we should deal with that. In the\n> buildfarm client I'd be tempted to create a directory to hold all the\n> run's stats_temp_directories and then clean it up at the end of the run.\n\nI'm thinking the server should manage this; during startup, create\n$stats_temp_directory/PostgreSQL.$postmaster_pid and store each stats file\ntherein. Just before creating that directory, scan $stats_temp_directory and\ndelete subdirectories that no longer correspond to live PIDs. 
Subdirectories\nwould not build up over time, even if one deletes a test data directory while\nits subdirectory of stats_temp_directory still exists. For non-test\napplications, this makes stats_temp_directory safer to use. Today, we don't\ndetect two clusters using the same stats_temp_directory. We don't even\ndocument that it's unacceptable. This makes it acceptable.\n\nThanks,\nnm\n\n",
"msg_date": "Sun, 30 Dec 2018 19:28:15 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "stats_temp_directory conflicts"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> I'm thinking the server should manage this; during startup, create\n> $stats_temp_directory/PostgreSQL.$postmaster_pid and store each stats file\n> therein.\n\n+1\n\n> Just before creating that directory, scan $stats_temp_directory and\n> delete subdirectories that no longer correspond to live PIDs.\n\nHm, seems potentially racy, if multiple postmasters are starting\nat the same time. The only one you can be *sure* is dead is one\nwith your own PID.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 19:47:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stats_temp_directory conflicts"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 07:47:05PM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > I'm thinking the server should manage this; during startup, create\n> > $stats_temp_directory/PostgreSQL.$postmaster_pid and store each stats file\n> > therein.\n> \n> +1\n> \n> > Just before creating that directory, scan $stats_temp_directory and\n> > delete subdirectories that no longer correspond to live PIDs.\n> \n> Hm, seems potentially racy, if multiple postmasters are starting\n> at the same time. The only one you can be *sure* is dead is one\n> with your own PID.\n\nTrue; if a PID=123 postmaster launches and completes startup after a\nslightly-older PID=122 postmaster issues kill(123, 0) and before PID=122\nissues unlink()s, PID=122 unlinks files wrongly. I think I would fix that\nwith fcntl(F_SETLKW)/Windows-equivalent on some file in stats_temp_directory.\nOne would acquire the lock before deleting a subdirectory, except for a\npostmaster deleting the directory it created. (If a postmaster finds a stale\ndirectory for its PID, delete that directory and create a new one. Like any\ndeletion, one must hold the lock.)\n\n(Alternately, one could just accept that risk.)\n\nAnother problem comes to mind; if postmasters of different UIDs share a\nstats_temp_directory, then a PID=1234 postmaster may find itself unable to\ndelete the stale PostgreSQL.1234 subdirectory owned by some other UID. To fix\nthat, the name pattern probably should be\n$stats_temp_directory/PostgreSQL.$euid.$postmaster_pid.\n\n",
"msg_date": "Sun, 30 Dec 2018 23:06:15 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: stats_temp_directory conflicts"
},
{
"msg_contents": "\nOn 12/30/18 11:06 PM, Noah Misch wrote:\n> On Sun, Dec 30, 2018 at 07:47:05PM -0500, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> I'm thinking the server should manage this; during startup, create\n>>> $stats_temp_directory/PostgreSQL.$postmaster_pid and store each stats file\n>>> therein.\n>> +1\n>>\n>>> Just before creating that directory, scan $stats_temp_directory and\n>>> delete subdirectories that no longer correspond to live PIDs.\n>> Hm, seems potentially racy, if multiple postmasters are starting\n>> at the same time. The only one you can be *sure* is dead is one\n>> with your own PID.\n> True; if a PID=123 postmaster launches and completes startup after a\n> slightly-older PID=122 postmaster issues kill(123, 0) and before PID=122\n> issues unlink()s, PID=122 unlinks files wrongly. I think I would fix that\n> with fcntl(F_SETLKW)/Windows-equivalent on some file in stats_temp_directory.\n> One would acquire the lock before deleting a subdirectory, except for a\n> postmaster deleting the directory it created. (If a postmaster finds a stale\n> directory for its PID, delete that directory and create a new one. Like any\n> deletion, one must hold the lock.)\n>\n> (Alternately, one could just accept that risk.)\n>\n> Another problem comes to mind; if postmasters of different UIDs share a\n> stats_temp_directory, then a PID=1234 postmaster may find itself unable to\n> delete the stale PostgreSQL.1234 subdirectory owned by some other UID. To fix\n> that, the name pattern probably should be\n> $stats_temp_directory/PostgreSQL.$euid.$postmaster_pid.\n\n\n\nI like this scheme. It will certainly make using a RAMdisk simpler for\nbuildfarm members.\n\n\n+1 for locking rather than running the risk of incorrect deletions.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Dec 2018 16:52:33 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stats_temp_directory conflicts"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 10:32:31AM -0500, Andrew Dunstan wrote:\n> On 12/30/18 12:53 AM, Noah Misch wrote:\n> > 2. stats_temp_directory is incompatible with TAP suites that start more than\n> > one node simultaneously.\n\n> The obvious quick fix would be to have PostgresNode.pm set this to the\n> default after inserting the TEMP_CONFIG file.\n\nI'd like to get $SUBJECT in place for variables other than\nstats_temp_directory, using your quick fix idea. Attached. When its time\ncomes, your stats_temp_directory work can delete that section.",
"msg_date": "Sat, 6 Apr 2019 23:41:56 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "On Sun, Apr 7, 2019 at 2:41 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sun, Dec 30, 2018 at 10:32:31AM -0500, Andrew Dunstan wrote:\n> > On 12/30/18 12:53 AM, Noah Misch wrote:\n> > > 2. stats_temp_directory is incompatible with TAP suites that start more than\n> > > one node simultaneously.\n>\n> > The obvious quick fix would be to have PostgresNode.pm set this to the\n> > default after inserting the TEMP_CONFIG file.\n>\n> I'd like to get $SUBJECT in place for variables other than\n> stats_temp_directory, using your quick fix idea. Attached. When its time\n> comes, your stats_temp_directory work can delete that section.\n\nLooks good.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Apr 2019 07:56:02 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "On Sun, Apr 07, 2019 at 07:56:02AM -0400, Andrew Dunstan wrote:\n> On Sun, Apr 7, 2019 at 2:41 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Sun, Dec 30, 2018 at 10:32:31AM -0500, Andrew Dunstan wrote:\n> > > On 12/30/18 12:53 AM, Noah Misch wrote:\n> > > > 2. stats_temp_directory is incompatible with TAP suites that start more than\n> > > > one node simultaneously.\n> >\n> > > The obvious quick fix would be to have PostgresNode.pm set this to the\n> > > default after inserting the TEMP_CONFIG file.\n> >\n> > I'd like to get $SUBJECT in place for variables other than\n> > stats_temp_directory, using your quick fix idea. Attached. When its time\n> > comes, your stats_temp_directory work can delete that section.\n> \n> Looks good.\n\nPushed. This broke 010_dump_connstr.pl on bowerbird, introducing 'invalid\nbyte sequence for encoding \"UTF8\"' errors. That's because log_connections\nrenders this 010_dump_connstr.pl solution insufficient:\n\n # In a SQL_ASCII database, pgwin32_message_to_UTF16() needs to\n # interpret everything as UTF8. We're going to use byte sequences\n # that aren't valid UTF-8 strings, so that would fail. Use LATIN1,\n # which accepts any byte and has a conversion from each byte to UTF-8.\n $ENV{LC_ALL} = 'C';\n $ENV{PGCLIENTENCODING} = 'LATIN1';\n\nThe log_connections message prints before CheckMyDatabase() calls\npg_perm_setlocale() to activate that LATIN1 database encoding. Since\nbowerbird does a non-NLS build, GetMessageEncoding()==PG_SQL_ASCII at that\ntime. Some options:\n\n1. Make this one test explicitly set log_connections = off. This workaround\n restores what we had a day ago.\n\n2. Move the log_connections message after CheckMyDatabase() calls\n pg_perm_setlocale(), so it gets regular post-startup encoding treatment.\n That fixes this particular test. It's still wrong when a database's name\n is not valid in that database's encoding.\n\n3. 
If GetMessageEncoding()==PG_SQL_ASCII, make pgwin32_message_to_UTF16()\n assume the text is already UTF8, like it does when not in a transaction.\n If UTF8->UTF16 conversion fails, the caller will send untranslated bytes to\n write() or ReportEventA().\n\n4. If GetMessageEncoding()==PG_SQL_ASCII, make pgwin32_message_to_UTF16()\n return NULL. The caller will always send untranslated bytes to write() or\n ReportEventA(). This seems consistent with the SQL_ASCII concept and with\n pg_do_encoding_conversion()'s interpretation of SQL_ASCII.\n\n5. When including a datname or rolname value in a message, hex-escape\n non-ASCII bytes. They are byte sequences, not text of known encoding.\n This preserves the most information, but it's overkill and ugly in the\n probably-common case of one encoding across all databases of a cluster.\n\nI'm inclined to do (1) in back branches and (4) in HEAD only. (If starting\nfresh today, I would store the encoding of each rolname and dbname or just use\nUTF8 for those particular fields.) Other preferences?\n\nThanks,\nnm\n\n\n",
"msg_date": "Sat, 11 May 2019 18:56:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Pushed. This broke 010_dump_connstr.pl on bowerbird, introducing 'invalid\n> byte sequence for encoding \"UTF8\"' errors. That's because log_connections\n> renders this 010_dump_connstr.pl solution insufficient:\n\nUgh.\n\n> 4. If GetMessageEncoding()==PG_SQL_ASCII, make pgwin32_message_to_UTF16()\n> return NULL. The caller will always send untranslated bytes to write() or\n> ReportEventA(). This seems consistent with the SQL_ASCII concept and with\n> pg_do_encoding_conversion()'s interpretation of SQL_ASCII.\n\n> 5. When including a datname or rolname value in a message, hex-escape\n> non-ASCII bytes. They are byte sequences, not text of known encoding.\n> This preserves the most information, but it's overkill and ugly in the\n> probably-common case of one encoding across all databases of a cluster.\n\n> I'm inclined to do (1) in back branches and (4) in HEAD only. (If starting\n> fresh today, I would store the encoding of each rolname and dbname or just use\n> UTF8 for those particular fields.) Other preferences?\n\nI agree that (4) is a fairly reasonable thing to do, and wouldn't mind\nback-patching that. Taking a wider view, this seems closely related\nto something I've been thinking about in connection with the recent\npg_stat_activity contretemps: that mechanism is also shoving strings\nacross database boundaries without a lot of worry about encodings.\nMaybe we should try to develop a common solution.\n\nOne difference from the datname/rolname situation is that for\npg_stat_activity we can know the source encoding --- we aren't storing\nit now, but we easily could. If we're thinking of a future solution\nonly, adding a \"name encoding\" field to relevant shared catalogs makes\nsense perhaps. 
Alternatively, requiring names in shared catalogs to be\nUTF8 might be a reasonable answer too.\n\nIn all these cases, throwing an error when we can't translate a character\ninto the destination encoding is not very pleasant. For pg_stat_activity,\nI was imagining that translating such characters to '?' might be the best\nanswer. I don't know if we can get away with that for the datname/rolname\ncase --- at the very least, it opens problems with apparent duplication of\nnames that should be unique. I don't much like your hex-encoding answer,\nthough; that has its own uniqueness-violation hazards, plus it's ugly.\n\nI don't have a strong feeling about what's best.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 May 2019 22:43:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Augment every test postgresql.conf"
},
{
"msg_contents": "On Sat, May 11, 2019 at 10:43:59PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Pushed. This broke 010_dump_connstr.pl on bowerbird, introducing 'invalid\n> > byte sequence for encoding \"UTF8\"' errors. That's because log_connections\n> > renders this 010_dump_connstr.pl solution insufficient:\n> \n> Ugh.\n> \n> > 4. If GetMessageEncoding()==PG_SQL_ASCII, make pgwin32_message_to_UTF16()\n> > return NULL. The caller will always send untranslated bytes to write() or\n> > ReportEventA(). This seems consistent with the SQL_ASCII concept and with\n> > pg_do_encoding_conversion()'s interpretation of SQL_ASCII.\n> \n> > 5. When including a datname or rolname value in a message, hex-escape\n> > non-ASCII bytes. They are byte sequences, not text of known encoding.\n> > This preserves the most information, but it's overkill and ugly in the\n> > probably-common case of one encoding across all databases of a cluster.\n> \n> > I'm inclined to do (1) in back branches and (4) in HEAD only. (If starting\n> > fresh today, I would store the encoding of each rolname and dbname or just use\n> > UTF8 for those particular fields.) Other preferences?\n> \n> I agree that (4) is a fairly reasonable thing to do, and wouldn't mind\n> back-patching that.\n\nOkay. Absent objections, I'll just do it that way.\n\n> Taking a wider view, this seems closely related\n> to something I've been thinking about in connection with the recent\n> pg_stat_activity contretemps: that mechanism is also shoving strings\n> across database boundaries without a lot of worry about encodings.\n> Maybe we should try to develop a common solution.\n> \n> One difference from the datname/rolname situation is that for\n> pg_stat_activity we can know the source encoding --- we aren't storing\n> it now, but we easily could. If we're thinking of a future solution\n> only, adding a \"name encoding\" field to relevant shared catalogs makes\n> sense perhaps. 
Alternatively, requiring names in shared catalogs to be\n> UTF8 might be a reasonable answer too.\n> \n> In all these cases, throwing an error when we can't translate a character\n> into the destination encoding is not very pleasant. For pg_stat_activity,\n> I was imagining that translating such characters to '?' might be the best\n> answer. I don't know if we can get away with that for the datname/rolname\n> case --- at the very least, it opens problems with apparent duplication of\n> names that should be unique. I don't much like your hex-encoding answer,\n> though; that has its own uniqueness-violation hazards, plus it's ugly.\n\nAnother case of byte sequence masquerading as text is pg_settings.setting.\n\nIn most contexts, it's important to convey exact values. Error messages can\nuse '?'. I wouldn't let dump/reload of a rolname corrupt it that way, and I\nwouldn't recognize the '?' version for authentication. While\npg_stat_activity.query could use '?', I'd encourage adding bytea and encoding\ncolumns for exact transmission. pg_stat_activity can't standardize on UTF8\nwithout shrinking the set of valid queries or inaccurately reporting some,\nneither of which is attractive.\n\ndatname/rolname could afford to be more prescriptive, since non-ASCII names\nare full of bugs today. A useful consequence of UTF8 datname/rolname would be\ntoday's \"pg_dumpall --globals\" remaining simple. If we were to support\narbitrary encodings with a \"name encoding\" field, the general-case equivalent\nof \"pg_dumpall --globals\" would connect to several databases of different\nencodings in order to dump all objects, perhaps even creating a temporary\ndatabase if no suitable-encoding database existed.\n\nMULE_INTERNAL presents trouble since we don't have a UTF8<->MULE_INTERNAL\nconversion. If we standardized cross-database strings on UTF8, it would be\nimpossible to read such strings, create roles, etc. from a MULE_INTERNAL\ndatabase. 
I suppose we'd either add the conversion or deprecate\nMULE_INTERNAL, forbidding its use as the initdb encoding.\n\n\n",
"msg_date": "Sun, 12 May 2019 01:13:47 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Augment every test postgresql.conf"
}
]
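Noah's option (5) above (hex-escaping non-ASCII bytes of a datname or rolname) is easy to picture with a small standalone sketch. This is illustrative C only, not PostgreSQL code; the helper name and the `\xNN` escape format are invented here:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy illustration of option (5): copy a name, emitting bytes >= 0x80
 * as \xNN escapes, since they are byte sequences of unknown encoding.
 * The helper name and escape format are invented for this sketch.
 */
static void
hex_escape_name(const char *src, char *dst, size_t dstlen)
{
	size_t		j = 0;
	const unsigned char *p;

	for (p = (const unsigned char *) src; *p != '\0'; p++)
	{
		if (*p < 0x80)
		{
			if (j + 1 < dstlen)
				dst[j++] = (char) *p;
		}
		else if (j + 5 < dstlen)	/* room for \xNN plus the NUL */
			j += (size_t) snprintf(dst + j, dstlen - j, "\\x%02x", *p);
	}
	dst[j] = '\0';
}
```

A name that is pure ASCII passes through unchanged; only bytes >= 0x80, whose encoding is unknown, get escaped.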
[
{
"msg_contents": "Hi all,\n\nI was just modifying configure.in for another patch, then tried to\ngenerate the new configure with autoconf on Debian. However I am\nbumping into some noise in the process. First the state associated to\nrunstatedir support gets generated, which makes little sense for\nPostgres as that's a path for installing data files modified by the\nbinaries run:\n+ -runstatedir | --runstatedir | --runstatedi | --runstated \\\n+ | --runstate | --runstat | --runsta | --runst | --runs \\\n+ | --run | --ru | --r)\n+ ac_prev=runstatedir ;;\n\nThen I am getting some garbage for some of the macro definitions:\n-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n\nAre usually those diffs just discarded manually before committing\npatches? Or is there some specific configuration which can be used\nwith autoconf, in which case it would be interesting to document that\nfor developers?\n\nThanks,\n--\nMichael",
"msg_date": "Sat, 29 Dec 2018 23:08:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Garbage contents after running autoconf 2.69"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I was just modifying configure.in for another patch, then tried to\n> generate the new configure with autoconf on Debian. However I am\n> bumping into some noise in the process.\n\nProject practice is to use plain-vanilla autoconf 2.69. Vendor\npackages tend to contain various \"improvements\" that will cause you\nto get different results than other committers do. Fortunately\nautoconf is pretty trivial to install: grab from the GNU archive,\nconfigure, make, make install should do it.\n\nMy habit is to configure with, say, --prefix=/usr/local/autoconf-2.69\nand then insert /usr/local/autoconf-2.69/bin in my PATH. This makes\nit relatively painless to cope with using different autoconf versions\nfor different PG branches (though at the moment that's not a thing\nto worry about).\n\n> Or is there some specific configuration which can be used\n> with autoconf, in which case it would be interesting to document that\n> for developers?\n\nHmm, I thought this was documented somewhere, but I'm not awake\nenough to remember where.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 10:36:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Garbage contents after running autoconf 2.69"
},
{
"msg_contents": "On Sat, Dec 29, 2018 at 10:36:02AM -0500, Tom Lane wrote:\n> Project practice is to use plain-vanilla autoconf 2.69. Vendor\n> packages tend to contain various \"improvements\" that will cause you\n> to get different results than other committers do. Fortunately\n> autoconf is pretty trivial to install: grab from the GNU archive,\n> configure, make, make install should do it.\n\nAh, thanks. I did not know that bit.\n\n> Hmm, I thought this was documented somewhere, but I'm not awake\n> enough to remember where.\n\nI could not find any reference on the wiki or in the code, but I may\nhave missed a reference of course. Anyway, I got it sorted out now.\n--\nMichael",
"msg_date": "Sun, 30 Dec 2018 15:25:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Garbage contents after running autoconf 2.69"
}
]
[
{
"msg_contents": "initdb and pg_basebackup can use atexit() to register cleanup actions\ninstead of requiring the use of custom exit_nicely() etc. Patches attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 29 Dec 2018 16:12:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Use atexit() in initdb and pg_basebackup"
},
{
"msg_contents": "On 2018-Dec-29, Peter Eisentraut wrote:\n\n> @@ -387,6 +388,7 @@ StreamLog(void)\n> \tif (!conn)\n> \t\t/* Error message already written in GetConnection() */\n> \t\treturn;\n> +\tatexit(disconnect_atexit);\n> \n> \tif (!CheckServerVersionForStreaming(conn))\n> \t{\n\nSeems you're registering the atexit cb twice here; you should only do so\nin the first \"!conn\" block.\n\nIt would be nicer to be able to call atexit() in GetConnection() instead\nof at each callsite, but that would require a place to save each conn\nstruct into, which is probably more work than warranted.\n\n> @@ -3438,5 +3437,8 @@ main(int argc, char *argv[])\n> \n> \tdestroyPQExpBuffer(start_db_cmd);\n> \n> +\t/* prevent cleanup */\n> +\tmade_new_pgdata = found_existing_pgdata = made_new_xlogdir = found_existing_xlogdir = false;\n> +\n> \treturn 0;\n> }\n\nThis is a bit ugly, but meh.\n\nOther than the first point, LGTM.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 16:35:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use atexit() in initdb and pg_basebackup"
},
{
"msg_contents": "On Fri, Jan 04, 2019 at 04:35:51PM -0300, Alvaro Herrera wrote:\n> On 2018-Dec-29, Peter Eisentraut wrote:\n>> @@ -3438,5 +3437,8 @@ main(int argc, char *argv[])\n>> \n>> \tdestroyPQExpBuffer(start_db_cmd);\n>> \n>> +\t/* prevent cleanup */\n>> +\tmade_new_pgdata = found_existing_pgdata = made_new_xlogdir = found_existing_xlogdir = false;\n>> +\n>> \treturn 0;\n>> }\n> \n> This is a bit ugly, but meh.\n> \n> Other than the first point, LGTM.\n\nRe-meuh (French version). That's partially a problem of this patch\nbecause all those flags get reset. I think that it would be cleaner\nto replace all those boolean flags with just a simple bits16 or such,\nmaking the flag cleanup reset way cleaner, and less error-prone if\nmore flag types are added in the future.\n--\nMichael",
"msg_date": "Sat, 5 Jan 2019 10:23:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use atexit() in initdb and pg_basebackup"
},
{
"msg_contents": "On 04/01/2019 20:35, Alvaro Herrera wrote:\n> Seems you're registering the atexit cb twice here; you should only do so\n> in the first \"!conn\" block.\n\nOK, fixed.\n\n>> @@ -3438,5 +3437,8 @@ main(int argc, char *argv[])\n>> \n>> \tdestroyPQExpBuffer(start_db_cmd);\n>> \n>> +\t/* prevent cleanup */\n>> +\tmade_new_pgdata = found_existing_pgdata = made_new_xlogdir = found_existing_xlogdir = false;\n>> +\n>> \treturn 0;\n>> }\n> \n> This is a bit ugly, but meh.\n\nYeah. Actually, we already have a solution of this in pg_basebackup,\nwith a bool success variable. I rewrote it like that. At least it's\nbetter for uniformity.\n\nI also added an atexit() conversion in isolationtester. It's mostly the\nsame thing.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 5 Jan 2019 15:44:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Use atexit() in initdb and pg_basebackup"
},
{
"msg_contents": "On 2019-Jan-05, Peter Eisentraut wrote:\n\n> On 04/01/2019 20:35, Alvaro Herrera wrote:\n\n> >> +\t/* prevent cleanup */\n> >> +\tmade_new_pgdata = found_existing_pgdata = made_new_xlogdir = found_existing_xlogdir = false;\n> >> +\n> >> \treturn 0;\n> >> }\n> > \n> > This is a bit ugly, but meh.\n> \n> Yeah. Actually, we already have a solution of this in pg_basebackup,\n> with a bool success variable. I rewrote it like that. At least it's\n> better for uniformity.\n\nAh, yeah, much better, LGTM.\n\n> I also added an atexit() conversion in isolationtester. It's mostly the\n> same thing.\n\nLGTM in a quick eyeball.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 5 Jan 2019 12:42:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Use atexit() in initdb and pg_basebackup"
},
{
"msg_contents": "On 05/01/2019 16:42, Alvaro Herrera wrote:\n>> Yeah. Actually, we already have a solution of this in pg_basebackup,\n>> with a bool success variable. I rewrote it like that. At least it's\n>> better for uniformity.\n> \n> Ah, yeah, much better, LGTM.\n> \n>> I also added an atexit() conversion in isolationtester. It's mostly the\n>> same thing.\n> \n> LGTM in a quick eyeball.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 7 Jan 2019 16:37:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Use atexit() in initdb and pg_basebackup"
}
]
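The pattern this thread converges on — register the cleanup once with atexit() and disarm it with a single success flag just before a normal exit — can be sketched in isolation. This is a hedged illustration, not the committed initdb/pg_basebackup code; all names here are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static bool success = false;
static int	cleanups_run = 0;	/* stands in for removing partial data dirs */

static void
cleanup_atexit(void)
{
	if (success)
		return;					/* normal exit: leave the results in place */
	cleanups_run++;				/* failure path: undo partial work */
}

static int
run_tool(bool fail_midway)
{
	atexit(cleanup_atexit);		/* registered once, early */

	if (fail_midway)
		return 1;				/* exit(1) in a real tool fires the cleanup */

	success = true;				/* disarm cleanup just before normal exit */
	return 0;
}
```

Compared with resetting several `made_new_*`/`found_existing_*` flags at the end of main(), a single disarm point is harder to get wrong when new cleanup conditions are added later.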
[
{
"msg_contents": "I was working on a little thing where I needed to simulate BETWEEN\nSYMMETRIC so naturally I used least() and greatest(). I was a little\nsurprised to see that my expressions were not folded into straight\nconstants and the estimates were way off as a consequence.\n\nI came up with the attached patch to fix it, but it's so ridiculously\nsmall that I fear I'm missing something.\n\nI don't think this needs any documentation and I didn't see where we\nhave any existing tests for eval_const_expressions so I didn't create\nany either.\n\nThoughts?\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support",
"msg_date": "Sat, 29 Dec 2018 22:40:14 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Optimize constant MinMax expressions"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> I was working on a little thing where I needed to simulate BETWEEN\n> SYMMETRIC so naturally I used least() and greatest(). I was a little\n> surprised to see that my expressions were not folded into straight\n> constants and the estimates were way off as a consequence.\n\n> I came up with the attached patch to fix it, but it's so ridiculously\n> small that I fear I'm missing something.\n\nWell, the question this is begging is in the adjacent comment:\n\n * Generic handling for node types whose own processing is\n * known to be immutable, and for which we need no smarts\n\nCan we assume that the underlying datatype comparison function is\nimmutable? I guess so, since we assume that in nearby code such as\ncontain_mutable_functions_walker, but I don't think it should be done\nwithout at least a comment.\n\nBTW, poking around for other code involving MinMaxExpr, I notice that\ncontain_leaked_vars_walker is effectively assuming that all datatype\ncomparison functions are leakproof, an assumption I find a bit debatable.\nMaybe it's all right, but again, it should certainly not have gone without\na comment.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 29 Dec 2018 18:36:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize constant MinMax expressions"
},
{
"msg_contents": "On 30/12/2018 00:36, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> I was working on a little thing where I needed to simulate BETWEEN\n>> SYMMETRIC so naturally I used least() and greatest(). I was a little\n>> surprised to see that my expressions were not folded into straight\n>> constants and the estimates were way off as a consequence.\n> \n>> I came up with the attached patch to fix it, but it's so ridiculously\n>> small that I fear I'm missing something.\n> \n> Well, the question this is begging is in the adjacent comment:\n> \n> * Generic handling for node types whose own processing is\n> * known to be immutable, and for which we need no smarts\n> \n> Can we assume that the underlying datatype comparison function is\n> immutable? I guess so, since we assume that in nearby code such as\n> contain_mutable_functions_walker, but I don't think it should be done\n> without at least a comment.\n\nAdding a comment is easy enough. How is the attached?\n\n> BTW, poking around for other code involving MinMaxExpr, I notice that\n> contain_leaked_vars_walker is effectively assuming that all datatype\n> comparison functions are leakproof, an assumption I find a bit debatable.\n> Maybe it's all right, but again, it should certainly not have gone without\n> a comment.\n\nSurely this is out of scope for my patch?\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support",
"msg_date": "Sun, 30 Dec 2018 09:31:48 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize constant MinMax expressions"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 30/12/2018 00:36, Tom Lane wrote:\n>> Can we assume that the underlying datatype comparison function is\n>> immutable? I guess so, since we assume that in nearby code such as\n>> contain_mutable_functions_walker, but I don't think it should be done\n>> without at least a comment.\n\n> Adding a comment is easy enough. How is the attached?\n\nPushed with a bit of wordsmithing on the comment.\n\n>> BTW, poking around for other code involving MinMaxExpr, I notice that\n>> contain_leaked_vars_walker is effectively assuming that all datatype\n>> comparison functions are leakproof, an assumption I find a bit debatable.\n>> Maybe it's all right, but again, it should certainly not have gone without\n>> a comment.\n\n> Surely this is out of scope for my patch?\n\nI'd been thinking that we might just add a similar comment there, but\non reflection that doesn't seem like the right thing, so I started a\nseparate thread about it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 13:44:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize constant MinMax expressions"
},
{
"msg_contents": "On 30/12/2018 19:44, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> On 30/12/2018 00:36, Tom Lane wrote:\n>>> Can we assume that the underlying datatype comparison function is\n>>> immutable? I guess so, since we assume that in nearby code such as\n>>> contain_mutable_functions_walker, but I don't think it should be done\n>>> without at least a comment.\n> \n>> Adding a comment is easy enough. How is the attached?\n> \n> Pushed with a bit of wordsmithing on the comment.\n\nThanks! I've updated the commitfest entry to reflect that.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n",
"msg_date": "Sun, 30 Dec 2018 20:29:18 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize constant MinMax expressions"
}
]
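For context, the fold itself is conceptually simple: a least()/greatest() expression whose arguments are all constants (and whose comparison function is immutable) can be evaluated once at plan time instead of per row. A toy model of that decision, outside any planner machinery, with struct and function names invented for this sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for an expression node: either a constant or "something else" */
typedef struct ToyExpr
{
	bool		is_const;
	int			value;
} ToyExpr;

/*
 * Fold greatest(args...) at "plan time": succeeds only when every
 * argument is already a constant, mirroring the all-Const requirement
 * for folding a MinMaxExpr into a single Const.
 */
static bool
fold_greatest(const ToyExpr *args, int nargs, int *result)
{
	int			best = 0;

	for (int i = 0; i < nargs; i++)
	{
		if (!args[i].is_const)
			return false;		/* must be evaluated per-row instead */
		if (i == 0 || args[i].value > best)
			best = args[i].value;
	}
	*result = best;
	return true;
}
```

The early `return false` for a non-constant argument mirrors why the real fold must leave such expressions to be evaluated at run time.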
[
{
"msg_contents": "Use a separate random seed for SQL random()/setseed() functions.\n\nPreviously, the SQL random() function depended on libc's random(3),\nand setseed() invoked srandom(3). This results in interference between\nthese functions and backend-internal uses of random(3). We'd never paid\ntoo much mind to that, but in the wake of commit 88bdbd3f7 which added\nlog_statement_sample_rate, the interference arguably has a security\nconsequence: if log_statement_sample_rate is active then an unprivileged\nuser could probably control which if any of his SQL commands get logged,\nby issuing setseed() at the right times. That seems bad.\n\nTo fix this reliably, we need random() and setseed() to use their own\nprivate random state variable. Standard random(3) isn't amenable to such\nusage, so let's switch to pg_erand48(). It's hard to say whether that's\nmore or less \"random\" than any particular platform's version of random(3),\nbut it does have a wider seed value and a longer period than are required\nby POSIX, so we can hope that this isn't a big downgrade. Also, we should\nnow have uniform behavior of random() across platforms, which is worth\nsomething.\n\nWhile at it, upgrade the per-process seed initialization method to use\npg_strong_random() if available, greatly reducing the predictability\nof the initial seed value. 
(I'll separately do something similar for\nthe internal uses of random().)\n\nIn addition to forestalling the possible security problem, this has a\nbenefit in the other direction, which is that we can now document\nsetseed() as guaranteeing a reproducible sequence of random() values.\nPreviously, because of the possibility of internal calls of random(3),\nwe could not promise any such thing.\n\nDiscussion: https://postgr.es/m/3859.1545849900@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/6645ad6bdd81e7d5a764e0d94ef52fae053a9e13\n\nModified Files\n--------------\ndoc/src/sgml/func.sgml | 14 +++++++-----\nsrc/backend/utils/adt/float.c | 52 ++++++++++++++++++++++++++++++++++++-------\n2 files changed, 53 insertions(+), 13 deletions(-)\n\n",
"msg_date": "Sat, 29 Dec 2018 22:33:38 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Use a separate random seed for SQL random()/setseed()\n functions."
},
{
"msg_contents": "Hello Tom,\n\nI'm sorry I'm a bit late back into this discussion, I was on the road.\n\n> To fix this reliably, we need random() and setseed() to use their own\n> private random state variable.\n\nOk.\n\n> Standard random(3) isn't amenable to such usage, so let's switch to \n> pg_erand48().\n\nHmmm… bad idea?\n\n> It's hard to say whether that's more or less \"random\" than any \n> particular platform's version of random(3),\n\nIt looks much less pseudo-random on Linux: POSIX provides 3 pseudo-random \nfunctions (probably 2 too many): \"random\", \"rand\" (mapped on the previous \none by glibc), and \".rand48\". According to glibc documentation, \"random\" \nhas an internal state of 31 long integers i.e. 992 bits (checked into the \nsource code, although why it can only be seeded from 32 bits fails me) \nwith a \"nonlinear additive feedback\" PRNG, vs 48 bits of rand48 linear \ncongruential generator PRNG, also seeded from 32 bits.\n\nFor me, a 48 bit state is inadequate for anything but a toy application \nthat would need a few casual pseudo-random numbers. I recommand against \nusing the rand48 for a backend purpose which might have any, even remote, \nsecurity implication.\n\nAs rand48 is a LCG, it cycles on the low-order bits so that they should \nnot be used (although they are for erand48). The 48 bit state looks like a \n80's design, when hardware was between 16 and 32 bit, and was still in use \nin the 90's so that it is in POSIX and Java. I can retro-explain it as \nfollows: the aim was to produce reasonable 32 bit pseudo-random ints on \nslow machines while not using low-order bits, so 48 was the closest \nround-up possible. Why not go up to 64 bits was very probably because it \nwould have required more expensive mults to simulate 64 bit multiply on a \n16 or 32 bit architecture. The 48-bit LCG makes it \"good enough\" for less \nthan a cubic root of size samples, i.e. 2**16 draws. 
This is much too \nsmall on today GHz hardware.\n\nISTM that 64 bits would be on the too-low side as well. I'd shop for a 128 \nor 256-bit state generator. I'm unsure of the best choice, though. I have \nlooked at \"xorshift128+\" and \"xoshiro256**\", which have some critics \n(basically, non-cryptographic PRNG can have their state rebuilt from a few \noutputs, and there usually is a simple transformation on outputs which \nmake it fails statistical tests). ISTM that \"xoshiro256**\" would be a \nreasonable choice, much better than \"rand48\". An LCG with a larger state \n(>= 128) could be admissible as well.\n\n> but it does have a wider seed value and a longer period than are required\n> by POSIX, so we can hope that this isn't a big downgrade.\n\nI'd say that it is a significant downgrade that I wish postgres woud \navoid, especially with the argument that it for better security!\n\nI'd suggest again that (1) postgres should provide an \nalgorithm-independent interface to its PRNG with an external state and (2) \nuse an alternative to rand48, the choice of which should be discussed.\n\n-- \nFabien.",
"msg_date": "Sun, 30 Dec 2018 11:06:52 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use a separate random seed for SQL random()/setseed()\n functions."
}
]
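Fabien's claim that an LCG such as *rand48 "cycles on the low-order bits" can be checked numerically. With the POSIX recurrence X' = (a*X + c) mod 2^48, a = 0x5DEECE66D and c = 0xB, bit k of the state has period at most 2^(k+1), so the low bit simply alternates. A standalone check (illustrative C, not PostgreSQL code):

```c
#include <assert.h>
#include <stdint.h>

/* One step of the POSIX *rand48 recurrence: X' = (a*X + c) mod 2^48 */
static uint64_t
rand48_step(uint64_t x)
{
	return (x * UINT64_C(0x5DEECE66D) + UINT64_C(0xB)) &
		((UINT64_C(1) << 48) - 1);
}

/*
 * Number of steps until the low nbits of the state first return to
 * their starting value.  Because c is odd and (a - 1) is a multiple
 * of 4, the low nbits form a full-period LCG mod 2^nbits, so this is
 * exactly 2^nbits -- independent of the seed.
 */
static uint64_t
low_bits_period(uint64_t seed, unsigned nbits)
{
	uint64_t	mask = (UINT64_C(1) << nbits) - 1;
	uint64_t	x = rand48_step(seed);
	uint64_t	n = 1;

	while ((x & mask) != (seed & mask))
	{
		x = rand48_step(x);
		n++;
	}
	return n;
}
```

This is why low-order bits of an LCG state must not be exposed directly: they repeat on a schedule far shorter than the generator's full 2^48 period.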
[
{
"msg_contents": "Starting from\nhttps://www.postgresql.org/message-id/CAEepm%3D2vORBhWQZ1DJmKXmCVi%2B15Tgrv%2B9brHLanWU7XE_FWxQ%40mail.gmail.com\n\nHere is a patch trying to implement what was proposed by Tom Lane:\n\n\"What we could/should do instead, I think, is have pgss_planner_hook\nmake its own pgss_store() call to log the planning time results\n(which would mean we don't need the added PlannedStmt field at all).\nThat would increase the overhead of this feature, which might mean\nthat it'd be worth having a pg_stat_statements GUC to enable it.\nBut it'd be a whole lot cleaner.\"\n\nNow:\npgss_post_parse_analyze, initialize pgss entry with sql text,\npgss_planner_hook, adds planning_time and counts,\npgss_ExecutorEnd, works unchanged.\n\nbut doesn't include any pg_stat_statements GUC to enable it yet.\n\nnote: I didn't catch the sentence \"which would mean we don't need the added PlannedStmt field at all\".\n\n\nRegarding initial remark from Thomas Munro:\n\n\"I agree with the sentiment on the old thread that\n{total,min,max,mean,stddev}_time now seem badly named, but adding\nexecution makes them so long... Thoughts?\"\n\nWhat would you think about:\n- userid\n- dbid\n- queryid\n- query\n- plans\n- plan_time\n- {min,max,mean,stddev}_plan_time\n- calls\n- exec_time\n- {min,max,mean,stddev}_exec_time\n- total_time (being the sum of plan_time and exec_time)\n- rows\n- ...\n\nRegards\nPAscal",
"msg_date": "Sat, 29 Dec 2018 23:27:18 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello\n\nThank you for picking this up! Did you register patch in CF app? I did not found entry.\n\nCurrently we have pg_stat_statements 1.7 version and this patch does not apply... My fast and small view:\n\n> -\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\",\n> +\t\t\t errmsg(\"could not read pg_stat_statement file \\\"%s\\\": %m\",\n\nNot sure this is need for this patch. Usually refactoring and new features are different topics.\n\n> +#define PG_STAT_STATEMENTS_COLS_V1_4\t25\n\nshould not be actual version? I think version in names is relevant to extension version.\n\nAnd this patch does not have documentation changes.\n\n> \"I agree with the sentiment on the old thread that\n> {total,min,max,mean,stddev}_time now seem badly named, but adding\n> execution makes them so long... Thoughts?\"\n>\n> What would you think about:\n> - userid\n> - dbid\n> - queryid\n> - query\n> - plans\n> - plan_time\n> - {min,max,mean,stddev}_plan_time\n> - calls\n> - exec_time\n> - {min,max,mean,stddev}_exec_time\n> - total_time (being the sum of plan_time and exec_time)\n> - rows\n> - ...\n\nWe have some consensus about backward incompatible changes in this function? *plan_time + *exec_time naming is ok for me\n\nregards, Sergei\n\n",
"msg_date": "Tue, 12 Feb 2019 16:51:57 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Sergei,\n\nThank you for this review !\n\n>Did you register patch in CF app? I did not found entry. \nI think it is related to https://commitfest.postgresql.org/16/1373/\nbut I don't know how to link it with.\n\nYes, there are many things to improve, but before to go deeper, \nI would like to know if that way to do it (with 3 access to pgss hash)\nhas a chance to get consensus ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 12 Feb 2019 12:50:42 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi\n\n> I think it is related to https://commitfest.postgresql.org/16/1373/\n> but I don't know how to link it with.\n\nYou can just add new entry in open commitfest and then attach previous thread. This is usual way for pick up old patches. For example, as i did here: https://commitfest.postgresql.org/20/1711/\n\n> Yes, there are many things to improve, but before to go deeper,\n> I would like to know if that way to do it (with 3 access to pgss hash)\n> has a chance to get consensus ?\n\nI can not say something here, i am not experienced contributor here.\nCan you post some performance test results with slowdown comparison between master branch and proposed patch?\n\nregards, Sergei\n\n",
"msg_date": "Wed, 13 Feb 2019 10:53:44 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Thank you Sergei for your comments,\n\n> Did you register patch in CF app? I did not found entry.\ncreated today: https://commitfest.postgresql.org/22/1999/\n\n> Currently we have pg_stat_statements 1.7 version and this patch does not\n> apply... \nwill rebase and create a 1.8 version\n\n> -\t\t\t errmsg(\"could not read file \\\"%s\\\": %m\",\n> +\t\t\t errmsg(\"could not read pg_stat_statement file \\\"%s\\\": %m\",\nthis is a mistake, will fix\n\n> +#define PG_STAT_STATEMENTS_COLS_V1_4\t25\nI thought it was needed when adding new columns, isn't it ?\n\n> And this patch does not have documentation changes.\nwill fix\n\nand will provide some kind of benchmark to compare with actual version.\n\nRegards\nPAscal \n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Thu, 14 Feb 2019 14:21:39 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi\n\n>> +#define PG_STAT_STATEMENTS_COLS_V1_4 25\n>\n> I thought it was needed when adding new columns, isn't it ?\n\nYes, this is needed. I mean it should be PG_STAT_STATEMENTS_COLS_V1_8: because such change was made for 1.8 pg_stat_statements version. Same thing for other version-specific places.\n\nregards, Sergei\n\n",
"msg_date": "Fri, 15 Feb 2019 10:32:30 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi PAscal,\n\nOn 2/15/19 11:32 AM, Sergei Kornilov wrote:\n> Hi\n> \n>>> +#define PG_STAT_STATEMENTS_COLS_V1_4 25\n>>\n>> I thought it was needed when adding new columns, isn't it ?\n> \n> Yes, this is needed. I mean it should be PG_STAT_STATEMENTS_COLS_V1_8: because such change was made for 1.8 pg_stat_statements version. Same thing for other version-specific places.\n\nThis patch has been waiting for an update for over a month. Do you know \nwhen you will have one ready? Should we move the release target to PG13?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 14:43:03 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi,\nHere is a rebased and corrected version .\n\nColumns naming has not been modified, I would propose to change it to:\n - plans: ok\n - planning_time --> plan_time\n - calls: ok\n - total_time --> exec_time\n - {min,max,mean,stddev}_time: ok\n - new total_time (being the sum of plan_time and exec_time)\n\nRegards\nPAscal",
"msg_date": "Fri, 22 Mar 2019 22:46:36 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 11:46 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Here is a rebased and corrected version .\n\nThis patch has multiple trailing whitespace, indent and coding style\nissues. You should consider running pg_indent before submitting a\npatch. I attach the diff after running pgindent if you want more\ndetails about the various issues.\n\n- * Track statement execution times across a whole database cluster.\n+ * Track statement planning and execution times across a whole cluster.\n\nif we're changing this, we should also fix the fact that's it's not\ntracking only the time but various resources?\n\n+ /* calc differences of buffer counters. */\n+ bufusage.shared_blks_hit =\n+ pgBufferUsage.shared_blks_hit - bufusage_start.shared_blks_hit;\n[...]\n\nThis is an exact duplication of pgss_ProcessUtility(), it's probably\nbetter to create a macro or a function for that instead.\n\n+ pgss_store(\"\",\n+ parse->queryId, /* signal that it's a\nutility stmt */\n+ -1,\n\nthe comment makes no sense, and also you can't pass an empty query\nstring / unknown len. There's no guarantee that the entry for the\ngiven queryId won't have been evicted, and in this case you'll create\na new and unrelated entry.\n\n@@ -832,13 +931,13 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query)\n * the normalized string would be the same as the query text anyway, so\n * there's no need for an early entry.\n */\n- if (jstate.clocations_count > 0)\n pgss_store(pstate->p_sourcetext,\n\nWhy did you remove this? pgss_store() isn't free, and asking to\ngenerate a normalized query for a query that doesn't have any constant\nor storing the entry early won't do anything useful AFAICT. 
Though if\nthat's useful, you definitely can't remove the test without adapting\nthe comment and the indentation.\n\n@@ -1249,15 +1351,19 @@ pgss_store(const char *query, uint64 queryId,\n if (e->counters.calls == 0)\n e->counters.usage = USAGE_INIT;\n\n- e->counters.calls += 1;\n- e->counters.total_time += total_time;\n- if (e->counters.calls == 1)\n+ if (planning_time == 0)\n+ {\n+ e->counters.calls += 1;\n+ e->counters.total_time += total_time;\n+ }\n+\n+ if (e->counters.calls == 1 && planning_time == 0)\n {\n e->counters.min_time = total_time;\n e->counters.max_time = total_time;\n e->counters.mean_time = total_time;\n }\n- else\n+ else if(planning_time == 0)\n {\n /*\n * Welford's method for accurately computing variance. See\n@@ -1276,6 +1382,9 @@ pgss_store(const char *query, uint64 queryId,\n if (e->counters.max_time < total_time)\n e->counters.max_time = total_time;\n }\n+ if (planning_time > 0)\n+ e->counters.plans += 1;\n+ e->counters.planning_time += planning_time;\n\nthere are 4 tests to check if planning_time is zero or not, it's quite\nmessy. Could you refactor the code to avoid so many tests? It would\nprobably be useful to add some asserts to check that we don't provide\nboth planning_time == 0 and execution related values. The function's\ncomment would also need to be adapted to mention the new rationale\nwith planning_time.\n\n * hash table entry for the PREPARE (with hash calculated from the query\n * string), and then a different one with the same query string (but hash\n * calculated from the query tree) would be used to accumulate costs of\n- * ensuing EXECUTEs. 
This would be confusing, and inconsistent with other\n- * cases where planning time is not included at all.\n+ * ensuing EXECUTEs.\n\nthe comment about confusing behavior is still valid.\n\n>\n> Columns naming has not been modified, I would propose to change it to:\n> - plans: ok\n> - planning_time --> plan_time\n> - calls: ok\n> - total_time --> exec_time\n> - {min,max,mean,stddev}_time: ok\n> - new total_time (being the sum of plan_time and exec_time)\n\nplan_time and exec_time are accumulated counters, so we need to keep\nthe total_ prefix in any case. I think it's ok to break the function\noutput names if we keep some kind of compatibility at the view level\n(which can keep total_time as the sum of total_plan_time and\ntotal_exec_time), so current queries against the view wouldn't break,\nand get what they probably wanted.",
"msg_date": "Sat, 23 Mar 2019 12:48:27 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "> This patch has multiple trailing whitespace, indent and coding style\n> issues. You should consider running pg_indent before submitting a\n> patch. I attach the diff after running pgindent if you want more\n> details about the various issues.\n\nfixed\n\n\n> - * Track statement execution times across a whole database cluster.\n> + * Track statement planning and execution times across a whole cluster.\n\n> if we're changing this, we should also fix the fact that's it's not\n> tracking only the time but various resources?\n\nfixed\n\n\n> + /* calc differences of buffer counters. */\n> + bufusage.shared_blks_hit =\n> + pgBufferUsage.shared_blks_hit - bufusage_start.shared_blks_hit;> >\n> [...]\n\n> This is an exact duplication of pgss_ProcessUtility(), it's probably\n> better to create a macro or a function for that instead.\n\nyes, maybe later (I don't know macros)\n\n\n> + pgss_store(\"\",\n> + parse->queryId, /* signal that it's a\n> utility stmt */\n> + -1,\n\n> the comment makes no sense, and also you can't pass an empty query\n> string / unknown len. There's no guarantee that the entry for the\n> given queryId won't have been evicted, and in this case you'll create\n> a new and unrelated entry.\n\nFixed, comment was wrong\nQuery text is not available in pgss_planner_hook\nthat's why pgss_store execution is forced in pgss_post_parse_analyze\n(to initialize pgss entry with its query text).\n\nThere is a very small risk that query has been evicted between\npgss_post_parse_analyze and pgss_planner_hook.\n\n\n\n> @@ -832,13 +931,13 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query)\n> * the normalized string would be the same as the query text anyway, so\n> * there's no need for an early entry.\n> */\n> - if (jstate.clocations_count > 0)\n> pgss_store(pstate->p_sourcetext,\n\n> Why did you remove this? 
pgss_store() isn't free, and asking to\n> generate a normalized query for a query that doesn't have any constant\n> or storing the entry early won't do anything useful AFAICT. Though if\n> that's useful, you definitely can't remove the test without adapting\n> the comment and the indentation.\n\nSee explanation in previous answer (comments have been updated accordingly)\n\n\n> there are 4 tests to check if planning_time is zero or not, it's quite\n> messy. Could you refactor the code to avoid so many tests? It would\n> probably be useful to add some asserts to check that we don't provide\n> both planning_time == 0 and execution related values. The function's\n> comment would also need to be adapted to mention the new rationale\n> with planning_time.\n\nFixed\n\n\n> * hash table entry for the PREPARE (with hash calculated from the query\n> * string), and then a different one with the same query string (but hash\n> * calculated from the query tree) would be used to accumulate costs of\n> - * ensuing EXECUTEs. This would be confusing, and inconsistent with other\n> - * cases where planning time is not included at all.\n> + * ensuing EXECUTEs.\n\n> the comment about confusing behavior is still valid.\n\nFixed\n\n\n>> Columns naming has not been modified, I would propose to change it to:\n>> - plans: ok\n>> - planning_time --> plan_time\n>> - calls: ok\n>> - total_time --> exec_time\n>> - {min,max,mean,stddev}_time: ok\n>> - new total_time (being the sum of plan_time and exec_time)\n\n> plan_time and exec_time are accumulated counters, so we need to keep\n> the total_ prefix in any case. 
I think it's ok to break the function\n> output names if we keep some kind of compatibility at the view level\n> (which can keep total_time as the sum of total_plan_time and\n> total_exec_time), so current queries against the view wouldn't break,\n> and get what they probably wanted.\n\nbefore to change this at all (view, function, code, doc) levels,\nI would like to be sure that column names will be:\n - plans\n - total_plan_time\n - calls\n - total_exec_time\n - min_time (without exec in name)\n - max_time (without exec in name)\n - mean_time (without exec in name)\n - stddev_time (without exec in name)\n - total_time (being the sum of total_plan_time and total_exec_time)\n\ncould other users confirm ?",
"msg_date": "Sat, 23 Mar 2019 22:08:05 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 11:08 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> > This patch has multiple trailing whitespace, indent and coding style\n> > issues. You should consider running pg_indent before submitting a\n> > patch. I attach the diff after running pgindent if you want more\n> > details about the various issues.\n>\n> fixed\n\nThere are still trailing whitespaces and comments wider than 80\ncharacters in the C code that should be fixed.\n\n> > + pgss_store(\"\",\n> > + parse->queryId, /* signal that it's a\n> > utility stmt */\n> > + -1,\n>\n> > the comment makes no sense, and also you can't pass an empty query\n> > string / unknown len. There's no guarantee that the entry for the\n> > given queryId won't have been evicted, and in this case you'll create\n> > a new and unrelated entry.\n>\n> Fixed, comment was wrong\n> Query text is not available in pgss_planner_hook\n> that's why pgss_store execution is forced in pgss_post_parse_analyze\n> (to initialize pgss entry with its query text).\n>\n> There is a very small risk that query has been evicted between\n> pgss_post_parse_analyze and pgss_planner_hook.\n>\n> > @@ -832,13 +931,13 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query)\n> > * the normalized string would be the same as the query text anyway, so\n> > * there's no need for an early entry.\n> > */\n> > - if (jstate.clocations_count > 0)\n> > pgss_store(pstate->p_sourcetext,\n>\n> > Why did you remove this? pgss_store() isn't free, and asking to\n> > generate a normalized query for a query that doesn't have any constant\n> > or storing the entry early won't do anything useful AFAICT. 
Though if\n> > that's useful, you definitely can't remove the test without adapting\n> > the comment and the indentation.\n>\n> See explanation in previous answer (comments have been updated accordingly)\n\nThe alternative being to expose query text to the planner, which could\nfix this (unlikely) issue and could also sometimes save a pgss_store()\ncall. I did a quick check and at least AQO and pg_hint_plan\nextensions have some hacks to be able to access the query text from\nthe planner, so there are at least multiple needs for that. Perhaps\nit's time to do it?\n\n> > there are 4 tests to check if planning_time is zero or not, it's quite\n> > messy. Could you refactor the code to avoid so many tests? It would\n> > probably be useful to add some asserts to check that we don't provide\n> > both planning_time == 0 and execution related values. The function's\n> > comment would also need to be adapted to mention the new rationale\n> > with planning_time.\n>\n> Fixed\n\n+ /* updating counters for execute OR planning */\n+ Assert(planning_time > 0 && total_time > 0);\n+ if (planning_time == 0)\n\nThis is obviously incorrect. The general sanity check for exclusion\nbetween planning_time and total_time should be at the beginning of\npgss_store. Maybe some others asserts are needed to verify that\nplanning_time cannot be provided along jstate or other conditions.\n\n",
"msg_date": "Sun, 24 Mar 2019 11:24:50 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 11:24 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > there are 4 tests to check if planning_time is zero or not, it's quite\n> > > messy. Could you refactor the code to avoid so many tests? It would\n> > > probably be useful to add some asserts to check that we don't provide\n> > > both planning_time == 0 and execution related values. The function's\n> > > comment would also need to be adapted to mention the new rationale\n> > > with planning_time.\n> >\n> > Fixed\n>\n> + /* updating counters for execute OR planning */\n> + Assert(planning_time > 0 && total_time > 0);\n> + if (planning_time == 0)\n>\n> This is obviously incorrect. The general sanity check for exclusion\n> between planning_time and total_time should be at the beginning of\n> pgss_store. Maybe some others asserts are needed to verify that\n> planning_time cannot be provided along jstate or other conditions.\n\nActually, since pgss_store is now called to either:\n\n- explicitly store a query text\n- accumulate planning duration\n- accumulate execution duration\n\nand they're all mutually exclusive, It's probably better to change\npgss_store to pass an enum to describe what the call is for , and keep\na single time parameter. It should make the code simpler.\n\n",
"msg_date": "Sun, 24 Mar 2019 13:10:59 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "As there are now 3 locking times on pgss hash struct, one day or an other, \nsomebody will ask for a GUC to disable this feature (to be able to run pgss\nunchanged with only one lock as today).\n\nWith this GUC, pgss_store should be able to store the query text and\naccumulated \nexecution duration in the same call (as today).\n\nWill try to provide this soon.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Mon, 25 Mar 2019 13:30:45 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "here is a new version:\n\n - \"track_planning\" GUC added\n to permit to keep previous behavior unchanged\n - columns names have been changed / added:\n total_plan_time, total_exec_time, total_time\n - trailing whitespaces and comments wider than 80 characters\n not fixed\n - \"if (jstate.clocations_count > 0) pgss_store(pstate->p_sourcetext,...\"\n has been reverted\n - expose query text to the planner\n won't fix (out of my knowledge)\n - \"Assert(planning_time > 0 && total_time > 0);\"\n moved at the beginning of pgss_store\n\nRegards\nPAscal",
"msg_date": "Tue, 26 Mar 2019 23:21:21 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 12:21 AM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> here is a new version:\n>\n> - \"track_planning\" GUC added\n> to permit to keep previous behavior unchanged\n\ngood\n\n> - trailing whitespaces and comments wider than 80 characters\n> not fixed\n\nwhy? In case it's not clear, I'm talking about the .c file, not the\nregression tests.\n\n> - \"Assert(planning_time > 0 && total_time > 0);\"\n> moved at the beginning of pgss_store\n\nHave you tried to actually compile postgres and pg_stat_statements\nwith --enable-cassert? This test can *never* be true, since you\neither provide the planning time or the execution time or neither. As\nI said in my previous mail, adding a parameter to say which counter\nyou're updating, instead of adding another counter that's mutually\nexclusive with the other would make everything clearer.\n\n\n",
"msg_date": "Wed, 27 Mar 2019 11:37:07 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": ">> - trailing whitespaces and comments wider than 80 characters\n>> not fixed\n\n> why? In case it's not clear, I'm talking about the .c file, not the\n> regression tests.\n\nI work on a poor msys install on windows 7, where perl is broken ;o(\nSo no pgindent available. \nWill fix that later, or as soon as I get a pgindent diff.\n\n>> - \"Assert(planning_time > 0 && total_time > 0);\"\n>> moved at the beginning of pgss_store\n\n> Have you tried to actually compile postgres and pg_stat_statements\n> with --enable-cassert? This test can *never* be true, since you\n> either provide the planning time or the execution time or neither. As\n> I said in my previous mail, adding a parameter to say which counter\n> you're updating, instead of adding another counter that's mutually\n> exclusive with the other would make everything clearer.\n\nYes this \"assert\" is useless as is ... I'll remove it.\nI understand you proposal of pgss_store refactoring, but I don't have \nmuch time available now ... and I would like to check that performances \nare not broken before any other modification ...\n\nRegards\nPAscal\n\n\n\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 27 Mar 2019 13:36:08 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 9:36 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> >> - trailing whitespaces and comments wider than 80 characters\n> >> not fixed\n>\n> > why? In case it's not clear, I'm talking about the .c file, not the\n> > regression tests.\n>\n> I work on a poor msys install on windows 7, where perl is broken ;o(\n> So no pgindent available.\n> Will fix that later, or as soon as I get a pgindent diff.\n>\n> >> - \"Assert(planning_time > 0 && total_time > 0);\"\n> >> moved at the beginning of pgss_store\n>\n> > Have you tried to actually compile postgres and pg_stat_statements\n> > with --enable-cassert? This test can *never* be true, since you\n> > either provide the planning time or the execution time or neither. As\n> > I said in my previous mail, adding a parameter to say which counter\n> > you're updating, instead of adding another counter that's mutually\n> > exclusive with the other would make everything clearer.\n>\n> Yes this \"assert\" is useless as is ... I'll remove it.\n> I understand you proposal of pgss_store refactoring, but I don't have\n> much time available now ... and I would like to check that performances\n> are not broken before any other modification ...\n\nOk, but keep in mind that this is the last commitfest for pg12, and\nthere are only 4 days left. Will you have time to take care of it, or\ndo you need help on it?\n\n\n",
"msg_date": "Wed, 27 Mar 2019 22:22:18 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Wed, Mar 27, 2019 at 9:36 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> >> - trailing whitespaces and comments wider than 80 characters\n>> >> not fixed\n>>\n>> > why? In case it's not clear, I'm talking about the .c file, not the\n>> > regression tests.\n>>\n>> I work on a poor msys install on windows 7, where perl is broken ;o(\n>> So no pgindent available.\n>> Will fix that later, or as soon as I get a pgindent diff.\n>>\n>> >> - \"Assert(planning_time > 0 && total_time > 0);\"\n>> >> moved at the beginning of pgss_store\n>>\n>> > Have you tried to actually compile postgres and pg_stat_statements\n>> > with --enable-cassert? This test can *never* be true, since you\n>> > either provide the planning time or the execution time or neither. As\n>> > I said in my previous mail, adding a parameter to say which counter\n>> > you're updating, instead of adding another counter that's mutually\n>> > exclusive with the other would make everything clearer.\n>>\n>> Yes this \"assert\" is useless as is ... I'll remove it.\n>> I understand you proposal of pgss_store refactoring, but I don't have\n>> much time available now ... and I would like to check that performances\n>> are not broken before any other modification ...\n> \n> Ok, but keep in mind that this is the last commitfest for pg12, and\n> there are only 4 days left. Will you have time to take care of it, or\n> do you need help on it?\n\nOups, sorry, I won't have time nor knowledge to finish in time ;o(\nAny help is welcome !\n\nRegards\nPAscal\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:39:56 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi\n\n>> Ok, but keep in mind that this is the last commitfest for pg12, and\n>> there are only 4 days left. Will you have time to take care of it, or\n>> do you need help on it?\n>\n> Oups, sorry, I won't have time nor knowledge to finish in time ;o(\n> Any help is welcome !\n\nNo need to rush, this patch has is unlikely to get committed in pg12 even a month earlier. We have a general policy that we don't like complex patches that first show up for the last commitfest of a dev cycle. Current commitfest is last one before feature freeze.\n\nI want such feature and will help with review in pg13 cycle.\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 28 Mar 2019 10:45:08 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 8:45 AM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n> >> Ok, but keep in mind that this is the last commitfest for pg12, and\n> >> there are only 4 days left. Will you have time to take care of it, or\n> >> do you need help on it?\n> >\n> > Oups, sorry, I won't have time nor knowledge to finish in time ;o(\n> > Any help is welcome !\n>\n> No need to rush, this patch has is unlikely to get committed in pg12 even a month earlier. We have a general policy that we don't like complex patches that first show up for the last commitfest of a dev cycle. Current commitfest is last one before feature freeze.\n\nyes, but this patch first showed up years ago:\nhttps://commitfest.postgresql.org/16/1373/. Since nothing happened\nsince, it would be nice to have feedback on whether deeper changes on\nthe planning functions are required (so for pg13), or if current\napproach is ok (and then I hope it'd be acceptable for pg12).\n\n\n",
"msg_date": "Thu, 28 Mar 2019 09:48:41 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 9:48 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Mar 28, 2019 at 8:45 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> >\n> > >> Ok, but keep in mind that this is the last commitfest for pg12, and\n> > >> there are only 4 days left. Will you have time to take care of it, or\n> > >> do you need help on it?\n> > >\n> > > Oups, sorry, I won't have time nor knowledge to finish in time ;o(\n> > > Any help is welcome !\n> >\n> > No need to rush, this patch has is unlikely to get committed in pg12 even a month earlier. We have a general policy that we don't like complex patches that first show up for the last commitfest of a dev cycle. Current commitfest is last one before feature freeze.\n>\n> yes, but this patch first showed up years ago:\n> https://commitfest.postgresql.org/16/1373/. Since nothing happened\n> since, it would be nice to have feedback on whether deeper changes on\n> the planning functions are required (so for pg13), or if current\n> approach is ok (and then I hope it'd be acceptable for pg12).\n\nIf that's helpful I attach the updated patches. I split in two\ncommits, so if the query_text passing is not wanted it's quite easy to\nignore this part.",
"msg_date": "Thu, 28 Mar 2019 14:30:32 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi,\n\nI have played with this patch, it works fine.\n\nrem the last position of the \"new\" total_time column is confusing\n+CREATE VIEW pg_stat_statements AS\n+ SELECT *, total_plan_time + total_exec_time AS total_time\n+ FROM pg_stat_statements(true);\n\nI wanted to perform some benchmark between those 4 cases:\n0 - no pgss,\n1 - original pgss (no planning counter and 1 access to pgss hash),\n2 - pggs reading querytext in planner hook (* 2 accesses to pgss hash),\n3 - pggs reading querytext in parse hook (* 3 accesses to pgss hash)\n\nIt seems that the difference is so tiny, that there was no other way than\nrunning \nminimal \"Select 1;\" statement ...\n\n./pgbench -c 10 -j 5 -t 500000 -f select1stmt.sql postgres\n\ncase avg_tps pct_diff\n0 89 278 --\t\n1 88 745 0,6%\n2 88 282 1,1%\n3 86 660 2,9%\n\nThis means that even in this extrem test case, the worst degradation is less\nthan 3%\n(this overhead can be removed using pg_stat_statements.track_planning guc)\n\nnotes:\n- PostgreSQL 12devel on x86_64-w64-mingw32, compiled by gcc.exe \n(x86_64-win32-sehrev1, Built by MinGW-W64 project) 7.2.0, 64-bit,\n- cpu usage was less that 95%,\n- avg_tps is based on 3 runs,\n- there was a wait of arround 1 minute between each run to keep \n computer temperature and fan usage low.\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 1 Apr 2019 13:35:19 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 10:35 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> I have played with this patch, it works fine.\n\nThanks for testing!\n\n> rem the last position of the \"new\" total_time column is confusing\n> +CREATE VIEW pg_stat_statements AS\n> + SELECT *, total_plan_time + total_exec_time AS total_time\n> + FROM pg_stat_statements(true);\n\nYes, there are quite a lot of fields in pg_stat_statements(), so I\nadded the total_time as the last field to avoid enumerating all of\nthem. I can change that if needed.\n\n\n> I wanted to perform some benchmark between those 4 cases:\n> 0 - no pgss,\n> 1 - original pgss (no planning counter and 1 access to pgss hash),\n> 2 - pggs reading querytext in planner hook (* 2 accesses to pgss hash),\n> 3 - pggs reading querytext in parse hook (* 3 accesses to pgss hash)\n>\n> It seems that the difference is so tiny, that there was no other way than\n> running\n> minimal \"Select 1;\" statement ...\n>\n> ./pgbench -c 10 -j 5 -t 500000 -f select1stmt.sql postgres\n>\n> case avg_tps pct_diff\n> 0 89 278 --\n> 1 88 745 0,6%\n> 2 88 282 1,1%\n> 3 86 660 2,9%\n>\n> This means that even in this extrem test case, the worst degradation is less\n> than 3%\n> (this overhead can be removed using pg_stat_statements.track_planning guc)\n\nIs the difference between 2 and 3 the extraneous pgss_store call to\nalways store the query text if planner hook doesn't have access to the\nquery text?\n\n\n",
"msg_date": "Tue, 2 Apr 2019 07:18:46 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi,\n\n>>\n>> case avg_tps pct_diff\n>> 0 89 278 --\n>> 1 88 745 0,6%\n>> 2 88 282 1,1%\n>> 3 86 660 2,9%\n>>\n>> This means that even in this extrem test case, the worst degradation is less\n>> than 3%\n>> (this overhead can be removed using pg_stat_statements.track_planning guc)\n\n> Is the difference between 2 and 3 the extraneous pgss_store call to\n> always store the query text if planner hook doesn't have access to the\n> query text?\n\nYes it is,\nbut I agree it seems a big gap (1,8%) compared to the difference between 1 and 2 (0,5%).\nMaybe this is just mesure \"noise\" ...\n\nRegards\nPAscal\n\n\n\n\n\n\n\n\nHi,\n\n>>\n>> case avg_tps pct_diff\n>> 0 89 278 --\n>> 1 88 745 0,6%\n>> 2 88 282 1,1%\n>> 3 86 660 2,9%\n>>\n>> This means that even in this extrem test case, the worst degradation is less\n>> than 3%\n>> (this overhead can be removed using pg_stat_statements.track_planning guc)\n\n> Is the difference between 2 and 3 the extraneous pgss_store call to\n> always store the query text if planner hook doesn't have access to the\n> query text?\n\nYes it is, \nbut I agree it seems a big gap (1,8%) compared to the difference between 1 and 2 (0,5%).\nMaybe this is just mesure \"noise\" ...\n\nRegards\nPAscal",
"msg_date": "Tue, 2 Apr 2019 07:22:52 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Apr 2, 2019 at 9:22 AM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> >> case avg_tps pct_diff\n> >> 0 89 278 --\n> >> 1 88 745 0,6%\n> >> 2 88 282 1,1%\n> >> 3 86 660 2,9%\n> >>\n> >> This means that even in this extrem test case, the worst degradation is less\n> >> than 3%\n> >> (this overhead can be removed using pg_stat_statements.track_planning guc)\n>\n> > Is the difference between 2 and 3 the extraneous pgss_store call to\n> > always store the query text if planner hook doesn't have access to the\n> > query text?\n>\n> Yes it is,\n> but I agree it seems a big gap (1,8%) compared to the difference between 1 and 2 (0,5%).\n> Maybe this is just mesure \"noise\" ...\n\nRebased patches attached.",
"msg_date": "Mon, 1 Jul 2019 13:31:07 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello\n\nI think the most important question for this topic is performance penalty.\nIt was a long story, first test on my desktop was too volatile. I setup separate PC with DB only and test few cases.\n\nPC spec: 2-core Intel Core 2 Duo E6550, 4GB ram, mechanical HDD\nAll tests on top 7dedfd22b79822b7f4210e6255b672ea82db6678 commit, build via ./configure --prefix=/home/melkij/tmp/ --enable-tap-tests\nDB settings:\n listen_addresses = '*'\n log_line_prefix = '%m %p %u@%d from %h [vxid:%v txid:%x] [%i] '\n lc_messages = 'C'\n shared_buffers = 512MB\n\npgbench runned from different host, in same L2 network.\nDatabase was generated by: pgbench -s 10 -i -h hostname postgres\nAfter database start I run:\n create extension if not exists pg_prewarm;\n select count(*), sum(pg_prewarm) from pg_tables join pg_prewarm(tablename::regclass) on true where schemaname= 'public';\n select count(*), sum(pg_prewarm) from pg_indexes join pg_prewarm(indexname::regclass) on true where schemaname= 'public';\nSo all data was in buffers.\n\nLoad generated by command: pgbench --builtin=select-only --time=300 -n -c 10 -h hostname postgres -M (vary)\n\nTests are:\nhead_no_pgss - unpatched version, empty shared_preload_libraries\nhead_track_none - unpatched version with:\n shared_preload_libraries = 'pg_stat_statements'\n pg_stat_statements.max = 5000\n pg_stat_statements.track = none\n pg_stat_statements.save = off\n pg_stat_statements.track_utility = off\nhead_track_top - the same but with pg_stat_statements.track=top\n5-times runned in every mode -M: simple, extended, prepared\n\npatch_not_loaded - build with latest published patches, empty shared_preload_libraries\npatch_track_none - patched build with\n shared_preload_libraries = 'pg_stat_statements'\n pg_stat_statements.max = 5000\n pg_stat_statements.track = none\n pg_stat_statements.save = off\n pg_stat_statements.track_utility = off\n pg_stat_statements.track_planning = off\npatch_track_top - the same but with 
pg_stat_statements.track=top\npatch_track_planning - with:\n shared_preload_libraries = 'pg_stat_statements'\n pg_stat_statements.max = 5000\n pg_stat_statements.track = top\n pg_stat_statements.save = off\n pg_stat_statements.track_utility = off\n pg_stat_statements.track_planning = on\n\n10-times runned in every mode -M: simple, extended, prepared\n\nResults:\n\n test | mode | average_tps | degradation_perc \n----------------------+----------+-------------+------------------\n head_no_pgss | extended | 13816 | 1.000\n patch_not_loaded | extended | 13755 | 0.996\n head_track_none | extended | 13607 | 0.985\n patch_track_none | extended | 13560 | 0.981\n head_track_top | extended | 13277 | 0.961\n patch_track_top | extended | 13189 | 0.955\n patch_track_planning | extended | 12983 | 0.940\n head_no_pgss | prepared | 29101 | 1.000\n head_track_none | prepared | 28510 | 0.980\n patch_track_none | prepared | 28481 | 0.979\n patch_not_loaded | prepared | 28382 | 0.975\n patch_track_planning | prepared | 28046 | 0.964\n head_track_top | prepared | 28035 | 0.963\n patch_track_top | prepared | 27973 | 0.961\n head_no_pgss | simple | 16733 | 1.000\n patch_not_loaded | simple | 16552 | 0.989\n head_track_none | simple | 16452 | 0.983\n patch_track_none | simple | 16365 | 0.978\n head_track_top | simple | 15867 | 0.948\n patch_track_top | simple | 15820 | 0.945\n patch_track_planning | simple | 15739 | 0.941\n\nSo I found slight slowdown with track_planning = off compared to HEAD. Possibly just at the level of measurement error. I think this is ok.\ntrack_planning = on also has no dramatic impact. In my opinion proposed design with pgss_store call is acceptable.\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 04 Sep 2019 19:19:47 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 07:19:47PM +0300, Sergei Kornilov wrote:\n>\n> ...\n>\n>Results:\n>\n> test | mode | average_tps | degradation_perc\n>----------------------+----------+-------------+------------------\n> head_no_pgss | extended | 13816 | 1.000\n> patch_not_loaded | extended | 13755 | 0.996\n> head_track_none | extended | 13607 | 0.985\n> patch_track_none | extended | 13560 | 0.981\n> head_track_top | extended | 13277 | 0.961\n> patch_track_top | extended | 13189 | 0.955\n> patch_track_planning | extended | 12983 | 0.940\n> head_no_pgss | prepared | 29101 | 1.000\n> head_track_none | prepared | 28510 | 0.980\n> patch_track_none | prepared | 28481 | 0.979\n> patch_not_loaded | prepared | 28382 | 0.975\n> patch_track_planning | prepared | 28046 | 0.964\n> head_track_top | prepared | 28035 | 0.963\n> patch_track_top | prepared | 27973 | 0.961\n> head_no_pgss | simple | 16733 | 1.000\n> patch_not_loaded | simple | 16552 | 0.989\n> head_track_none | simple | 16452 | 0.983\n> patch_track_none | simple | 16365 | 0.978\n> head_track_top | simple | 15867 | 0.948\n> patch_track_top | simple | 15820 | 0.945\n> patch_track_planning | simple | 15739 | 0.941\n>\n>So I found slight slowdown with track_planning = off compared to HEAD. Possibly just at the level of measurement error. I think this is ok.\n>track_planning = on also has no dramatic impact. In my opinion proposed design with pgss_store call is acceptable.\n>\n\nFWIW I've done some benchmarking on this too, with a single pgbench client\nrunning select-only test on a tiny database, in different modes (simple,\nextended, prepared). I've done that on two systems with different CPUs\n(spreadsheet with results attached).\n\nI don't see any performance regression - there are some small variations\nin both directions (say, ~1%) but that's well within the noise. 
So I think\nthe patch is fine in this regard.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 6 Sep 2019 16:19:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Sep 06, 2019 at 04:19:16PM +0200, Tomas Vondra wrote:\n>\n>FWIW I've done some benchmarking on this too, with a single pgbench client\n>running select-only test on a tiny database, in different modes (simple,\n>extended, prepared). I've done that on two systems with different CPUs\n>(spreadsheet with results attached).\n>\n\nAnd of course, I forgot to attach the spreadsheet, so here it is.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 6 Sep 2019 16:27:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello\r\n\r\nOn 2019/09/06 23:19, Tomas Vondra wrote:\r\n> On Wed, Sep 04, 2019 at 07:19:47PM +0300, Sergei Kornilov wrote:\r\n>>\r\n>> ...\r\n>>\r\n>> Results:\r\n>>\r\n>> test | mode | average_tps | degradation_perc\r\n>> ----------------------+----------+-------------+------------------\r\n>> head_no_pgss | extended | 13816 | 1.000\r\n>> patch_not_loaded | extended | 13755 | 0.996\r\n>> head_track_none | extended | 13607 | 0.985\r\n>> patch_track_none | extended | 13560 | 0.981\r\n>> head_track_top | extended | 13277 | 0.961\r\n>> patch_track_top | extended | 13189 | 0.955\r\n>> patch_track_planning | extended | 12983 | 0.940\r\n>> head_no_pgss | prepared | 29101 | 1.000\r\n>> head_track_none | prepared | 28510 | 0.980\r\n>> patch_track_none | prepared | 28481 | 0.979\r\n>> patch_not_loaded | prepared | 28382 | 0.975\r\n>> patch_track_planning | prepared | 28046 | 0.964\r\n>> head_track_top | prepared | 28035 | 0.963\r\n>> patch_track_top | prepared | 27973 | 0.961\r\n>> head_no_pgss | simple | 16733 | 1.000\r\n>> patch_not_loaded | simple | 16552 | 0.989\r\n>> head_track_none | simple | 16452 | 0.983\r\n>> patch_track_none | simple | 16365 | 0.978\r\n>> head_track_top | simple | 15867 | 0.948\r\n>> patch_track_top | simple | 15820 | 0.945\r\n>> patch_track_planning | simple | 15739 | 0.941\r\n>>\r\n>> So I found slight slowdown with track_planning = off compared to HEAD. \r\n>> Possibly just at the level of measurement error. I think this is ok.\r\n>> track_planning = on also has no dramatic impact. In my opinion \r\n>> proposed design with pgss_store call is acceptable.\r\n>>\r\n> \r\n> FWIW I've done some benchmarking on this too, with a single pgbench client\r\n> running select-only test on a tiny database, in different modes (simple,\r\n> extended, prepared). 
I've done that on two systems with different CPUs\r\n> (spreadsheet with results attached).\r\n\r\nRefering to Sergei's results, if a user, currently using pgss with \r\ntracking execute time, uses the new feature, a user will see 0~2.2% \r\nperformance regression as below.\r\n\r\n >> head_track_top | extended | 13277 | 0.961\r\n >> patch_track_planning | extended | 12983 | 0.940\r\n >> patch_track_planning | prepared | 28046 | 0.964\r\n >> head_track_top | prepared | 28035 | 0.963\r\n >> head_track_top | simple | 15867 | 0.948\r\n >> patch_track_planning | simple | 15739 | 0.941\r\n\r\nIf a user will not turn on the track_planning, a user will see 0.2-0.6% \r\nperformance regression as below.\r\n\r\n >> head_track_top | extended | 13277 | 0.961\r\n >> patch_track_top | extended | 13189 | 0.955\r\n >> head_track_top | prepared | 28035 | 0.963\r\n >> patch_track_top | prepared | 27973 | 0.961\r\n >> head_track_top | simple | 15867 | 0.948\r\n >> patch_track_top | simple | 15820 | 0.945\r\n\r\n> \r\n> I don't see any performance regression - there are some small variations\r\n> in both directions (say, ~1%) but that's well within the noise. So I think\r\n> the patch is fine in this regard.\r\n\r\n+1\r\n\r\n\r\nI also saw the codes and have one comment.\r\n\r\n[0002 patch]\r\nIn pgss_planner_hook:\r\n\r\n+\t\t/* calc differences of buffer counters. */\r\n+\t\tbufusage = compute_buffer_counters(bufusage_start, pgBufferUsage);\r\n+\r\n+\t\t/*\r\n+\t\t * we only store planning duration, query text has been initialized\r\n+\t\t * during previous pgss_post_parse_analyze as it not available inside\r\n+\t\t * pgss_planner_hook.\r\n+\t\t */\r\n+\t\tpgss_store(query_text,\r\n\r\nDo we need to calculate bufusage in here?\r\nWe only store planning duration in the following pgss_store.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Sun, 8 Sep 2019 09:45:27 +0000",
"msg_from": "Imai Yoshikazu <yoshikazu_i443@live.jp>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 4:19 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Sep 04, 2019 at 07:19:47PM +0300, Sergei Kornilov wrote:\n> >\n> > ...\n> >\n> >Results:\n> >\n> > test | mode | average_tps | degradation_perc\n> >----------------------+----------+-------------+------------------\n> > head_no_pgss | extended | 13816 | 1.000\n> > patch_not_loaded | extended | 13755 | 0.996\n> > head_track_none | extended | 13607 | 0.985\n> > patch_track_none | extended | 13560 | 0.981\n> > head_track_top | extended | 13277 | 0.961\n> > patch_track_top | extended | 13189 | 0.955\n> > patch_track_planning | extended | 12983 | 0.940\n> > head_no_pgss | prepared | 29101 | 1.000\n> > head_track_none | prepared | 28510 | 0.980\n> > patch_track_none | prepared | 28481 | 0.979\n> > patch_not_loaded | prepared | 28382 | 0.975\n> > patch_track_planning | prepared | 28046 | 0.964\n> > head_track_top | prepared | 28035 | 0.963\n> > patch_track_top | prepared | 27973 | 0.961\n> > head_no_pgss | simple | 16733 | 1.000\n> > patch_not_loaded | simple | 16552 | 0.989\n> > head_track_none | simple | 16452 | 0.983\n> > patch_track_none | simple | 16365 | 0.978\n> > head_track_top | simple | 15867 | 0.948\n> > patch_track_top | simple | 15820 | 0.945\n> > patch_track_planning | simple | 15739 | 0.941\n> >\n> >So I found slight slowdown with track_planning = off compared to HEAD. Possibly just at the level of measurement error. I think this is ok.\n> >track_planning = on also has no dramatic impact. In my opinion proposed design with pgss_store call is acceptable.\n> >\n>\n> FWIW I've done some benchmarking on this too, with a single pgbench client\n> running select-only test on a tiny database, in different modes (simple,\n> extended, prepared). 
I've done that on two systems with different CPUs\n> (spreadsheet with results attached).\n>\n> I don't see any performance regression - there are some small variations\n> in both directions (say, ~1%) but that's well within the noise. So I think\n> the patch is fine in this regard.\n\nThanks a lot Sergei and Tomas! It's good to know that this patch\ndoesn't add significant overhead.\n\n\n",
"msg_date": "Wed, 11 Sep 2019 00:30:25 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello,\n\nOn Sun, Sep 8, 2019 at 11:45 AM Imai Yoshikazu <yoshikazu_i443@live.jp> wrote:\n>\n> I also saw the codes and have one comment.\n\nThanks for looking at this patch!\n\n> [0002 patch]\n> In pgss_planner_hook:\n>\n> + /* calc differences of buffer counters. */\n> + bufusage = compute_buffer_counters(bufusage_start, pgBufferUsage);\n> +\n> + /*\n> + * we only store planning duration, query text has been initialized\n> + * during previous pgss_post_parse_analyze as it not available inside\n> + * pgss_planner_hook.\n> + */\n> + pgss_store(query_text,\n>\n> Do we need to calculate bufusage in here?\n> We only store planning duration in the following pgss_store.\n\nGood point! Postgres can definitely access some buffers while\nplanning a query (the most obvious example would be\nget_actual_variable_range()), but as far as I can tell those were\npreviously not accounted for with pg_stat_statements as\nqueryDesc->totaltime->bufusage is only accumulating buffer usage in\nthe executor, and indeed current patch also ignore such computed\ncounters.\n\nI think it would be better to keep this bufusage calculation during\nplanning and fix pgss_store() to process them, but this would add\nslightly more overhead.\n\n\n> We only store planning duration in the following pgss_store.\n>\n> --\n> Yoshikazu Imai\n\n\n",
"msg_date": "Wed, 11 Sep 2019 01:27:06 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Sept 10, 2019 at 11:27 PM, Julien Rouhaud wrote:\r\n> > [0002 patch]\r\n> > In pgss_planner_hook:\r\n> >\r\n> > + /* calc differences of buffer counters. */\r\n> > + bufusage = compute_buffer_counters(bufusage_start, pgBufferUsage);\r\n> > +\r\n> > + /*\r\n> > + * we only store planning duration, query text has been initialized\r\n> > + * during previous pgss_post_parse_analyze as it not available inside\r\n> > + * pgss_planner_hook.\r\n> > + */\r\n> > + pgss_store(query_text,\r\n> >\r\n> > Do we need to calculate bufusage in here?\r\n> > We only store planning duration in the following pgss_store.\r\n> \r\n> Good point! Postgres can definitely access some buffers while\r\n> planning a query (the most obvious example would be\r\n> get_actual_variable_range()), but as far as I can tell those were\r\n> previously not accounted for with pg_stat_statements as\r\n> queryDesc->totaltime->bufusage is only accumulating buffer usage in\r\n> the executor, and indeed current patch also ignore such computed\r\n> counters.\r\n> \r\n> I think it would be better to keep this bufusage calculation during\r\n> planning and fix pgss_store() to process them, but this would add\r\nslightly more overhead.\r\n\r\nSorry for my late reply.\r\nI think overhead would be trivial and we can include bufusage of planning from\r\nthe POV of overhead, but yeah, it will be backward incompatibility if we\r\ninclude them.\r\n\r\n\r\nBTW, ISTM it is good for including {min,max,mean,stddev}_plan_time to\r\npg_stat_statements. Generally plan_time would be almost the same time in each\r\nexecution for the same query, but there are some exceptions. For example, if we\r\nuse prepare statements which uses partition tables, time differs largely\r\nbetween creating a general plan and creating a custom plan.\r\n\r\n1. Create partition table which has 1024 partitions.\r\n2. 
Prepare select and update statements.\r\n sel) prepare sel(int) as select * from pt where a = $1;\r\n upd) prepare upd(int, int) as update pt set a = $2 where a = $1;\r\n3. Execute each statement for 8 times.\r\n 3-1. Select from pg_stat_statements view after every execution.\r\n select query, plans, total_plan_time, calls, total_exec_time from pg_stat_statements where query like 'prepare%';\r\n\r\n\r\nResults of pg_stat_statements of sel) are\r\nquery | plans | total_plan_time | calls | total_exec_time \r\n---------------------------------------------------+-------+-----------------+-------+-----------------\r\n prepare sel(int) as select * from pt where a = $1 | 1 | 0.164361 | 1 | 0.004613\r\n prepare sel(int) as select * from pt where a = $1 | 2 | 0.27715500000000004 | 2 | 0.009447\r\n prepare sel(int) as select * from pt where a = $1 | 3 | 0.39100100000000004 | 3 | 0.014281\r\n prepare sel(int) as select * from pt where a = $1 | 4 | 0.504004 | 4 | 0.019265\r\n prepare sel(int) as select * from pt where a = $1 | 5 | 0.628242 | 5 | 0.024091\r\n prepare sel(int) as select * from pt where a = $1 | 7 | 24.213586000000003 | 6 | 0.029144\r\n prepare sel(int) as select * from pt where a = $1 | 8 | 24.368900000000004 | 7 | 0.034099\r\n prepare sel(int) as select * from pt where a = $1 | 9 | 24.527956000000003 | 8 | 0.046152\r\n\r\n\r\nResults of pg_stat_statements of upd) are\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 1 | 0.280099 | 1 | 0.013138\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 2 | 0.405416 | 2 | 0.01894\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 3 | 0.532361 | 3 | 0.040716\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 4 | 0.671445 | 4 | 0.046566\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 5 | 0.798531 | 5 | 0.052729000000000005\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 7 | 896.915458 | 6 | 0.05888600000000001\r\n prepare upd(int, 
int) as update pt set a = $2 where a = $1 | 8 | 897.043512 | 7 | 0.064446\r\n prepare upd(int, int) as update pt set a = $2 where a = $1 | 9 | 897.169711 | 8 | 0.070644\r\n\r\n\r\nHow do you think about that?\r\n\r\n\r\n--\r\nYoshikazu Imai \r\n\r\n",
"msg_date": "Fri, 8 Nov 2019 04:35:30 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 5:35 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> On Tue, Sept 10, 2019 at 11:27 PM, Julien Rouhaud wrote:\n> > > [0002 patch]\n> > > In pgss_planner_hook:\n> > >\n> > > + /* calc differences of buffer counters. */\n> > > + bufusage = compute_buffer_counters(bufusage_start, pgBufferUsage);\n> > > +\n> > > + /*\n> > > + * we only store planning duration, query text has been initialized\n> > > + * during previous pgss_post_parse_analyze as it not available inside\n> > > + * pgss_planner_hook.\n> > > + */\n> > > + pgss_store(query_text,\n> > >\n> > > Do we need to calculate bufusage in here?\n> > > We only store planning duration in the following pgss_store.\n> >\n> > Good point! Postgres can definitely access some buffers while\n> > planning a query (the most obvious example would be\n> > get_actual_variable_range()), but as far as I can tell those were\n> > previously not accounted for with pg_stat_statements as\n> > queryDesc->totaltime->bufusage is only accumulating buffer usage in\n> > the executor, and indeed current patch also ignore such computed\n> > counters.\n> >\n> > I think it would be better to keep this bufusage calculation during\n> > planning and fix pgss_store() to process them, but this would add\n> slightly more overhead.\n>\n> Sorry for my late reply.\n> I think overhead would be trivial and we can include bufusage of planning from\n> the POV of overhead, but yeah, it will be backward incompatibility if we\n> include them.\n\nOk, let's keep planning's bufusage then.\n\n> BTW, ISTM it is good for including {min,max,mean,stddev}_plan_time to\n> pg_stat_statements. Generally plan_time would be almost the same time in each\n> execution for the same query, but there are some exceptions. For example, if we\n> use prepare statements which uses partition tables, time differs largely\n> between creating a general plan and creating a custom plan.\n>\n> 1. 
Create partition table which has 1024 partitions.\n> 2. Prepare select and update statements.\n> sel) prepare sel(int) as select * from pt where a = $1;\n> upd) prepare upd(int, int) as update pt set a = $2 where a = $1;\n> 3. Execute each statement for 8 times.\n> 3-1. Select from pg_stat_statements view after every execution.\n> select query, plans, total_plan_time, calls, total_exec_time from pg_stat_statements where query like 'prepare%';\n>\n>\n> Results of pg_stat_statements of sel) are\n> query | plans | total_plan_time | calls | total_exec_time\n> ---------------------------------------------------+-------+-----------------+-------+-----------------\n> prepare sel(int) as select * from pt where a = $1 | 1 | 0.164361 | 1 | 0.004613\n> prepare sel(int) as select * from pt where a = $1 | 2 | 0.27715500000000004 | 2 | 0.009447\n> prepare sel(int) as select * from pt where a = $1 | 3 | 0.39100100000000004 | 3 | 0.014281\n> prepare sel(int) as select * from pt where a = $1 | 4 | 0.504004 | 4 | 0.019265\n> prepare sel(int) as select * from pt where a = $1 | 5 | 0.628242 | 5 | 0.024091\n> prepare sel(int) as select * from pt where a = $1 | 7 | 24.213586000000003 | 6 | 0.029144\n> prepare sel(int) as select * from pt where a = $1 | 8 | 24.368900000000004 | 7 | 0.034099\n> prepare sel(int) as select * from pt where a = $1 | 9 | 24.527956000000003 | 8 | 0.046152\n>\n>\n> Results of pg_stat_statements of upd) are\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 1 | 0.280099 | 1 | 0.013138\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 2 | 0.405416 | 2 | 0.01894\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 3 | 0.532361 | 3 | 0.040716\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 4 | 0.671445 | 4 | 0.046566\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 5 | 0.798531 | 5 | 0.052729000000000005\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 7 | 896.915458 | 6 | 
0.05888600000000001\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 8 | 897.043512 | 7 | 0.064446\n> prepare upd(int, int) as update pt set a = $2 where a = $1 | 9 | 897.169711 | 8 | 0.070644\n>\n>\n> How do you think about that?\n\nThat's indeed a very valid point and something we should help user to\ninvestigate. I'll submit an updated patch with support for\nmin/max/mean/stddev plan time shortly.\n\n\n",
"msg_date": "Fri, 8 Nov 2019 15:31:36 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 3:31 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Nov 8, 2019 at 5:35 AM imai.yoshikazu@fujitsu.com\n> <imai.yoshikazu@fujitsu.com> wrote:\n> >\n> > On Tue, Sept 10, 2019 at 11:27 PM, Julien Rouhaud wrote:\n> > > > [0002 patch]\n> > > > In pgss_planner_hook:\n> > > >\n> > > > + /* calc differences of buffer counters. */\n> > > > + bufusage = compute_buffer_counters(bufusage_start, pgBufferUsage);\n> > > > +\n> > > > + /*\n> > > > + * we only store planning duration, query text has been initialized\n> > > > + * during previous pgss_post_parse_analyze as it not available inside\n> > > > + * pgss_planner_hook.\n> > > > + */\n> > > > + pgss_store(query_text,\n> > > >\n> > > > Do we need to calculate bufusage in here?\n> > > > We only store planning duration in the following pgss_store.\n> > >\n> > > Good point! Postgres can definitely access some buffers while\n> > > planning a query (the most obvious example would be\n> > > get_actual_variable_range()), but as far as I can tell those were\n> > > previously not accounted for with pg_stat_statements as\n> > > queryDesc->totaltime->bufusage is only accumulating buffer usage in\n> > > the executor, and indeed current patch also ignore such computed\n> > > counters.\n> > >\n> > > I think it would be better to keep this bufusage calculation during\n> > > planning and fix pgss_store() to process them, but this would add\n> > slightly more overhead.\n> >\n> > Sorry for my late reply.\n> > I think overhead would be trivial and we can include bufusage of planning from\n> > the POV of overhead, but yeah, it will be backward incompatibility if we\n> > include them.\n>\n> Ok, let's keep planning's bufusage then.\n>\n> > BTW, ISTM it is good for including {min,max,mean,stddev}_plan_time to\n> > pg_stat_statements. Generally plan_time would be almost the same time in each\n> > execution for the same query, but there are some exceptions. 
For example, if we\n> > use prepare statements which uses partition tables, time differs largely\n> > between creating a general plan and creating a custom plan.\n> >\n> > 1. Create partition table which has 1024 partitions.\n> > 2. Prepare select and update statements.\n> > sel) prepare sel(int) as select * from pt where a = $1;\n> > upd) prepare upd(int, int) as update pt set a = $2 where a = $1;\n> > 3. Execute each statement for 8 times.\n> > 3-1. Select from pg_stat_statements view after every execution.\n> > select query, plans, total_plan_time, calls, total_exec_time from pg_stat_statements where query like 'prepare%';\n> >\n> >\n> > Results of pg_stat_statements of sel) are\n> > query | plans | total_plan_time | calls | total_exec_time\n> > ---------------------------------------------------+-------+-----------------+-------+-----------------\n> > prepare sel(int) as select * from pt where a = $1 | 1 | 0.164361 | 1 | 0.004613\n> > prepare sel(int) as select * from pt where a = $1 | 2 | 0.27715500000000004 | 2 | 0.009447\n> > prepare sel(int) as select * from pt where a = $1 | 3 | 0.39100100000000004 | 3 | 0.014281\n> > prepare sel(int) as select * from pt where a = $1 | 4 | 0.504004 | 4 | 0.019265\n> > prepare sel(int) as select * from pt where a = $1 | 5 | 0.628242 | 5 | 0.024091\n> > prepare sel(int) as select * from pt where a = $1 | 7 | 24.213586000000003 | 6 | 0.029144\n> > prepare sel(int) as select * from pt where a = $1 | 8 | 24.368900000000004 | 7 | 0.034099\n> > prepare sel(int) as select * from pt where a = $1 | 9 | 24.527956000000003 | 8 | 0.046152\n> >\n> >\n> > Results of pg_stat_statements of upd) are\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 1 | 0.280099 | 1 | 0.013138\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 2 | 0.405416 | 2 | 0.01894\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 3 | 0.532361 | 3 | 0.040716\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 
4 | 0.671445 | 4 | 0.046566\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 5 | 0.798531 | 5 | 0.052729000000000005\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 7 | 896.915458 | 6 | 0.05888600000000001\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 8 | 897.043512 | 7 | 0.064446\n> > prepare upd(int, int) as update pt set a = $2 where a = $1 | 9 | 897.169711 | 8 | 0.070644\n> >\n> >\n> > How do you think about that?\n>\n> That's indeed a very valid point and something we should help user to\n> investigate. I'll submit an updated patch with support for\n> min/max/mean/stddev plan time shortly.\n\nI attach v3 patches implementing those counters. Note that to avoid\nduplicating some code (related to Welford's method), I switched some\nof the counters to arrays rather than scalar variables. It\nunfortunately makes pg_stat_statements_internal() a little bit messy,\nbut I hope that it's still acceptable. While doing this refactoring I\nsaw that previous patches were failing to accumulate the buffers used\nduring planning, which is now fixed.",
"msg_date": "Sat, 9 Nov 2019 14:36:20 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sat, Nov 9, 2019 at 1:36 PM, Julien Rouhaud wrote:\r\n> On Fri, Nov 8, 2019 at 3:31 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> >\r\n> > On Fri, Nov 8, 2019 at 5:35 AM imai.yoshikazu@fujitsu.com\r\n> > <imai.yoshikazu@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tue, Sept 10, 2019 at 11:27 PM, Julien Rouhaud wrote:\r\n> > > > > [0002 patch]\r\n> > > > > In pgss_planner_hook:\r\n> > > > >\r\n> > > > > + /* calc differences of buffer counters. */\r\n> > > > > + bufusage =\r\n> > > > > + compute_buffer_counters(bufusage_start, pgBufferUsage);\r\n> > > > > +\r\n> > > > > + /*\r\n> > > > > + * we only store planning duration, query text has been initialized\r\n> > > > > + * during previous pgss_post_parse_analyze as it not available inside\r\n> > > > > + * pgss_planner_hook.\r\n> > > > > + */\r\n> > > > > + pgss_store(query_text,\r\n> > > > >\r\n> > > > > Do we need to calculate bufusage in here?\r\n> > > > > We only store planning duration in the following pgss_store.\r\n> > > >\r\n> > > > Good point! 
Postgres can definitely access some buffers while\r\n> > > > planning a query (the most obvious example would be\r\n> > > > get_actual_variable_range()), but as far as I can tell those were\r\n> > > > previously not accounted for with pg_stat_statements as\r\n> > > > queryDesc->totaltime->bufusage is only accumulating buffer usage in\r\n> > > > the executor, and indeed current patch also ignore such computed\r\n> > > > counters.\r\n> > > >\r\n> > > > I think it would be better to keep this bufusage calculation\r\n> > > > during planning and fix pgss_store() to process them, but this\r\n> > > > would add\r\n> > > slightly more overhead.\r\n> > >\r\n> > > Sorry for my late reply.\r\n> > > I think overhead would be trivial and we can include bufusage of\r\n> > > planning from the POV of overhead, but yeah, it will be backward\r\n> > > incompatibility if we include them.\r\n> >\r\n> > Ok, let's keep planning's bufusage then.\r\n> >\r\n> > > BTW, ISTM it is good for including {min,max,mean,stddev}_plan_time\r\n> > > to pg_stat_statements. Generally plan_time would be almost the same\r\n> > > time in each execution for the same query, but there are some\r\n> > > exceptions. For example, if we use prepare statements which uses\r\n> > > partition tables, time differs largely between creating a general plan and creating a custom plan.\r\n> > >\r\n> > > 1. Create partition table which has 1024 partitions.\r\n> > > 2. Prepare select and update statements.\r\n> > > sel) prepare sel(int) as select * from pt where a = $1;\r\n> > > upd) prepare upd(int, int) as update pt set a = $2 where a = $1;\r\n> > > 3. Execute each statement for 8 times.\r\n> > > 3-1. 
Select from pg_stat_statements view after every execution.\r\n> > > select query, plans, total_plan_time, calls, total_exec_time\r\n> > > from pg_stat_statements where query like 'prepare%';\r\n> > >\r\n> > >\r\n> > > Results of pg_stat_statements of sel) are\r\n> > > query | plans | total_plan_time | calls | total_exec_time\r\n> > > ---------------------------------------------------+-------+-----------------+-------+-----------------\r\n> > > prepare sel(int) as select * from pt where a = $1 | 1 | 0.164361 | 1 | 0.004613\r\n> > > prepare sel(int) as select * from pt where a = $1 | 2 | 0.27715500000000004 | 2 | 0.009447\r\n> > > prepare sel(int) as select * from pt where a = $1 | 3 | 0.39100100000000004 | 3 | 0.014281\r\n> > > prepare sel(int) as select * from pt where a = $1 | 4 | 0.504004 | 4 | 0.019265\r\n> > > prepare sel(int) as select * from pt where a = $1 | 5 | 0.628242 | 5 | 0.024091\r\n> > > prepare sel(int) as select * from pt where a = $1 | 7 | 24.213586000000003 | 6 | 0.029144\r\n> > > prepare sel(int) as select * from pt where a = $1 | 8 | 24.368900000000004 | 7 | 0.034099\r\n> > > prepare sel(int) as select * from pt where a = $1 | 9 | 24.527956000000003 | 8 | 0.046152\r\n> > >\r\n> > >\r\n> > > Results of pg_stat_statements of upd) are\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 1 | 0.280099 | 1 | 0.013138\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 2 | 0.405416 | 2 | 0.01894\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 3 | 0.532361 | 3 | 0.040716\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 4 | 0.671445 | 4 | 0.046566\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 5 | 0.798531 | 5 | 0.052729000000000005\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 7 | 896.915458 | 6 | 0.05888600000000001\r\n> > > prepare upd(int, int) as update pt set a = $2 where a = $1 | 8 | 897.043512 | 7 | 0.064446\r\n> > > 
prepare upd(int, int) as update pt set a = $2 where a = $1 | 9 | 897.169711 | 8 | 0.070644\r\n> > >\r\n> > >\r\n> > > How do you think about that?\r\n> >\r\n> > That's indeed a very valid point and something we should help user to\r\n> > investigate. I'll submit an updated patch with support for\r\n> > min/max/mean/stddev plan time shortly.\r\n> \r\n> I attach v3 patches implementing those counters. \r\n\r\nThanks for updating the patch! Now I can see min/max/mean/stddev plan time.\r\n\r\n\r\n> Note that to avoid duplicating some code (related to Welford's method),\r\n> I switched some of the counters to arrays rather than scalar variables. It unfortunately makes\r\n> pg_stat_statements_internal() a little bit messy, but I hope that it's still acceptable. \r\n\r\nYeah, I also think it's acceptable, but I think the codes like below one is more\r\nunderstandable than using for loop in pg_stat_statements_internal(),\r\nalthough some codes will be duplicated.\r\n\r\npg_stat_statements_internal():\r\n\r\nif (api_version >= PGSS_V1_8)\r\n{\r\n kind = PGSS_PLAN;\r\n values[i++] = Int64GetDatumFast(tmp.calls[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.total_time[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\r\n values[i++] = Float8GetDatumFast(stddev(tmp));\r\n}\r\n\r\nkind = PGSS_EXEC;\r\nvalues[i++] = Int64GetDatumFast(tmp.calls[kind]);\r\nvalues[i++] = Float8GetDatumFast(tmp.total_time[kind]);\r\nif (api_version >= PGSS_V1_3)\r\n{\r\n values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\r\n values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\r\n values[i++] = Float8GetDatumFast(stddev(tmp));\r\n}\r\n\r\n\r\nstddev(Counters counters)\r\n{\r\n /*\r\n * Note we are calculating the population variance here, not the\r\n * sample variance, as we have data for the whole 
population, so\r\n * Bessel's correction is not used, and we don't divide by\r\n * tmp.calls - 1.\r\n */\r\n if (counters.calls[kind] > 1) \r\n return stddev = sqrt(counters.sum_var_time[kind] / counters.calls[kind]);\r\n\r\n return 0.0;\r\n}\r\n\r\n\r\n> While doing this refactoring\r\n> I saw that previous patches were failing to accumulate the buffers used during planning, which is now fixed.\r\n\r\nChecked.\r\nNow buffer usage columns include buffer usage during planning and executing,\r\nbut if we turn off track_planning, buffer usage during planning is also not\r\ntracked which is good for users who don't want to take into account of that.\r\n\r\n\r\nWhat I'm concerned about is column names will not be backward-compatible.\r\n{total, min, max, mean, stddev}_{plan, exec}_time are the best names which\r\ncorrectly show the meaning of its value, but we can't use\r\n{total, min, max, mean, stddev}_time anymore and it might be harmful for\r\nsome users.\r\nI don't come up with any good idea for that...\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Tue, 12 Nov 2019 04:41:54 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 5:41 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> On Sat, Nov 9, 2019 at 1:36 PM, Julien Rouhaud wrote:\n> >\n> > I attach v3 patches implementing those counters.\n>\n> Thanks for updating the patch! Now I can see min/max/mean/stddev plan time.\n>\n>\n> > Note that to avoid duplicating some code (related to Welford's method),\n> > I switched some of the counters to arrays rather than scalar variables. It unfortunately makes\n> > pg_stat_statements_internal() a little bit messy, but I hope that it's still acceptable.\n>\n> Yeah, I also think it's acceptable, but I think the codes like below one is more\n> understandable than using for loop in pg_stat_statements_internal(),\n> although some codes will be duplicated.\n>\n> pg_stat_statements_internal():\n>\n> if (api_version >= PGSS_V1_8)\n> {\n> kind = PGSS_PLAN;\n> values[i++] = Int64GetDatumFast(tmp.calls[kind]);\n> values[i++] = Float8GetDatumFast(tmp.total_time[kind]);\n> values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\n> values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\n> values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\n> values[i++] = Float8GetDatumFast(stddev(tmp));\n> }\n>\n> kind = PGSS_EXEC;\n> values[i++] = Int64GetDatumFast(tmp.calls[kind]);\n> values[i++] = Float8GetDatumFast(tmp.total_time[kind]);\n> if (api_version >= PGSS_V1_3)\n> {\n> values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\n> values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\n> values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\n> values[i++] = Float8GetDatumFast(stddev(tmp));\n> }\n>\n>\n> stddev(Counters counters)\n> {\n> /*\n> * Note we are calculating the population variance here, not the\n> * sample variance, as we have data for the whole population, so\n> * Bessel's correction is not used, and we don't divide by\n> * tmp.calls - 1.\n> */\n> if (counters.calls[kind] > 1)\n> return stddev = sqrt(counters.sum_var_time[kind] / 
counters.calls[kind]);\n>\n> return 0.0;\n> }\n\nYes, that's also a possibility (though this should also take the\n\"kind\" as parameter). I wanted to avoid adding a new function and\nsave some redundant code, but I can change it in the next version of\nthe patch if needed.\n\n> > While doing this refactoring\n> > I saw that previous patches were failing to accumulate the buffers used during planning, which is now fixed.\n>\n> Checked.\n> Now buffer usage columns include buffer usage during planning and executing,\n> but if we turn off track_planning, buffer usage during planning is also not\n> tracked which is good for users who don't want to take into account of that.\n\nIndeed. Note that there's a similar discussion on adding planning\nbuffer counters to explain in [1]. I'm unsure if merging planning and\nexecution counters in the same columns is ok or not.\n\n> What I'm concerned about is column names will not be backward-compatible.\n> {total, min, max, mean, stddev}_{plan, exec}_time are the best names which\n> correctly show the meaning of its value, but we can't use\n> {total, min, max, mean, stddev}_time anymore and it might be harmful for\n> some users.\n> I don't come up with any good idea for that...\n\nWell, perhaps keeping the old {total, min, max, mean, stddev}_time\nwould be ok, as those historically meant \"execution\". I don't have a\nstrong opinion here.\n\n[1] https://www.postgresql.org/message-id/20191112205506.rvadbx2dnku3paaw@alap3.anarazel.de\n\n\n",
"msg_date": "Wed, 13 Nov 2019 11:49:39 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 10:50 AM, Julien Rouhaud wrote:\r\n> On Tue, Nov 12, 2019 at 5:41 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\r\n> >\r\n> > On Sat, Nov 9, 2019 at 1:36 PM, Julien Rouhaud wrote:\r\n> > >\r\n> > > I attach v3 patches implementing those counters.\r\n> >\r\n> > Thanks for updating the patch! Now I can see min/max/mean/stddev plan time.\r\n> >\r\n> >\r\n> > > Note that to avoid duplicating some code (related to Welford's\r\n> > > method), I switched some of the counters to arrays rather than\r\n> > > scalar variables. It unfortunately makes\r\n> > > pg_stat_statements_internal() a little bit messy, but I hope that it's still acceptable.\r\n> >\r\n> > Yeah, I also think it's acceptable, but I think the codes like below\r\n> > one is more understandable than using for loop in\r\n> > pg_stat_statements_internal(), although some codes will be duplicated.\r\n> >\r\n> > pg_stat_statements_internal():\r\n> >\r\n> > if (api_version >= PGSS_V1_8)\r\n> > {\r\n> > kind = PGSS_PLAN;\r\n> > values[i++] = Int64GetDatumFast(tmp.calls[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.total_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(stddev(tmp)); }\r\n> >\r\n> > kind = PGSS_EXEC;\r\n> > values[i++] = Int64GetDatumFast(tmp.calls[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.total_time[kind]);\r\n> > if (api_version >= PGSS_V1_3)\r\n> > {\r\n> > values[i++] = Float8GetDatumFast(tmp.min_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.max_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(tmp.mean_time[kind]);\r\n> > values[i++] = Float8GetDatumFast(stddev(tmp)); }\r\n> >\r\n> >\r\n> > stddev(Counters counters)\r\n> > {\r\n> > /*\r\n> > * Note we are calculating the population variance here, not the\r\n> > * sample 
variance, as we have data for the whole population, so\r\n> > * Bessel's correction is not used, and we don't divide by\r\n> > * tmp.calls - 1.\r\n> > */\r\n> > if (counters.calls[kind] > 1)\r\n> > return stddev = sqrt(counters.sum_var_time[kind] / counters.calls[kind]);\r\n> >\r\n> > return 0.0;\r\n> > }\r\n> \r\n> Yes, that's also a possibility (though this should also take the\r\n> \"kind\" as parameter). I wanted to avoid adding a new function and\r\n> save some redundant code, but I can change it in the next version of\r\n> the patch if needed.\r\n\r\nOkay. It's not much a serious problem, so we can leave it as it is.\r\n\r\n\r\n> > > While doing this refactoring\r\n> > > I saw that previous patches were failing to accumulate the buffers used during planning, which is now fixed.\r\n> >\r\n> > Checked.\r\n> > Now buffer usage columns include buffer usage during planning and executing,\r\n> > but if we turn off track_planning, buffer usage during planning is also not\r\n> > tracked which is good for users who don't want to take into account of that.\r\n> \r\n> Indeed. Note that there's a similar discussion on adding planning\r\n> buffer counters to explain in [1]. I'm unsure if merging planning and\r\n> execution counters in the same columns is ok or not.\r\n\r\nStoring buffer usage to different columns is useful to detect the cause of the problems if there are the cases many buffers are used during planning, but I'm also unsure those cases actually exist. 
\r\n\r\n\r\n> > What I'm concerned about is column names will not be backward-compatible.\r\n> > {total, min, max, mean, stddev}_{plan, exec}_time are the best names which\r\n> > correctly show the meaning of its value, but we can't use\r\n> > {total, min, max, mean, stddev}_time anymore and it might be harmful for\r\n> > some users.\r\n> > I don't come up with any good idea for that...\r\n> \r\n> Well, perhaps keeping the old {total, min, max, mean, stddev}_time\r\n> would be ok, as those historically meant \"execution\". I don't have a\r\n> strong opinion here.\r\n\r\nActually I also don't have strong opinion but I thought someone would complain about renaming of those columns and also some tools like monitoring which use those columns will not work. If we use {total, min, max, mean, stddev}_time, someone might mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning and execution. \r\nIf I need to choose {total, min, max, mean, stddev}_time or {total, min, max, mean, stddev}_exec_time, I choose former one because choosing best name is not worth destructing the existing scripts or tools.\r\n\r\nThanks.\r\n--\r\nYoshikazu Imai \r\n",
"msg_date": "Fri, 15 Nov 2019 01:00:08 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 2:00 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> Actually I also don't have strong opinion but I thought someone would complain about renaming of those columns and also some tools like monitoring which use those columns will not work. If we use {total, min, max, mean, stddev}_time, someone might mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning and execution.\n> If I need to choose {total, min, max, mean, stddev}_time or {total, min, max, mean, stddev}_exec_time, I choose former one because choosing best name is not worth destructing the existing scripts or tools.\n\nWe could definitely keep (plan|exec)_time for the SRF, and have the\n{total, min, max, mean, stddev}_time created by the view to be a sum\nof planning + execution for each counter, and it doesn't sound like a\nbad idea if having even more columns in the view is not an issue.\n\n\n",
"msg_date": "Tue, 19 Nov 2019 15:27:27 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 2:27 PM, Julien Rouhaud wrote:\r\n> On Fri, Nov 15, 2019 at 2:00 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\r\n> >\r\n> > Actually I also don't have strong opinion but I thought someone would complain about renaming of those columns and\r\n> also some tools like monitoring which use those columns will not work. If we use {total, min, max, mean, stddev}_time,\r\n> someone might mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning\r\n> and execution.\r\n> > If I need to choose {total, min, max, mean, stddev}_time or {total, min, max, mean, stddev}_exec_time, I choose former\r\n> one because choosing best name is not worth destructing the existing scripts or tools.\r\n> \r\n> We could definitely keep (plan|exec)_time for the SRF, and have the {total, min, max, mean, stddev}_time created by\r\n> the view to be a sum of planning + execution for each counter\r\n\r\nI might misunderstand but if we define {total, min, max, mean, stddev}_time is\r\njust a sum of planning + execution for each counter like\r\n\"select total_plan_time + total_exec_time as total_time from pg_stat_statements\",\r\nI wonder we can calculate stddev_time correctly. If we prepare variables in\r\nthe codes to calculate those values, yes, we can correctly calculate those\r\nvalues even for the total_stddev.\r\n\r\n> and it doesn't sound like a bad idea if having even\r\n> more columns in the view is not an issue.\r\n\r\nI also wondered having many columns in the view is ok, but if it's ok, I agree\r\nall of those columns are in the view. Only problem I can come up with is the\r\nview will look bad with many columns, but it already looks bad because query\r\ncolumn values tend to be long and each row can't fit in the one row in the\r\nconsole.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Wed, 20 Nov 2019 01:06:42 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 2:06 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> On Tue, Nov 19, 2019 at 2:27 PM, Julien Rouhaud wrote:\n> > On Fri, Nov 15, 2019 at 2:00 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\n> > >\n> > > Actually I also don't have strong opinion but I thought someone would complain about renaming of those columns and\n> > also some tools like monitoring which use those columns will not work. If we use {total, min, max, mean, stddev}_time,\n> > someone might mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning\n> > and execution.\n> > > If I need to choose {total, min, max, mean, stddev}_time or {total, min, max, mean, stddev}_exec_time, I choose former\n> > one because choosing best name is not worth destructing the existing scripts or tools.\n> >\n> > We could definitely keep (plan|exec)_time for the SRF, and have the {total, min, max, mean, stddev}_time created by\n> > the view to be a sum of planning + execution for each counter\n>\n> I might misunderstand but if we define {total, min, max, mean, stddev}_time is\n> just a sum of planning + execution for each counter like\n> \"select total_plan_time + total_exec_time as total_time from pg_stat_statements\",\n> I wonder we can calculate stddev_time correctly. If we prepare variables in\n> the codes to calculate those values, yes, we can correctly calculate those\n> values even for the total_stddev.\n\nYes you're right, this can't possibly work for most of the counters.\nAnd also, since there's no guarantee that each execution will follow a\nplanning, providing such global counters for min/max/mean and stddev\nwouldn't make much sense.\n\n\n",
"msg_date": "Wed, 20 Nov 2019 17:54:51 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 4:55 PM, Julien Rouhaud wrote:\r\n> On Wed, Nov 20, 2019 at 2:06 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\r\n> >\r\n> > On Tue, Nov 19, 2019 at 2:27 PM, Julien Rouhaud wrote:\r\n> > > On Fri, Nov 15, 2019 at 2:00 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Actually I also don't have strong opinion but I thought someone\r\n> > > > would complain about renaming of those columns and\r\n> > > also some tools like monitoring which use those columns will not\r\n> > > work. If we use {total, min, max, mean, stddev}_time, someone might\r\n> > > mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning and\r\n> execution.\r\n> > > > If I need to choose {total, min, max, mean, stddev}_time or\r\n> > > > {total, min, max, mean, stddev}_exec_time, I choose former\r\n> > > one because choosing best name is not worth destructing the existing scripts or tools.\r\n> > >\r\n> > > We could definitely keep (plan|exec)_time for the SRF, and have the\r\n> > > {total, min, max, mean, stddev}_time created by the view to be a sum\r\n> > > of planning + execution for each counter\r\n> >\r\n> > I might misunderstand but if we define {total, min, max, mean,\r\n> > stddev}_time is just a sum of planning + execution for each counter\r\n> > like \"select total_plan_time + total_exec_time as total_time from\r\n> > pg_stat_statements\", I wonder we can calculate stddev_time correctly.\r\n> > If we prepare variables in the codes to calculate those values, yes,\r\n> > we can correctly calculate those values even for the total_stddev.\r\n> \r\n> Yes you're right, this can't possibly work for most of the counters.\r\n> And also, since there's no guarantee that each execution will follow a planning, providing such global counters for\r\n> min/max/mean and stddev wouldn't make much sense.\r\n\r\nAh, I see. 
Planning counts and execution counts differ.\r\nIt might be difficult to redefine the meaning of {min, max, mean, stddev}_time precisely, and even if we can redefine it correctly, it would not be intuitive.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Fri, 22 Nov 2019 10:23:09 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 11:23 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> On Wed, Nov 20, 2019 at 4:55 PM, Julien Rouhaud wrote:\n> > On Wed, Nov 20, 2019 at 2:06 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\n> > >\n> > > On Tue, Nov 19, 2019 at 2:27 PM, Julien Rouhaud wrote:\n> > > > On Fri, Nov 15, 2019 at 2:00 AM imai.yoshikazu@fujitsu.com <imai.yoshikazu@fujitsu.com> wrote:\n> > > > >\n> > > > > Actually I also don't have strong opinion but I thought someone\n> > > > > would complain about renaming of those columns and\n> > > > also some tools like monitoring which use those columns will not\n> > > > work. If we use {total, min, max, mean, stddev}_time, someone might\n> > > > mistakenly understand {total, min, max, mean, stddev}_time mean {total, min, max, mean, stddev} of planning and\n> > execution.\n> > > > > If I need to choose {total, min, max, mean, stddev}_time or\n> > > > > {total, min, max, mean, stddev}_exec_time, I choose former\n> > > > one because choosing best name is not worth destructing the existing scripts or tools.\n> > > >\n> > > > We could definitely keep (plan|exec)_time for the SRF, and have the\n> > > > {total, min, max, mean, stddev}_time created by the view to be a sum\n> > > > of planning + execution for each counter\n> > >\n> > > I might misunderstand but if we define {total, min, max, mean,\n> > > stddev}_time is just a sum of planning + execution for each counter\n> > > like \"select total_plan_time + total_exec_time as total_time from\n> > > pg_stat_statements\", I wonder we can calculate stddev_time correctly.\n> > > If we prepare variables in the codes to calculate those values, yes,\n> > > we can correctly calculate those values even for the total_stddev.\n> >\n> > Yes you're right, this can't possibly work for most of the counters.\n> > And also, since there's no guarantee that each execution will follow a planning, providing such global counters for\n> > 
min/max/mean and stddev wouldn't make much sense.\n>\n> Ah, I see. Planning counts and execution counts differ.\n> It might be difficult to redefine the meaning of {min, max, mean, stddev}_time precisely, and even if we can redefine it correctly, it would not be intuitive.\n\nThomas' automatic patch tester just warned me that the patchset is\nbroken since 3fd40b628c7db4, which removed the queryString from\nExecCreateTableAs. New patch version that re-adds the queryString, no\nchanges otherwise.",
"msg_date": "Sun, 5 Jan 2020 09:53:43 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Julien,\n\nI would like to create a link with\nhttps://www.postgresql.org/message-id/1577490124579-0.post@n3.nabble.com\n\nwhere we met an ASSERT FAILURE because query text was not initialized ...\n\nThe question raised is:\n\n- should query text be always provided \nor \n- if not how to deal that case (in pgss).\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 5 Jan 2020 08:10:55 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 4:11 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Hi Julien,\n>\n> I would like to create a link with\n> https://www.postgresql.org/message-id/1577490124579-0.post@n3.nabble.com\n>\n> where we met an ASSERT FAILURE because query text was not initialized ...\n>\n> The question raised is:\n>\n> - should query text be always provided\n> or\n> - if not how to deal that case (in pgss).\n\nI'd think that since the query text was until now always provided,\nthere's no reason why this patch should change that. That being said,\nthere have been other concerns raised wrt. temporary tables in the IVM\npatchset, so ISTM that there might be important architectural changes\nupcoming, so having to deal with this case in pgss is not rushed\n(especially since handling that in pgss would be trivial), and can\nhelp to catch issues with the query text parsing.\n\n\n",
"msg_date": "Sun, 5 Jan 2020 16:31:52 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Sun, Jan 5, 2020 at 4:11 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> Hi Julien,\n>>\n>> I would like to create a link with\n>> https://www.postgresql.org/message-id/\n\n> 1577490124579-0.post@.nabble\n\n>>\n>> where we met an ASSERT FAILURE because query text was not initialized ...\n>>\n>> The question raised is:\n>>\n>> - should query text be always provided\n>> or\n>> - if not how to deal that case (in pgss).\n> \n> I'd think that since the query text was until now always provided,\n> there's no reason why this patch should change that. That being said,\n> there have been other concerns raised wrt. temporary tables in the IVM\n> patchset, so ISTM that there might be important architectural changes\n> upcoming, so having to deal with this case in pgss is not rushed\n> (especially since handling that in pgss would be trivial), and can\n> help to catch issues with the query text parsing.\n\nIVM revealed that ASSERT,\nbut IVM works fine with pg_stat_statements.track_planning = off.\nThere may be other parts of postgresql that would have worked fine as\nwell.\n\nThis means 2 things:\n- there is a (little) risk to meet other assert failures when using planning\ncounters in pgss,\n- we have an easy workaround to fix it (disabling track_planning).\n\nBut I would have preferred this new feature to work the same way with or\nwithout track_planning activated ;o(\n",
"msg_date": "Sun, 5 Jan 2020 11:01:59 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 7:02 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Julien Rouhaud wrote\n> > On Sun, Jan 5, 2020 at 4:11 PM legrand legrand\n> > <\n>\n> > legrand_legrand@\n>\n> > > wrote:\n> >>\n> >> Hi Julien,\n> >>\n> >> I would like to create a link with\n> >> https://www.postgresql.org/message-id/\n>\n> > 1577490124579-0.post@.nabble\n>\n> >>\n> >> where we met an ASSET FAILURE because query text was not initialized ...\n> >>\n> >> The question raised is:\n> >>\n> >> - should query text be always provided\n> >> or\n> >> - if not how to deal that case (in pgss).\n> >\n> > I'd think that since the query text was until now always provided,\n> > there's no reason why this patch should change that. That being said,\n> > there has been other concerns raised wrt. temporary tables in the IVM\n> > patchset, so ISTM that there might be important architectural changes\n> > upcoming, so having to deal with this case in pgss is not rushed\n> > (especially since handling that in pgss would be trivial), and can\n> > help to catch issue with the query text pasing.\n>\n> IVM revealed that ASSERT,\n> but IVM works fine with pg_stat_statements.track_planning = off.\n\nYes, but on the other hand the current IVM patchset also adds the only\npg_plan_query call that don't provide a query text. 
I didn't see any\nother possibility, and if there are other cases they're unfortunately\nnot covered by the full regression tests.\n\n> There may be others parts of postgresql that would have workede fine as\n> well.\n>\n> This means 2 things:\n> - there is a (litle) risk to meet other assert failures when using planning\n> counters in pgss,\n> - we have an easy workarround to fix it (disabling track_planning).\n>\n> But I would have prefered this new feature to work the same way with or\n> without track_planning activated ;o(\n\nDefinitely, but fixing the issue in pgss (ignoring planner calls when\nwe don't have a query text) means that pgss won't give an exhaustive\nview of activity anymore, so a fix in IVM would be a better solution.\nLet's wait and see if Nagata-san and other people involved in that\nhave an opinion on it.\n\n\n",
"msg_date": "Sun, 5 Jan 2020 20:00:17 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Julien,\n\nbot is still unhappy\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/638701399\n\nportalcmds.c: In function ‘PerformCursorOpen’:\nportalcmds.c:93:7: error: ‘queryString’ may be used uninitialized in this\nfunction [-Werror=maybe-uninitialized]\n plan = pg_plan_query(query, queryString, cstmt->options, params);\n ^\nportalcmds.c:50:8: note: ‘queryString’ was declared here\n char *queryString;\n ^\ncc1: all warnings being treated as errors\n<builtin>: recipe for target 'portalcmds.o' failed\nmake[3]: *** [portalcmds.o] Error 1\nmake[3]: Leaving directory\n'/home/travis/build/postgresql-cfbot/postgresql/src/backend/commands'\ncommon.mk:39: recipe for target 'commands-recursive' failed\nmake[2]: *** [commands-recursive] Error 2\nmake[2]: *** Waiting for unfinished jobs....\n\nregards\nPAscal\n",
"msg_date": "Sat, 18 Jan 2020 10:14:44 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
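The failure reported above is gcc's -Wmaybe-uninitialized (promoted to an error by -Werror): `queryString` was assigned only on some branches of PerformCursorOpen before being passed to pg_plan_query. A minimal illustration of the pattern and of the usual fix — initializing at the declaration so every path is defined; the function below is a made-up stand-in, not the portalcmds.c code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical stand-in showing the shape of the bug: without the
 * "= NULL" initializer, queryString is assigned on only one branch,
 * and the later use trips -Wmaybe-uninitialized under -Werror.
 */
static size_t
plan_query_length(int have_source_text, const char *source_text)
{
    const char *queryString = NULL;     /* the fix: never indeterminate */

    if (have_source_text)
        queryString = source_text;

    /* downstream consumer, standing in for the pg_plan_query() call */
    return queryString ? strlen(queryString) : 0;
}
```
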
{
"msg_contents": "Hi,\n\nOn Sat, Jan 18, 2020 at 6:14 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Hi Julien,\n>\n> bot is still unhappy\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/638701399\n>\n> portalcmds.c: In function ‘PerformCursorOpen’:\n> portalcmds.c:93:7: error: ‘queryString’ may be used uninitialized in this\n> function [-Werror=maybe-uninitialized]\n> plan = pg_plan_query(query, queryString, cstmt->options, params);\n> ^\n> portalcmds.c:50:8: note: ‘queryString’ was declared here\n> char *queryString;\n> ^\n> cc1: all warnings being treated as errors\n> <builtin>: recipe for target 'portalcmds.o' failed\n> make[3]: *** [portalcmds.o] Error 1\n> make[3]: Leaving directory\n> '/home/travis/build/postgresql-cfbot/postgresql/src/backend/commands'\n> common.mk:39: recipe for target 'commands-recursive' failed\n> make[2]: *** [commands-recursive] Error 2\n> make[2]: *** Waiting for unfinished jobs....\n\nIndeed, thanks for the report! PFA rebased v4 version of the patchset.",
"msg_date": "Tue, 21 Jan 2020 11:21:20 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Julien,\n\n>> But I would have preferred this new feature to work the same way with or\n>> without track_planning activated ;o(\n\n> Definitely, but fixing the issue in pgss (ignoring planner calls when\n> we don't have a query text) means that pgss won't give an exhaustive\n> view of activity anymore, so a fix in IVM would be a better solution.\n> Let's wait and see if Nagata-san and other people involved in that\n> have an opinion on it.\n\nIt seems IVM team does not consider this point as a priority ... \nWe should not wait for them, if we want to keep a chance to be \nincluded in PG13.\n\nSo we have to make this feature more robust, an assert failure being \nconsidered as a severe regression (even if this is not coming from pgss).\n\nI like the idea of adding a check for a non-zero queryId in the new \npgss_planner_hook() (zero queryid shouldn't be reserved for\nutility_statements ?).\n\nFixing the corner case where a query (with no sql text) can be planned \nwithout being parsed is another subject that should be resolved in\nanother thread.\n\nThis kind of query was ignored in pgss, it should be ignored in pgss with \nplanning counters.\n\nAny thoughts ?\nRegards\nPAscal\n",
"msg_date": "Fri, 28 Feb 2020 08:06:35 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 28, 2020 at 4:06 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> It seems IVM team does not consider this point as a priority ...\n\nWell, IVM is a big project and I agree that fixing this issue isn't\nthe most urgent one, especially since there's no guarantee that this\npgss planning patch will be committed, or with the current behavior.\n\n> We should not wait for them, if we want to keep a chance to be\n> included in PG13.\n>\n> So we have to make this feature more robust, an assert failure being\n> considered as a severe regression (even if this is not coming from pgss).\n\nI'm still not convinced that handling a NULL query string, as in\nsometimes ignoring planning counters, is the right solution here. For\nnow all code is able to provide it (or at least all the code that goes\nthrough make installcheck). I'm wondering if it'd be better to\ninstead add a similar assert in pg_plan_query, to make sure that this\nrequirement is always met even without using pg_stat_statements, or\nany other extension that would also rely on that.\n\nI also realized that the last version of the patch I sent was a rebase\nof the wrong version, I'll send the correct version soon.\n\n> I like the idea of adding a check for a non-zero queryId in the new\n> pgss_planner_hook() (zero queryid shouldn't be reserved for\n> utility_statements ?).\n\nSome assert hits later, I can say that it's not always true. For\ninstance a CREATE TABLE AS won't run parse analysis for the underlying\nquery, as this has already been done for the original statement, but\nwill still call the planner. I'll change pgss_planner_hook to ignore\nsuch cases, as pgss_store would otherwise think that it's a utility\nstatement. That'll probably incidentally fix the IVM incompatibility.\n\n\n",
"msg_date": "Sun, 1 Mar 2020 15:05:10 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": ">> I like the idea of adding a check for a non-zero queryId in the new\n>> pgss_planner_hook() (zero queryid shouldn't be reserved for\n>> utility_statements ?).\n\n> Some assert hit later, I can say that it's not always true. For\n> instance a CREATE TABLE AS won't run parse analysis for the underlying\n> query, as this has already been done for the original statement, but\n> will still call the planner. I'll change pgss_planner_hook to ignore\n> such cases, as pgss_store would otherwise think that it's a utility\n> statement. That'll probably incidentally fix the IVM incompatibility. \n\nToday with or without test on parse->queryId != UINT64CONST(0),\nCTAS is collected as a utility_statement without planning counter.\nThis seems to me respecting the rule, not sure that this needs any \nnew (risky) change to the actual (quite stable) patch. \n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 1 Mar 2020 07:55:36 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Mar 1, 2020 at 3:55 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> >> I like the idea of adding a check for a non-zero queryId in the new\n> >> pgss_planner_hook() (zero queryid shouldn't be reserved for\n> >> utility_statements ?).\n>\n> > Some assert hit later, I can say that it's not always true. For\n> > instance a CREATE TABLE AS won't run parse analysis for the underlying\n> > query, as this has already been done for the original statement, but\n> > will still call the planner. I'll change pgss_planner_hook to ignore\n> > such cases, as pgss_store would otherwise think that it's a utility\n> > statement. That'll probably incidentally fix the IVM incompatibility.\n>\n> Today with or without test on parse->queryId != UINT64CONST(0),\n> CTAS is collected as a utility_statement without planning counter.\n> This seems to me respectig the rule, not sure that this needs any\n> new (risky) change to the actual (quite stable) patch.\n\nBut the queryid ends up not being computed the same way:\n\n# select queryid, query, plans, calls from pg_stat_statements where\nquery like 'create table%';\n queryid | query | plans | calls\n---------------------+--------------------------------+-------+-------\n 8275950546884151007 | create table test as select 1; | 1 | 0\n 7546197440584636081 | create table test as select 1 | 0 | 1\n(2 rows)\n\nThat's because CreateTableAsStmt->query doesn't have a query\nlocation/len, as transformTopLevelStmt is only setting that for the\ntop level Query. That's probably an oversight in ab1f0c82257, but I'm\nnot sure what's the best way to fix that. Should we pass that\ninformation to all transformXXX function, or let transformTopLevelStmt\nhandle that.\n\n\n",
"msg_date": "Mon, 2 Mar 2020 09:54:03 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Sun, Mar 1, 2020 at 3:55 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> >> I like the idea of adding a check for a non-zero queryId in the new\n>> >> pgss_planner_hook() (zero queryid shouldn't be reserved for\n>> >> utility_statements ?).\n>>\n>> > Some assert hit later, I can say that it's not always true. For\n>> > instance a CREATE TABLE AS won't run parse analysis for the underlying\n>> > query, as this has already been done for the original statement, but\n>> > will still call the planner. I'll change pgss_planner_hook to ignore\n>> > such cases, as pgss_store would otherwise think that it's a utility\n>> > statement. That'll probably incidentally fix the IVM incompatibility.\n>>\n>> Today with or without test on parse->queryId != UINT64CONST(0),\n>> CTAS is collected as a utility_statement without planning counter.\n>> This seems to me respectig the rule, not sure that this needs any\n>> new (risky) change to the actual (quite stable) patch.\n> \n> But the queryid ends up not being computed the same way:\n> \n> # select queryid, query, plans, calls from pg_stat_statements where\n> query like 'create table%';\n> queryid | query | plans | calls\n> ---------------------+--------------------------------+-------+-------\n> 8275950546884151007 | create table test as select 1; | 1 | 0\n> 7546197440584636081 | create table test as select 1 | 0 | 1\n> (2 rows)\n> \n> That's because CreateTableAsStmt->query doesn't have a query\n> location/len, as transformTopLevelStmt is only setting that for the\n> top level Query. That's probably an oversight in ab1f0c82257, but I'm\n> not sure what's the best way to fix that. Should we pass that\n> information to all transformXXX function, or let transformTopLevelStmt\n> handle that.\n\n\narf, this was not the case in my testing env (that is not up to date) :o(\nand would not have appeared at all with the proposed test on \nparse->queryId != UINT64CONST(0) ...\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 2 Mar 2020 05:01:16 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 1:01 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Julien Rouhaud wrote\n> > On Sun, Mar 1, 2020 at 3:55 PM legrand legrand\n> > <\n>\n> > legrand_legrand@\n>\n> > > wrote:\n> >>\n> >> >> I like the idea of adding a check for a non-zero queryId in the new\n> >> >> pgss_planner_hook() (zero queryid shouldn't be reserved for\n> >> >> utility_statements ?).\n> >>\n> >> > Some assert hit later, I can say that it's not always true. For\n> >> > instance a CREATE TABLE AS won't run parse analysis for the underlying\n> >> > query, as this has already been done for the original statement, but\n> >> > will still call the planner. I'll change pgss_planner_hook to ignore\n> >> > such cases, as pgss_store would otherwise think that it's a utility\n> >> > statement. That'll probably incidentally fix the IVM incompatibility.\n> >>\n> >> Today with or without test on parse->queryId != UINT64CONST(0),\n> >> CTAS is collected as a utility_statement without planning counter.\n> >> This seems to me respectig the rule, not sure that this needs any\n> >> new (risky) change to the actual (quite stable) patch.\n> >\n> > But the queryid ends up not being computed the same way:\n> >\n> > # select queryid, query, plans, calls from pg_stat_statements where\n> > query like 'create table%';\n> > queryid | query | plans | calls\n> > ---------------------+--------------------------------+-------+-------\n> > 8275950546884151007 | create table test as select 1; | 1 | 0\n> > 7546197440584636081 | create table test as select 1 | 0 | 1\n> > (2 rows)\n> >\n> > That's because CreateTableAsStmt->query doesn't have a query\n> > location/len, as transformTopLevelStmt is only setting that for the\n> > top level Query. That's probably an oversight in ab1f0c82257, but I'm\n> > not sure what's the best way to fix that. Should we pass that\n> > information to all transformXXX function, or let transformTopLevelStmt\n> > handle that.\n>\n>\n> arf, this was not the case in my testing env (that is not up to date) :o(\n> and would not have appeared at all with the proposed test on\n> parse->queryId != UINT64CONST(0) ...\n\nI'm not sure what was the exact behavior you had, but that shouldn't\nhave changed since previous version. The underlying query isn't a top\nlevel statement, so maybe you didn't set pg_stat_statements.track =\n'all'?\n\n\n",
"msg_date": "Mon, 2 Mar 2020 13:14:03 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Never mind ...\n\nPlease consider PG13 shortest path ;o)\n\nMy one is parse->queryId != UINT64CONST(0) in pgss_planner_hook().\nIt fixes IVM problem (verified), \nand keep CTAS equal to pgss without planning counters (verified too).\n\nRegards\nPAscal\n\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 5 Mar 2020 13:26:19 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 05, 2020 at 01:26:19PM -0700, legrand legrand wrote:\n> Please consider PG13 shortest path ;o)\n>\n> My one is parse->queryId != UINT64CONST(0) in pgss_planner_hook().\n> It fixes IVM problem (verified),\n> and keep CTAS equal to pgss without planning counters (verified too).\n\nI still disagree that hiding this problem is the right fix, but since no one\nobjected here's a v5 with that behavior. Hopefully this will be fixed in v14.",
"msg_date": "Mon, 9 Mar 2020 11:31:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Julien,\n\nOn Mon, Mar 9, 2020 at 10:32 AM, Julien Rouhaud wrote:\n> On Thu, Mar 05, 2020 at 01:26:19PM -0700, legrand legrand wrote:\n> > Please consider PG13 shortest path ;o)\n> >\n> > My one is parse->queryId != UINT64CONST(0) in pgss_planner_hook().\n> > It fixes IVM problem (verified),\n> > and keep CTAS equal to pgss without planning counters (verified too).\n> \n> I still disagree that hiding this problem is the right fix, but since no one\n> objected here's a v5 with that behavior. Hopefully this will be fixed in v14.\n\nIs there any case that query_text will be NULL when executing pg_plan_query?\nIf query_text will be NULL, we need to add codes to avoid errors in\npgss_store like as current patch. If query_text will not be NULL, we should\nadd Assert in pg_plan_query so that other developers can notice that they\nwould not mistakenly set query_text as NULL even without using pgss_planning\ncounter.\n\n--\nYoshikazu Imai\n\n\n\n",
"msg_date": "Thu, 12 Mar 2020 05:28:38 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hi Imai-san,\n\nOn Thu, Mar 12, 2020 at 05:28:38AM +0000, imai.yoshikazu@fujitsu.com wrote:\n> Hi Julien,\n>\n> On Mon, Mar 9, 2020 at 10:32 AM, Julien Rouhaud wrote:\n> > On Thu, Mar 05, 2020 at 01:26:19PM -0700, legrand legrand wrote:\n> > > Please consider PG13 shortest path ;o)\n> > >\n> > > My one is parse->queryId != UINT64CONST(0) in pgss_planner_hook().\n> > > It fixes IVM problem (verified),\n> > > and keep CTAS equal to pgss without planning counters (verified too).\n> >\n> > I still disagree that hiding this problem is the right fix, but since no one\n> > objected here's a v5 with that behavior. Hopefully this will be fixed in v14.\n>\n> Is there any case that query_text will be NULL when executing pg_plan_query?\n\nWith current sources, there are no cases where the query text isn't provided\nAFAICS.\n\n> If query_text will be NULL, we need to add codes to avoid errors in\n> pgss_store like as current patch. If query_text will not be NULL, we should\n> add Assert in pg_plan_query so that other developers can notice that they\n> would not mistakenly set query_text as NULL even without using pgss_planning\n> counter.\n\nI totally agree. I already had such assert locally, and regression tests pass\nwithout any problem. I'm attaching a v6 with that extra assert. If the\nfirst patch is committed, it'll now be a requirement to provide it. Or if\npeople think it's not, it'll make sure that we'll discuss it.",
"msg_date": "Thu, 12 Mar 2020 07:31:09 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 6:31 AM, Julien Rouhaud wrote:\n> On Thu, Mar 12, 2020 at 05:28:38AM +0000, imai.yoshikazu@fujitsu.com wrote:\n> > Hi Julien,\n> >\n> > On Mon, Mar 9, 2020 at 10:32 AM, Julien Rouhaud wrote:\n> > > On Thu, Mar 05, 2020 at 01:26:19PM -0700, legrand legrand wrote:\n> > > > Please consider PG13 shortest path ;o)\n> > > >\n> > > > My one is parse->queryId != UINT64CONST(0) in pgss_planner_hook().\n> > > > It fixes IVM problem (verified),\n> > > > and keep CTAS equal to pgss without planning counters (verified too).\n> > >\n> > > I still disagree that hiding this problem is the right fix, but since no one\n> > > objected here's a v5 with that behavior. Hopefully this will be fixed in v14.\n> >\n> > Is there any case that query_text will be NULL when executing pg_plan_query?\n> \n> With current sources, there are no cases where the query text isn't provided\n> AFAICS.\n> \n> > If query_text will be NULL, we need to add codes to avoid errors in\n> > pgss_store like as current patch. If query_text will not be NULL, we should\n> > add Assert in pg_plan_query so that other developers can notice that they\n> > would not mistakenly set query_text as NULL even without using pgss_planning\n> > counter.\n> \n> I totally agree. I already had such assert locally, and regression tests pass\n> without any problem. I'm attaching a v6 with that extra assert. If the\n> first patch is committed, it'll now be a requirement to provide it. Or if\n> people think it's not, it'll make sure that we'll discuss it.\n\nI see. I also don't come up with any case of query_text is NULL. Now we need\nother people's opinion about here.\n\n\nI'll summary code review of this thread.\n\n[Performance]\n\nIf track_planning is not enabled, performance will drop 0.2-0.6% which can be\nignored. If track_planning is enabled, performance will drop 0-2.2%. 2.2% is a\nbit large but I think it is still acceptable because people using this feature\nmight take account that some overhead will happen for additional calling of a\ngettime function.\n\nhttps://www.postgresql.org/message-id/CY4PR20MB12227E5CE199FFBB90C68A13BCB40%40CY4PR20MB1222.namprd20.prod.outlook.com\n\n[Values in each row]\n\n* Rows for planner time are added as {total/min/max/mean/stddev}_plan_time. \n\n These are enough statistics for users who want to investigate the\n planning time.\n\n* Rows for executor time are changed from {total/min/max/mean/stddev}_time\nto {total/min/max/mean/stddev}_exec_time.\n\n Because of changing the name of the rows, there's no backward compatibility.\n Thus some users needs to modify scripts which using previous version of the\n pg_stat_statements. I believe it is not expensive to rewrite scripts along\n this change and it would be better to give an appropriate name to a row\n for future users.\n I also haven't seen big opposition about losing backward compatibility so\n far.\n\n* We don't provide {total/min/max/mean/stddev}_time.\n\n Users can calculate total_time as total_plan_time + total_exec_time on their\n own. About {min/max/mean/stddev}_time, it will not make much sense\n because it is not ensured that executor follows planner and each counter\n value will be different largely between planner and executor.\n\n* bufusage still only counts the buffer usage during executor.\n\n Now we have the ability to count the buffer usage during planner but we keep\n the bufusage count the buffer usage during executor for now.\n\n[Coding]\n\n* We add Assert in pg_plan_query so that query_text will not be NULL when\nexecuting planner.\n\n There's no case query_text will be NULL with current sources. It is not\n ensured there will be any case query_text will be possibly NULL in the\n future though. Some considerations are needed by other people about this.\n\n\nI don't have any other comments for now. After looking patches over again and\nif there are no other comments about this patch, I'll set this patch as ready\nfor committer for getting more opinion.\n\n--\nYoshikazu Imai\n\n\n\n",
"msg_date": "Thu, 12 Mar 2020 09:19:37 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 10:19 AM imai.yoshikazu@fujitsu.com\n<imai.yoshikazu@fujitsu.com> wrote:\n>\n> I'll summary code review of this thread.\n\nThanks for the summary! I just have some minor comments\n\n> [Performance]\n>\n> If track_planning is not enabled, performance will drop 0.2-0.6% which can be\n> ignored. If track_planning is enabled, performance will drop 0-2.2%. 2.2% is a\n> bit large but I think it is still acceptable because people using this feature\n> might take account that some overhead will happen for additional calling of a\n> gettime function.\n>\n> https://www.postgresql.org/message-id/CY4PR20MB12227E5CE199FFBB90C68A13BCB40%40CY4PR20MB1222.namprd20.prod.outlook.com\n>\n> [Values in each row]\n>\n> * bufusage still only counts the buffer usage during executor.\n>\n> Now we have the ability to count the buffer usage during planner but we keep\n> the bufusage count the buffer usage during executor for now.\n\nThe bufusage should reflect the sum of planning and execution usage if\ntrack_planning is enabled. Did I miss something there?\n\n> [Coding]\n>\n> * We add Assert in pg_plan_query so that query_text will not be NULL when\n> executing planner.\n>\n> There's no case query_text will be NULL with current sources. It is not\n> ensured there will be any case query_text will be possibly NULL in the\n> future though. Some considerations are needed by other people about this.\n\nThere's at least the current version of IVM patchset that lacks the\nquerytext. Looking at various extensions, I see that pg_background\nand pglogical call pg_plan_query internally but shouldn't have any\nissue providing the query text. But there's also citus extension,\nwhich don't keep around the query string at least when distributing\nplans, which makes sense since it's of no use and they're heavily\nmodifying the original Query. I think that citus folks opinion on the\nsubject would be useful, so I'm Cc-ing Marco.\n\n\n",
"msg_date": "Thu, 12 Mar 2020 11:31:15 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> There's at least the current version of IVM patchset that lacks the\n> querytext. Looking at various extensions, I see that pg_background\n> and pglogical call pg_plan_query internally but shouldn't have any\n> issue providing the query text. But there's also citus extension,\n> which don't keep around the query string at least when distributing\n> plans, which makes sense since it's of no use and they're heavily\n> modifying the original Query. I think that citus folks opinion on the\n> subject would be useful, so I'm Cc-ing Marco.\n\nMost of the time we keep our Query * data structures in a form that\ncan be deparsed back into a query string by a modified copy of\nruleutils.c, so we could generate a correct query string if absolutely\nnecessary, though there are performance-sensitive cases where we'd\nrather not have the deparsing overhead.\n\nIn case of INSERT..SELECT into a distributed table, we might call\npg_plan_query on the SELECT part (Query *) and send the output into a\nDestReceiver that sends tuples into shards of the distributed table\nvia COPY. The fact that SELECT does not show up in pg_stat_statements\nseparately is generally fine because it's an implementation detail,\nand it would probably be a little confusing because the user never ran\nthe SELECT query. Moreover, the call to pg_plan_query would already be\nreflected in the planning or execution time of the top-level query, so\nit would be double counted if it had its own entry.\n\nAnother case is when some of the shards turn out to be local to the\nserver that handles the distributed query. In that case we plan the\nqueries on those shards via pg_plan_query instead of deparsing and\nsending the query string to a remote server. It would be less\nconfusing for these queries to show in pg_stat_statements, because the\nqueries on the shards on remote servers will show up as well. However,\nthis is a performance-sensitive code path where we'd rather avoid\ndeparsing.\n\nIn general, I'd prefer if there was no requirement to pass a correct\nquery string. I'm ok with passing \"SELECT 'citus_internal'\" or just \"\"\nif that does not lead to issues. Passing NULL to signal that the\nplanner call should not be tracked separately does seem a bit cleaner.\n\nMarco\n\n\n",
"msg_date": "Thu, 12 Mar 2020 13:11:22 +0100",
"msg_from": "Marco Slot <marco.slot@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 1:11 PM Marco Slot <marco.slot@gmail.com> wrote:\n>\n> On Thu, Mar 12, 2020 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > There's at least the current version of IVM patchset that lacks the\n> > querytext. Looking at various extensions, I see that pg_background\n> > and pglogical call pg_plan_query internally but shouldn't have any\n> > issue providing the query text. But there's also citus extension,\n> > which don't keep around the query string at least when distributing\n> > plans, which makes sense since it's of no use and they're heavily\n> > modifying the original Query. I think that citus folks opinion on the\n> > subject would be useful, so I'm Cc-ing Marco.\n>\n> Most of the time we keep our Query * data structures in a form that\n> can be deparsed back into a query string by a modified copy of\n> ruleutils.c, so we could generate a correct query string if absolutely\n> necessary, though there are performance-sensitive cases where we'd\n> rather not have the deparsing overhead.\n\nYes, deparsing is probably too expensive for that use case.\n\n> In case of INSERT..SELECT into a distributed table, we might call\n> pg_plan_query on the SELECT part (Query *) and send the output into a\n> DestReceiver that sends tuples into shards of the distributed table\n> via COPY. The fact that SELECT does not show up in pg_stat_statements\n> separately is generally fine because it's an implementation detail,\n> and it would probably be a little confusing because the user never ran\n> the SELECT query. Moreover, the call to pg_plan_query would already be\n> reflected in the planning or execution time of the top-level query, so\n> it would be double counted if it had its own entry.\n\nWell, surprising statements can already appears in pg_stat_statements\nwhen you use some psql features, or if you have triggers as those will\nrun additional queries under the hood.\n\nThe difference here is that since citus is a CustomNode, underlying\ncalls to planner will be accounted for that node, and that's indeed\nannoying. I can see that citus is doing some calls to spi_exec or\nExecutor* (in ExecuteLocalTaskPlan), which could also trigger\npg_stat_statements, but I don't know if a queryid is present there.\n\n> Another case is when some of the shards turn out to be local to the\n> server that handles the distributed query. In that case we plan the\n> queries on those shards via pg_plan_query instead of deparsing and\n> sending the query string to a remote server. It would be less\n> confusing for these queries to show in pg_stat_statements, because the\n> queries on the shards on remote servers will show up as well. However,\n> this is a performance-sensitive code path where we'd rather avoid\n> deparsing.\n\nAgreed.\n\n> In general, I'd prefer if there was no requirement to pass a correct\n> query string. I'm ok with passing \"SELECT 'citus_internal'\" or just \"\"\n> if that does not lead to issues. Passing NULL to signal that the\n> planner call should not be tracked separately does seem a bit cleaner.\n\nThat's very interesting feedback, thanks! I'm not a fan of giving a\nway for queries to say that they want to be ignored by\npg_stat_statements, but double counting the planning counters seem\neven worse, so I'm +0.5 to accept NULL query string in the planner,\nincidentally making pgss ignore those.\n\n\n",
"msg_date": "Thu, 12 Mar 2020 19:36:33 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 10:31 AM, Julien Rouhaud wrote:\r\n> > * bufusage still only counts the buffer usage during executor.\r\n> >\r\n> > Now we have the ability to count the buffer usage during planner but we\r\n> keep\r\n> > the bufusage count the buffer usage during executor for now.\r\n> \r\n> The bufusage should reflect the sum of planning and execution usage if\r\n> track_planning is enabled. Did I miss something there?\r\n\r\nAh, you're right. I somehow misunderstood it. Sorry for the annoyance.\r\n\r\n> > * We add Assert in pg_plan_query so that query_text will not be NULL\r\n> > when executing planner.\r\n> >\r\n> > There's no case query_text will be NULL with current sources. It is not\r\n> > ensured there will be any case query_text will be possibly NULL in the\r\n> > future though. Some considerations are needed by other people about\r\n> this.\r\n> \r\n> There's at least the current version of IVM patchset that lacks the querytext.\r\n\r\nI saw IVM patchset but I thought it is difficult to impose them to give appropriate\r\nquerytext.\r\n\r\n\r\n> Looking at various extensions, I see that pg_background and pglogical call\r\n> pg_plan_query internally but shouldn't have any issue providing the query text.\r\n> But there's also citus extension, which don't keep around the query string at\r\n> least when distributing plans, which makes sense since it's of no use and\r\n> they're heavily modifying the original Query. I think that citus folks opinion on\r\n> the subject would be useful, so I'm Cc-ing Marco.\r\n\r\nThank you for looking those codes. I will comment about this in another mail.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Fri, 13 Mar 2020 06:35:48 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 6:37 PM, Julien Rouhaud wrote:\r\n> On Thu, Mar 12, 2020 at 1:11 PM Marco Slot <marco.slot@gmail.com> wrote:\r\n> > On Thu, Mar 12, 2020 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com>\r\n> wrote:\r\n> > > There's at least the current version of IVM patchset that lacks the\r\n> > > querytext. Looking at various extensions, I see that pg_background\r\n> > > and pglogical call pg_plan_query internally but shouldn't have any\r\n> > > issue providing the query text. But there's also citus extension,\r\n> > > which don't keep around the query string at least when distributing\r\n> > > plans, which makes sense since it's of no use and they're heavily\r\n> > > modifying the original Query. I think that citus folks opinion on\r\n> > > the subject would be useful, so I'm Cc-ing Marco.\r\n> >\r\n> > Most of the time we keep our Query * data structures in a form that\r\n> > can be deparsed back into a query string by a modified copy of\r\n> > ruleutils.c, so we could generate a correct query string if absolutely\r\n> > necessary, though there are performance-sensitive cases where we'd\r\n> > rather not have the deparsing overhead.\r\n> \r\n> Yes, deparsing is probably too expensive for that use case.\r\n> \r\n> > In case of INSERT..SELECT into a distributed table, we might call\r\n> > pg_plan_query on the SELECT part (Query *) and send the output into a\r\n> > DestReceiver that sends tuples into shards of the distributed table\r\n> > via COPY. The fact that SELECT does not show up in pg_stat_statements\r\n> > separately is generally fine because it's an implementation detail,\r\n> > and it would probably be a little confusing because the user never ran\r\n> > the SELECT query. Moreover, the call to pg_plan_query would already be\r\n> > reflected in the planning or execution time of the top-level query, so\r\n> > it would be double counted if it had its own entry.\r\n> \r\n> Well, surprising statements can already appears in pg_stat_statements when\r\n> you use some psql features, or if you have triggers as those will run additional\r\n> queries under the hood.\r\n> \r\n> The difference here is that since citus is a CustomNode, underlying calls to\r\n> planner will be accounted for that node, and that's indeed annoying. I can see\r\n> that citus is doing some calls to spi_exec or\r\n> Executor* (in ExecuteLocalTaskPlan), which could also trigger\r\n> pg_stat_statements, but I don't know if a queryid is present there.\r\n> \r\n> > Another case is when some of the shards turn out to be local to the\r\n> > server that handles the distributed query. In that case we plan the\r\n> > queries on those shards via pg_plan_query instead of deparsing and\r\n> > sending the query string to a remote server. It would be less\r\n> > confusing for these queries to show in pg_stat_statements, because the\r\n> > queries on the shards on remote servers will show up as well. However,\r\n> > this is a performance-sensitive code path where we'd rather avoid\r\n> > deparsing.\r\n> \r\n> Agreed.\r\n> \r\n> > In general, I'd prefer if there was no requirement to pass a correct\r\n> > query string. I'm ok with passing \"SELECT 'citus_internal'\" or just \"\"\r\n> > if that does not lead to issues. Passing NULL to signal that the\r\n> > planner call should not be tracked separately does seem a bit cleaner.\r\n> \r\n> That's very interesting feedback, thanks! I'm not a fan of giving a way for\r\n> queries to say that they want to be ignored by pg_stat_statements, but double\r\n> counting the planning counters seem even worse, so I'm +0.5 to accept NULL\r\n> query string in the planner, incidentally making pgss ignore those.\r\n\r\nIt is preferable that we can count various queries statistics as much as possible\r\nbut if it causes overhead even when without using pg_stat_statements, we would\r\nnot have to force them to set appropriate query_text.\r\nAbout settings a fixed string in query_text, I think it doesn't make much sense\r\nbecause users can't take any actions by seeing those queries' stats. Moreover, if\r\nwe set a fixed string in query_text to avoid pg_stat_statement's errors, codes\r\nwould be inexplicable for other developers who don't know there's such\r\nrequirements.\r\nAfter all, I agree accepting NULL query string in the planner.\r\n\r\nI don't know it is useful but there are also codes that avoid an error when\r\nsourceText is NULL.\r\n\r\nexecutor_errposition(EState *estate, int location)\r\n{\r\n ...\r\n /* Can't do anything if source text is not available */\r\n if (estate == NULL || estate->es_sourceText == NULL)\r\n}\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Fri, 13 Mar 2020 06:54:28 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "imai.yoshikazu@fujitsu.com wrote\n> On Thu, Mar 12, 2020 at 6:37 PM, Julien Rouhaud wrote:\n>> On Thu, Mar 12, 2020 at 1:11 PM Marco Slot <\n\n> marco.slot@\n\n> > wrote:\n>> > On Thu, Mar 12, 2020 at 11:31 AM Julien Rouhaud <\n\n> rjuju123@\n\n> >\n>> wrote:\n>> > > There's at least the current version of IVM patchset that lacks the\n>> > > querytext. Looking at various extensions, I see that pg_background\n>> > > and pglogical call pg_plan_query internally but shouldn't have any\n>> > > issue providing the query text. But there's also citus extension,\n>> > > which don't keep around the query string at least when distributing\n>> > > plans, which makes sense since it's of no use and they're heavily\n>> > > modifying the original Query. I think that citus folks opinion on\n>> > > the subject would be useful, so I'm Cc-ing Marco.\n>> >\n>> > Most of the time we keep our Query * data structures in a form that\n>> > can be deparsed back into a query string by a modified copy of\n>> > ruleutils.c, so we could generate a correct query string if absolutely\n>> > necessary, though there are performance-sensitive cases where we'd\n>> > rather not have the deparsing overhead.\n>> \n>> Yes, deparsing is probably too expensive for that use case.\n>> \n>> > In case of INSERT..SELECT into a distributed table, we might call\n>> > pg_plan_query on the SELECT part (Query *) and send the output into a\n>> > DestReceiver that sends tuples into shards of the distributed table\n>> > via COPY. The fact that SELECT does not show up in pg_stat_statements\n>> > separately is generally fine because it's an implementation detail,\n>> > and it would probably be a little confusing because the user never ran\n>> > the SELECT query. Moreover, the call to pg_plan_query would already be\n>> > reflected in the planning or execution time of the top-level query, so\n>> > it would be double counted if it had its own entry.\n>> \n>> Well, surprising statements can already appears in pg_stat_statements\n>> when\n>> you use some psql features, or if you have triggers as those will run\n>> additional\n>> queries under the hood.\n>> \n>> The difference here is that since citus is a CustomNode, underlying calls\n>> to\n>> planner will be accounted for that node, and that's indeed annoying. I\n>> can see\n>> that citus is doing some calls to spi_exec or\n>> Executor* (in ExecuteLocalTaskPlan), which could also trigger\n>> pg_stat_statements, but I don't know if a queryid is present there.\n>> \n>> > Another case is when some of the shards turn out to be local to the\n>> > server that handles the distributed query. In that case we plan the\n>> > queries on those shards via pg_plan_query instead of deparsing and\n>> > sending the query string to a remote server. It would be less\n>> > confusing for these queries to show in pg_stat_statements, because the\n>> > queries on the shards on remote servers will show up as well. However,\n>> > this is a performance-sensitive code path where we'd rather avoid\n>> > deparsing.\n>> \n>> Agreed.\n>> \n>> > In general, I'd prefer if there was no requirement to pass a correct\n>> > query string. I'm ok with passing \"SELECT 'citus_internal'\" or just \"\"\n>> > if that does not lead to issues. Passing NULL to signal that the\n>> > planner call should not be tracked separately does seem a bit cleaner.\n>> \n>> That's very interesting feedback, thanks! I'm not a fan of giving a way\n>> for\n>> queries to say that they want to be ignored by pg_stat_statements, but\n>> double\n>> counting the planning counters seem even worse, so I'm +0.5 to accept\n>> NULL\n>> query string in the planner, incidentally making pgss ignore those.\n> \n> It is preferable that we can count various queries statistics as much as\n> possible\n> but if it causes overhead even when without using pg_stat_statements, we\n> would\n> not have to force them to set appropriate query_text.\n> About settings a fixed string in query_text, I think it doesn't make much\n> sense\n> because users can't take any actions by seeing those queries' stats.\n> Moreover, if\n> we set a fixed string in query_text to avoid pg_stat_statement's errors,\n> codes\n> would be inexplicable for other developers who don't know there's such\n> requirements.\n> After all, I agree accepting NULL query string in the planner.\n> \n> I don't know it is useful but there are also codes that avoid an error\n> when\n> sourceText is NULL.\n> \n> executor_errposition(EState *estate, int location)\n> {\n> ...\n> /* Can't do anything if source text is not available */\n> if (estate == NULL || estate->es_sourceText == NULL)\n> }\n> \n> --\n> Yoshikazu Imai\n\nHello Imai,\n\nMy understanding of the V5 patch, which checks for a non-zero queryid,\nis that it properly manages the case where sourceText is NULL.\n\nA NULL sourceText means that there was no Parsing for the associated \nquery; if there was no Parsing, there is no queryid (queryId=0), \nand no planning counters update.\n\nIt doesn't change pg_plan_query behaviour (no regression for Citus, IVM,\n...),\nand was tested with success for IVM.\n\nIf my understanding is wrong, then setting track_planning = false\nwould still be the workaround for the very rare (non-core) extension(s) \nthat may hit the NULL query text assertion failure.\n\nWhat do you think about this ? \nWould this make the V5 patch ready for committers ?\n\nThanks in advance.\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 14 Mar 2020 03:04:00 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "> I don't know it is useful but there are also codes that avoid an error when\n> sourceText is NULL.\n\n> executor_errposition(EState *estate, int location)\n> {\n> ...\n> /* Can't do anything if source text is not available */\n> if (estate == NULL || estate->es_sourceText == NULL)\n> }\n\n\nor maybe would you prefer to replace the Non-Zero queryid test \nby Non-NULL sourcetext one ?\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 14 Mar 2020 03:39:23 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sat, Mar 14, 2020 at 03:04:00AM -0700, legrand legrand wrote:\n> imai.yoshikazu@fujitsu.com wrote\n> > On Thu, Mar 12, 2020 at 6:37 PM, Julien Rouhaud wrote:\n> >> That's very interesting feedback, thanks! I'm not a fan of giving a way\n> >> for\n> >> queries to say that they want to be ignored by pg_stat_statements, but\n> >> double\n> >> counting the planning counters seem even worse, so I'm +0.5 to accept\n> >> NULL\n> >> query string in the planner, incidentally making pgss ignore those.\n> >\n> > It is preferable that we can count various queries statistics as much as\n> > possible\n> > but if it causes overhead even when without using pg_stat_statements, we\n> > would\n> > not have to force them to set appropriate query_text.\n> > About settings a fixed string in query_text, I think it doesn't make much\n> > sense\n> > because users can't take any actions by seeing those queries' stats.\n> > Moreover, if\n> > we set a fixed string in query_text to avoid pg_stat_statement's errors,\n> > codes\n> > would be inexplicable for other developers who don't know there's such\n> > requirements.\n> > After all, I agree accepting NULL query string in the planner.\n> >\n> > I don't know it is useful but there are also codes that avoid an error\n> > when\n> > sourceText is NULL.\n> >\n> > executor_errposition(EState *estate, int location)\n> > {\n> > ...\n> > /* Can't do anything if source text is not available */\n> > if (estate == NULL || estate->es_sourceText == NULL)\n\n\nI'm wondering if that's really possible. But pgss uses the QueryDesc, which\nshould always have a query text (since pgss relies on that).\n\n\n> My understanding of V5 patch, that checks for Non-Zero queryid,\n> manage properly case where sourceText is NULL.\n>\n> A NULL sourceText means that there was no Parsing for the associated\n> query, if there was no Parsing, there is no queryid (queryId=0),\n> and no planning counters update.\n>\n> It doesn't change pg_plan_query behaviour (no regression for Citus, IVM,\n> ...),\n> and was tested with success for IVM.\n>\n> If my understanding is wrong, then setting track_planning = false\n> would still be the work arround for the very rare (no core) extension(s)\n> that may hit the NULL query text assertion failure.\n>\n> What do you think about this ?\n\n\nI don't think that's a correct assumption. I obviously didn't read all of\ncitus extension, but it looks like what's happening is that they generate a\ncustom Query from the original one, with all the modification needed for\ndistributed execution and whatnot, which is then fed to the planner. I think\nit's entirely possible that the modified Query inherits a previously set\nqueryid, while still not really having a query text. And if citus doesn't do\nthat, it doesn't seem like an illegal use case anyway.\n\nI'm instead attaching a v7 which removes the assert in pg_plan_query, and\nmodifies pgss_planner_hook to also ignore queries without a query text, as this\nseems the best option.",
"msg_date": "Sat, 14 Mar 2020 18:27:33 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sat, Mar 14, 2020 at 5:28 PM, Julien Rouhaud wrote:\n> On Sat, Mar 14, 2020 at 03:04:00AM -0700, legrand legrand wrote:\n> > imai.yoshikazu@fujitsu.com wrote\n> > > On Thu, Mar 12, 2020 at 6:37 PM, Julien Rouhaud wrote:\n> > >> That's very interesting feedback, thanks! I'm not a fan of giving a way\n> > >> for\n> > >> queries to say that they want to be ignored by pg_stat_statements, but\n> > >> double\n> > >> counting the planning counters seem even worse, so I'm +0.5 to accept\n> > >> NULL\n> > >> query string in the planner, incidentally making pgss ignore those.\n> > >\n> > > It is preferable that we can count various queries statistics as much as\n> > > possible\n> > > but if it causes overhead even when without using pg_stat_statements, we\n> > > would\n> > > not have to force them to set appropriate query_text.\n> > > About settings a fixed string in query_text, I think it doesn't make much\n> > > sense\n> > > because users can't take any actions by seeing those queries' stats.\n> > > Moreover, if\n> > > we set a fixed string in query_text to avoid pg_stat_statement's errors,\n> > > codes\n> > > would be inexplicable for other developers who don't know there's such\n> > > requirements.\n> > > After all, I agree accepting NULL query string in the planner.\n> > >\n> > > I don't know it is useful but there are also codes that avoid an error\n> > > when\n> > > sourceText is NULL.\n> > >\n> > > executor_errposition(EState *estate, int location)\n> > > {\n> > > ...\n> > > /* Can't do anything if source text is not available */\n> > > if (estate == NULL || estate->es_sourceText == NULL)\n> \n> \n> I'm wondering if that's really possible. But pgss uses the QueryDesc, which\n> should always have a query text (since pgss relies on that).\n\nI cited that code because I just wanted to say there's already an assumption\nthat the query text in QueryDesc can be NULL, whether or not it is true.\n\n\n> > My understanding of V5 patch, that checks for Non-Zero queryid,\n> > manage properly case where sourceText is NULL.\n> >\n> > A NULL sourceText means that there was no Parsing for the associated\n> > query, if there was no Parsing, there is no queryid (queryId=0),\n> > and no planning counters update.\n> >\n> > It doesn't change pg_plan_query behaviour (no regression for Citus, IVM,\n> > ...),\n> > and was tested with success for IVM.\n> >\n> > If my understanding is wrong, then setting track_planning = false\n> > would still be the work arround for the very rare (no core) extension(s)\n> > that may hit the NULL query text assertion failure.\n> >\n> > What do you think about this ?\n> \n> \n> I don't think that's a correct assumption. I obviously didn't read all of\n> citus extension, but it looks like what's happening is that they get generate a\n> custom Query from the original one, with all the modification needed for\n> distributed execution and whatnot, which is then fed to the planner. I think\n> it's entirely mossible that the modified Query herits from a previously set\n> queryid, while still not really having a query text. And if citus doesn't do\n> that, it doesn't seem like an illegal use cuse anyway.\n\nIndeed. It can happen that queryid has some value while query_text is NULL.\n\n\n> I'm instead attaching a v7 which removes the assert in pg_plan_query, and\n> modify pgss_planner_hook to also ignore queries without a query text, as this\n> seems the best option.\n\nThank you.\nIt also seems to me that this is the best option.\n\nBTW, I rechecked the patchset.\nI think the code is ready for committer, but should we modify the documentation?\n{min,max,mean,stddev}_time is now obsolete, so it is better to modify it to\n{min,max,mean,stddev}_exec_time and add {min,max,mean,stddev}_plan_time.\n\n\n--\nYoshikazu Imai\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 01:34:11 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "> I'm instead attaching a v7 which removes the assert in pg_plan_query, and\n> modify pgss_planner_hook to also ignore queries without a query text, as\n> this\n> seems the best option. \n\nOk, it was the second solution, go on !\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 16 Mar 2020 12:44:14 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 01:34:11AM +0000, imai.yoshikazu@fujitsu.com wrote:\n> On Sat, Mar 14, 2020 at 5:28 PM, Julien Rouhaud wrote:\n> > I don't think that's a correct assumption. I obviously didn't read all of\n> > citus extension, but it looks like what's happening is that they get generate a\n> > custom Query from the original one, with all the modification needed for\n> > distributed execution and whatnot, which is then fed to the planner. I think\n> > it's entirely mossible that the modified Query herits from a previously set\n> > queryid, while still not really having a query text. And if citus doesn't do\n> > that, it doesn't seem like an illegal use cuse anyway.\n>\n> Indeed. It can happen that queryid has some value while query_text is NULL.\n>\n>\n> > I'm instead attaching a v7 which removes the assert in pg_plan_query, and\n> > modify pgss_planner_hook to also ignore queries without a query text, as this\n> > seems the best option.\n>\n> Thank you.\n> It also seems to me that is the best option.\n\n\nThanks Imai-san and PAscal for the feedback, it seems that we have an\nagreement!\n\n\n> BTW, I recheck the patchset.\n> I think codes are ready for committer but should we modify the documentation?\n> {min,max,mean,stddev}_time is now obsoleted so it is better to modify it to\n> {min,max,mean,stddev}_exec_time and add {min,max,mean,stddev}_plan_time.\n\n\nOh indeed, I totally forgot about this. I'm attaching v8 with updated\ndocumentation that should match what was implemented since some versions.",
"msg_date": "Mon, 16 Mar 2020 22:49:12 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 9:49 PM, Julien Rouhaud wrote:\n> On Mon, Mar 16, 2020 at 01:34:11AM +0000, imai.yoshikazu@fujitsu.com\n> wrote:\n> > On Sat, Mar 14, 2020 at 5:28 PM, Julien Rouhaud wrote:\n> > > I don't think that's a correct assumption. I obviously didn't read\n> > > all of citus extension, but it looks like what's happening is that\n> > > they get generate a custom Query from the original one, with all the\n> > > modification needed for distributed execution and whatnot, which is\n> > > then fed to the planner. I think it's entirely mossible that the\n> > > modified Query herits from a previously set queryid, while still not\n> > > really having a query text. And if citus doesn't do that, it doesn't seem like\n> an illegal use cuse anyway.\n> >\n> > Indeed. It can happen that queryid has some value while query_text is NULL.\n> >\n> >\n> > > I'm instead attaching a v7 which removes the assert in\n> > > pg_plan_query, and modify pgss_planner_hook to also ignore queries\n> > > without a query text, as this seems the best option.\n> >\n> > Thank you.\n> > It also seems to me that is the best option.\n> \n> \n> Thanks Imai-san and PAscal for the feedback, it seems that we have an\n> agreement!\n> \n> \n> > BTW, I recheck the patchset.\n> > I think codes are ready for committer but should we modify the\n> documentation?\n> > {min,max,mean,stddev}_time is now obsoleted so it is better to modify\n> > it to {min,max,mean,stddev}_exec_time and add\n> {min,max,mean,stddev}_plan_time.\n> \n> \n> Oh indeed, I totally forgot about this. I'm attaching v8 with updated\n> documentation that should match what was implemented since some\n> versions.\n\nOkay, I checked it.\nSo I'll mark this as a ready for committer.\n\nThanks\n--\nYoshikazu Imai\n\n\n",
"msg_date": "Tue, 17 Mar 2020 00:07:30 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello\n\nI was inactive for a while... Sorry.\n\n>> BTW, I recheck the patchset.\n>> I think codes are ready for committer but should we modify the documentation?\n>> {min,max,mean,stddev}_time is now obsoleted so it is better to modify it to\n>> {min,max,mean,stddev}_exec_time and add {min,max,mean,stddev}_plan_time.\n>\n> Oh indeed, I totally forgot about this. I'm attaching v8 with updated\n> documentation that should match what was implemented since some versions.\n\nYet another is missed in docs: total_time\n\nI specifically verified that the new loaded library works with the old version of the extension in the database. I have not noticed issues here.\n\n> 2.2% is a bit large but I think it is still acceptable because people using this feature\n> might take account that some overhead will happen for additional calling of a\n> gettime function.\n\nI will be happy even with 10% overhead due to enabled track_planning... (but in this case disabled by default) log_min_duration_statement = 0 with log parsing is much more expensive.\nI think 1-2% is acceptable and we can set track_planning = on by default as patch does.\n\n> * Rows for executor time are changed from {total/min/max/mean/stddev}_time to {total/min/max/mean/stddev}_exec_time.\n\nMaybe release it as 2.0 version instead of minor update 1.18?\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 20 Mar 2020 17:09:05 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 05:09:05PM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> Yet another is missed in docs: total_time\n\nOh good catch! I rechecked the field many times, and totally missed that the\ndocumentation is referring to the view, which has an additional column, and not\nthe function. Attached v9 fixes that.\n\n> I specifically verified that the new loaded library works with the old version of the extension in the database. I have not noticed issues here.\n\nThanks for those extra checks.\n\n> > 2.2% is a bit large but I think it is still acceptable because people using this feature\n> > might take account that some overhead will happen for additional calling of a\n> > gettime function.\n> \n> I will be happy even with 10% overhead due to enabled track_planning... (but in this case disabled by default) log_min_duration_statement = 0 with log parsing is much more expensive.\n> I think 1-2% is acceptable and we can set track_planning = on by default as patch does.\n> \n> > * Rows for executor time are changed from {total/min/max/mean/stddev}_time to {total/min/max/mean/stddev}_exec_time.\n> \n> Maybe release it as 2.0 version instead of minor update 1.18?\n\nI don't have an opinion on that, I'd be fine with any version. I kept 1.18 in\nthe patch for now.",
"msg_date": "Fri, 20 Mar 2020 20:30:04 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/21 4:30, Julien Rouhaud wrote:\n> On Fri, Mar 20, 2020 at 05:09:05PM +0300, Sergei Kornilov wrote:\n>> Hello\n>>\n>> Yet another is missed in docs: total_time\n> \n> Oh good catch! I rechecked many time the field, and totally missed that the\n> documentation is referring to the view, which has an additional column, and not\n> the function. Attached v9 fixes that.\n\nThanks for the patch! Here are the review comments from me.\n\n-\tPGSS_V1_3\n+\tPGSS_V1_3,\n+\tPGSS_V1_8\n\nWAL usage patch [1] increments this version to 1_4 instead of 1_8.\nI *guess* that's because previously this version was maintained\nindependently from pg_stat_statements' version. For example,\npg_stat_statements 1.4 seems to have used PGSS_V1_3.\n\n+\t/*\n+\t * We can't process the query if no query_text is provided, as pgss_store\n+\t * needs it. We also ignore query without queryid, as it would be treated\n+\t * as a utility statement, which may not be the case.\n+\t */\n\nCould you tell me why the planning stats are not tracked when executing\nutility statements? In some utility statements like REFRESH MATERIALIZED VIEW,\nthe planner would work.\n\n+static BufferUsage\n+compute_buffer_counters(BufferUsage start, BufferUsage stop)\n+{\n+\tBufferUsage result;\n\nBufferUsageAccumDiff() has very similar logic. Isn't it better to expose\nand use that function rather than creating a new similar function?\n\n \t\tvalues[i++] = Int64GetDatumFast(tmp.rows);\n \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_hit);\n \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_read);\n\nPreviously (without the patch) pg_stat_statements_1_3() reported\nthe buffer usage counters updated only in execution phase. But,\nin the patched version, pg_stat_statements_1_3() reports the total\nof buffer usage counters updated in both planning and execution\nphases. Is this OK? I'm not sure how seriously we should ensure\nthe backward compatibility for pg_stat_statements....\n\n+/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n\nISTM it's good timing to have also pg_stat_statements--1.8.sql since\nthe definition of pg_stat_statements() is changed. Thought?\n\n[1]\nhttps://postgr.es/m/CAB-hujrP8ZfUkvL5OYETipQwA=e3n7oqHFU=4ZLxWS_Cza3kQQ@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 25 Mar 2020 22:09:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 10:09:37PM +0900, Fujii Masao wrote:\n> \n> On 2020/03/21 4:30, Julien Rouhaud wrote:\n> > On Fri, Mar 20, 2020 at 05:09:05PM +0300, Sergei Kornilov wrote:\n> > > Hello\n> > > \n> > > Yet another is missed in docs: total_time\n> > \n> > Oh good catch! I rechecked many time the field, and totally missed that the\n> > documentation is referring to the view, which has an additional column, and not\n> > the function. Attached v9 fixes that.\n> \n> Thanks for the patch! Here are the review comments from me.\n> \n> -\tPGSS_V1_3\n> +\tPGSS_V1_3,\n> +\tPGSS_V1_8\n> \n> WAL usage patch [1] increments this version to 1_4 instead of 1_8.\n> I *guess* that's because previously this version was maintained\n> independently from pg_stat_statements' version. For example,\n> pg_stat_statements 1.4 seems to have used PGSS_V1_3.\n\nOh right. It seems that I changed that many versions ago, I'm not sure why.\nI'm personally fine with any, but I think this was previously raised and\nconsensus was to keep distinct counters. Unless you prefer to keep it this\nway, I'll send an updated version (with other possible modifications depending\non the rest of the mail) using PGSS_V1_4.\n\n> +\t/*\n> +\t * We can't process the query if no query_text is provided, as pgss_store\n> +\t * needs it. We also ignore query without queryid, as it would be treated\n> +\t * as a utility statement, which may not be the case.\n> +\t */\n> \n> Could you tell me why the planning stats are not tracked when executing\n> utility statements? In some utility statements like REFRESH MATERIALIZED VIEW,\n> the planner would work.\n\nI explained that in [1]. The problem is that the underlying statement doesn't\nget the proper stmt_location and stmt_len, so you eventually end up with two\ndifferent entries. I suggested fixing transformTopLevelStmt() to handle the\nvarious DDL that can contain optimisable statements, but everyone preferred to\npostpone that for a future enhancement.\n\n> +static BufferUsage\n> +compute_buffer_counters(BufferUsage start, BufferUsage stop)\n> +{\n> +\tBufferUsage result;\n> \n> BufferUsageAccumDiff() has very similar logic. Isn't it better to expose\n> and use that function rather than creating new similar function?\n\nOh, I thought this wouldn't be acceptable. That's indeed better so I'll do\nthat instead.\n\n> \t\tvalues[i++] = Int64GetDatumFast(tmp.rows);\n> \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_hit);\n> \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_read);\n> \n> Previously (without the patch) pg_stat_statements_1_3() reported\n> the buffer usage counters updated only in execution phase. But,\n> in the patched version, pg_stat_statements_1_3() reports the total\n> of buffer usage counters updated in both planning and execution\n> phases. Is this OK? I'm not sure how seriously we should ensure\n> the backward compatibility for pg_stat_statements....\n\nThat's indeed a behavior change, although the new behavior is probably better\nas users want to know how much resource a query is consuming overall. We could\ndistinguish all buffers with a plan/exec version, but it seems quite overkill.\n\n> +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n> \n> ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n> the definition of pg_stat_statements() is changed. Thought?\n\nI thought that since CreateExtension() was modified to be able to find its way\nautomatically, we shouldn't provide the base version anymore, to minimize\nmaintenance burden and also avoid possible bug/discrepancy. The only drawback\nis that it'll do multiple CREATE or DROP/CREATE of the function usually once\nper database, which doesn't seem like a big problem.\n\n[1] https://www.postgresql.org/message-id/CAOBaU_Y-y+VOhTZgDOuDk6-9V72-ZXdWccXo_kx0P4DDBEEh9A@mail.gmail.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 14:45:53 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Hello\n\n> WAL usage patch [1] increments this version to 1_4 instead of 1_8.\n> I *guess* that's because previously this version was maintained\n> independently from pg_stat_statements' version. For example,\n> pg_stat_statements 1.4 seems to have used PGSS_V1_3.\n\nAs far as I remember, this was my proposed change in review a year ago.\nI think that having a clear analogy between the extension version and the function name would be more clear than sequential numbering of PGSS_V with different extension versions.\nFor pgss 1.4 it was fine to use PGSS_V1_3, because there were no changes in pg_stat_statements_internal.\npg_stat_statements 1.3 will call pg_stat_statements_1_3\npg_stat_statements 1.4 - 1.7 will still call pg_stat_statements_1_3. In my opinion, this is the correct naming, since we did not need a new function.\nbut pg_stat_statements 1.8 will call pg_stat_statements_1_4. It's not confusing?\n\nWell, no strong opinion.\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 25 Mar 2020 20:17:59 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/26 2:17, Sergei Kornilov wrote:\n> Hello\n> \n>> WAL usage patch [1] increments this version to 1_4 instead of 1_8.\n>> I *guess* that's because previously this version was maintained\n>> independently from pg_stat_statements' version. For example,\n>> pg_stat_statements 1.4 seems to have used PGSS_V1_3.\n> \n> As far as I remember, this was my proposed change in review a year ago.\n> I think that having a clear analogy between the extension version and the function name would be more clear than sequential numbering of PGSS_V with different extension versions.\n> For pgss 1.4 it was fine to use PGSS_V1_3, because there were no changes in pg_stat_statements_internal.\n> pg_stat_statements 1.3 will call pg_stat_statements_1_3\n> pg_stat_statements 1.4 - 1.7 will still call pg_stat_statements_1_3. In my opinion, this is the correct naming, since we did not need a new function.\n> but pg_stat_statements 1.8 will call pg_stat_statements_1_4. It's not confusing?\n\nYeah, I withdraw my comment and agree that 1_8 looks less confusing.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 26 Mar 2020 10:56:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/25 22:45, Julien Rouhaud wrote:\n> On Wed, Mar 25, 2020 at 10:09:37PM +0900, Fujii Masao wrote:\n>>\n>> On 2020/03/21 4:30, Julien Rouhaud wrote:\n>>> On Fri, Mar 20, 2020 at 05:09:05PM +0300, Sergei Kornilov wrote:\n>>>> Hello\n>>>>\n>>>> Yet another is missed in docs: total_time\n>>>\n>>> Oh good catch! I rechecked many time the field, and totally missed that the\n>>> documentation is referring to the view, which has an additional column, and not\n>>> the function. Attached v9 fixes that.\n>>\n>> Thanks for the patch! Here are the review comments from me.\n>>\n>> -\tPGSS_V1_3\n>> +\tPGSS_V1_3,\n>> +\tPGSS_V1_8\n>>\n>> WAL usage patch [1] increments this version to 1_4 instead of 1_8.\n>> I *guess* that's because previously this version was maintained\n>> independently from pg_stat_statements' version. For example,\n>> pg_stat_statements 1.4 seems to have used PGSS_V1_3.\n> \n> Oh right. It seems that I changed that many versions ago, I'm not sure why.\n> I'm personally fine with any, but I think this was previously raised and\n> consensus was to keep distinct counters. Unless you prefer to keep it this\n> way, I'll send an updated version (with other possible modifications depending\n> on the rest of the mail) using PGSS_V1_4.\n> \n>> +\t/*\n>> +\t * We can't process the query if no query_text is provided, as pgss_store\n>> +\t * needs it. We also ignore query without queryid, as it would be treated\n>> +\t * as a utility statement, which may not be the case.\n>> +\t */\n>>\n>> Could you tell me why the planning stats are not tracked when executing\n>> utility statements? In some utility statements like REFRESH MATERIALIZED VIEW,\n>> the planner would work.\n> \n> I explained that in [1]. The problem is that the underlying statement doesn't\n> get the proper stmt_location and stmt_len, so you eventually end up with two\n> different entries.\n\nIt's not problematic to have two different entries in that case. Right?\nThe actual problem is that the statements reported in those entries are\nvery similar? For example, when \"create table test as select 1;\" is executed,\nit's strange to get the following two entries, as you explained.\n\n create table test as select 1;\n create table test as select 1\n\nBut it seems valid to get the following two entries in that case?\n\n select 1\n create table test as select 1\n\nThe former is the nested statement and the latter is the top statement.\n\n> I suggested fixing transformTopLevelStmt() to handle the\n> various DDL that can contain optimisable statements, but everyone preferred to\n> postpone that for a future enhencement.\n\nUnderstood. Thanks for the explanation!\n\n>> +static BufferUsage\n>> +compute_buffer_counters(BufferUsage start, BufferUsage stop)\n>> +{\n>> +\tBufferUsage result;\n>>\n>> BufferUsageAccumDiff() has very similar logic. Isn't it better to expose\n>> and use that function rather than creating new similar function?\n> \n> Oh, I thought this wouldn't be acceptable. That's indeed better so I'll do\n> that instead.\n\nThanks! But of course this is a trivial thing, so it's ok to do that later.\n\n>> \t\tvalues[i++] = Int64GetDatumFast(tmp.rows);\n>> \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_hit);\n>> \t\tvalues[i++] = Int64GetDatumFast(tmp.shared_blks_read);\n>>\n>> Previously (without the patch) pg_stat_statements_1_3() reported\n>> the buffer usage counters updated only in execution phase. But,\n>> in the patched version, pg_stat_statements_1_3() reports the total\n>> of buffer usage counters updated in both planning and execution\n>> phases. Is this OK? I'm not sure how seriously we should ensure\n>> the backward compatibility for pg_stat_statements....\n> \n> That's indeed a behavior change, although the new behavior is probably better\n> as user want to know how much resource a query is consuming overall. We could\n> distinguish all buffers with a plan/exec version, but it seems quite overkill.\n\nOk.\n\n> \n>> +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n>>\n>> ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n>> the definition of pg_stat_statements() is changed. Thought?\n> \n> I thought that since CreateExtension() was modified to be able to find it's way\n> automatically, we shouldn't provide base version anymore, to minimize\n> maintenance burden and also avoid possible bug/discrepancy. The only drawback\n> is that it'll do multiple CREATE or DROP/CREATE of the function usually once\n> per database, which doesn't seem like a big problem.\n\nOk.\n\nHere are other comments.\n\n-\t\tif (jstate)\n+\t\tif (kind == PGSS_JUMBLE)\n\nWhy is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n\nIf it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\nand replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n\n+ <entry><structfield>total_time</structfield></entry>\n+ <entry><type>double precision</type></entry>\n+ <entry></entry>\n+ <entry>\n+ Total time spend planning and executing the statement, in milliseconds\n+ </entry>\n+ </row>\n\nThe pg_stat_statements view has this column but the function does not.\nWe should make both have the column or not at all, for consistency?\nI'm not sure if it's a good thing to expose the sum of total_plan_time\nand total_exec_time as total_time. If some users want that, they can\neasily calculate it from total_plan_time and total_exec_time by using\ntheir own logic.\n\n+\t\tnested_level++;\n+\t\tPG_TRY();\n\nIn old thread [1], Tom Lane commented on the usage of nested_level\nin the planner hook. There seems to be no reply to that so far. What's\nyour opinion about that comment?\n\n[1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 26 Mar 2020 20:08:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n> \n> On 2020/03/25 22:45, Julien Rouhaud wrote:\n> > On Wed, Mar 25, 2020 at 10:09:37PM +0900, Fujii Masao wrote:\n> > > +\t/*\n> > > +\t * We can't process the query if no query_text is provided, as pgss_store\n> > > +\t * needs it. We also ignore query without queryid, as it would be treated\n> > > +\t * as a utility statement, which may not be the case.\n> > > +\t */\n> > > \n> > > Could you tell me why the planning stats are not tracked when executing\n> > > utility statements? In some utility statements like REFRESH MATERIALIZED VIEW,\n> > > the planner would work.\n> > \n> > I explained that in [1]. The problem is that the underlying statement doesn't\n> > get the proper stmt_location and stmt_len, so you eventually end up with two\n> > different entries.\n> \n> It's not problematic to have two different entries in that case. Right?\n\nI will unnecessarily bloat the entries, and makes users life harder too. This\nexample is quite easy to deal with, but if the application is sending\nmulti-query statements, you'll just end up with a mess impossible to properly\nhandle.\n\n> The actual problem is that the statements reported in those entries are\n> very similar? For example, when \"create table test as select 1;\" is executed,\n> it's strange to get the following two entries, as you explained.\n> \n> create table test as select 1;\n> create table test as select 1\n> \n> But it seems valid to get the following two entries in that case?\n> \n> select 1\n> create table test as select 1\n> \n> The former is the nested statement and the latter is the top statement.\n\nI think that there should only be 1 entry, the utility command. It seems easy\nto correlate the planning time to the underlying query, but I'm not entirely\nsure that the execution counters won't be impacted by the fact is being run in\na utilty statements. 
Also, for now current pgss behavior is to always merge\nunderlying optimisable statements in the utility command, and it seems a bit\nlate in this release cycle to revisit that.\n\nI'd be happy to work on improving that for the next release, among other\nthings. For instance the total lack of normalization for utility commands [2]\nis also something that has been bothering me for a long time. In some\nworkloads, you can end up with the entries almost entirely filled with\n1-time-execution commands, just because it's using random identifiers, so you\nhave no other choice than to disable track_utility, although it would have been\nuseful for other commands.\n\n> Here are other comments.\n> \n> -\t\tif (jstate)\n> +\t\tif (kind == PGSS_JUMBLE)\n> \n> Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n> \n> If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n> and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n\nYes, we could be using jstate here. I originally used that to avoid passing\nPGSS_EXEC (or the other one) as a way to say \"ignore this information as\nthere's the jstate which says it's yet another meaning\". If that's not an\nissue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\nall over the place.\n\n> + <entry><structfield>total_time</structfield></entry>\n> + <entry><type>double precision</type></entry>\n> + <entry></entry>\n> + <entry>\n> + Total time spend planning and executing the statement, in milliseconds\n> + </entry>\n> + </row>\n> \n> pg_stat_statements view has this column but the function not.\n> We should make both have the column or not at all, for consistency?\n> I'm not sure if it's good thing to expose the sum of total_plan_time\n> and total_exec_time as total_time. 
If some users want that, they can\n> easily calculate it from total_plan_time and total_exec_time by using\n> their own logic.\n\nI think we originally added it as a way to avoid too much compatibility break,\nand also because it seems like a field most users will be interested in anyway.\nNow that I'm thinking about it again, I indeed think it was a mistake to have\nthat in view part only. Not mainly for consistency, but for users who would be\ninterested in the total_time field while not wanting to pay the overhead of\nretrieving the query text if they don't need it. So I'll change that!\n\n> +\t\tnested_level++;\n> +\t\tPG_TRY();\n> \n> In old thread [1], Tom Lane commented the usage of nested_level\n> in the planner hook. There seems no reply to that so far. What's\n> your opinion about that comment?\n> \n> [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n\nOh thanks, I didn't noticed this part of the discussion. I agree with Tom's\nconcern, and I think that having a specific nesting level variable for the\nplanner is the best workaround, so I'll implement that.\n\n[2] https://twitter.com/fujii_masao/status/1242978261572837377\n\n\n",
"msg_date": "Thu, 26 Mar 2020 14:22:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 02:22:47PM +0100, Julien Rouhaud wrote:\n> On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n> > \n> > Here are other comments.\n> > \n> > -\t\tif (jstate)\n> > +\t\tif (kind == PGSS_JUMBLE)\n> > \n> > Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n> > \n> > If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n> > and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n> \n> Yes, we could be using jstate here. I originally used that to avoid passing\n> PGSS_EXEC (or the other one) as a way to say \"ignore this information as\n> there's the jstate which says it's yet another meaning\". If that's not an\n> issue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\n> all over the place.\n\nDone, passing PGSS_PLAN when jumble is intended, with a comment saying that the\npgss_kind is ignored in that case.\n\n> > + <entry><structfield>total_time</structfield></entry>\n> > + <entry><type>double precision</type></entry>\n> > + <entry></entry>\n> > + <entry>\n> > + Total time spend planning and executing the statement, in milliseconds\n> > + </entry>\n> > + </row>\n> > \n> > pg_stat_statements view has this column but the function not.\n> > We should make both have the column or not at all, for consistency?\n> > I'm not sure if it's good thing to expose the sum of total_plan_time\n> > and total_exec_time as total_time. If some users want that, they can\n> > easily calculate it from total_plan_time and total_exec_time by using\n> > their own logic.\n> \n> I think we originally added it as a way to avoid too much compatibility break,\n> and also because it seems like a field most users will be interested in anyway.\n> Now that I'm thinking about it again, I indeed think it was a mistake to have\n> that in view part only. 
Not mainly for consistency, but for users who would be\n> interested in the total_time field while not wanting to pay the overhead of\n> retrieving the query text if they don't need it. So I'll change that!\n\nDone\n\n> > +\t\tnested_level++;\n> > +\t\tPG_TRY();\n> > \n> > In old thread [1], Tom Lane commented the usage of nested_level\n> > in the planner hook. There seems no reply to that so far. What's\n> > your opinion about that comment?\n> > \n> > [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n> \n> Oh thanks, I didn't noticed this part of the discussion. I agree with Tom's\n> concern, and I think that having a specific nesting level variable for the\n> planner is the best workaround, so I'll implement that.\n\nDone.\n\nI also exported BufferUsageAccumDiff as mentioned previously, as it seems\nclearner and will avoid future useless code churn, and run pgindent.\n\nv10 attached.",
"msg_date": "Fri, 27 Mar 2020 11:00:01 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/27 19:00, Julien Rouhaud wrote:\n> On Thu, Mar 26, 2020 at 02:22:47PM +0100, Julien Rouhaud wrote:\n>> On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n>>>\n>>> Here are other comments.\n>>>\n>>> -\t\tif (jstate)\n>>> +\t\tif (kind == PGSS_JUMBLE)\n>>>\n>>> Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n>>>\n>>> If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n>>> and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n>>\n>> Yes, we could be using jstate here. I originally used that to avoid passing\n>> PGSS_EXEC (or the other one) as a way to say \"ignore this information as\n>> there's the jstate which says it's yet another meaning\". If that's not an\n>> issue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\n>> all over the place.\n> \n> Done, passing PGSS_PLAN when jumble is intended, with a comment saying that the\n> pgss_kind is ignored in that case.\n> \n>>> + <entry><structfield>total_time</structfield></entry>\n>>> + <entry><type>double precision</type></entry>\n>>> + <entry></entry>\n>>> + <entry>\n>>> + Total time spend planning and executing the statement, in milliseconds\n>>> + </entry>\n>>> + </row>\n>>>\n>>> pg_stat_statements view has this column but the function not.\n>>> We should make both have the column or not at all, for consistency?\n>>> I'm not sure if it's good thing to expose the sum of total_plan_time\n>>> and total_exec_time as total_time. If some users want that, they can\n>>> easily calculate it from total_plan_time and total_exec_time by using\n>>> their own logic.\n>>\n>> I think we originally added it as a way to avoid too much compatibility break,\n>> and also because it seems like a field most users will be interested in anyway.\n>> Now that I'm thinking about it again, I indeed think it was a mistake to have\n>> that in view part only. 
Not mainly for consistency, but for users who would be\n>> interested in the total_time field while not wanting to pay the overhead of\n>> retrieving the query text if they don't need it. So I'll change that!\n> \n> Done\n> \n>>> +\t\tnested_level++;\n>>> +\t\tPG_TRY();\n>>>\n>>> In old thread [1], Tom Lane commented the usage of nested_level\n>>> in the planner hook. There seems no reply to that so far. What's\n>>> your opinion about that comment?\n>>>\n>>> [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n>>\n>> Oh thanks, I didn't noticed this part of the discussion. I agree with Tom's\n>> concern, and I think that having a specific nesting level variable for the\n>> planner is the best workaround, so I'll implement that.\n> \n> Done.\n> \n> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> clearner and will avoid future useless code churn, and run pgindent.\n> \n> v10 attached.\n\nThanks for updating the patches!\n\nRegarding 0001 patch, I have one nitpicking comment;\n\n-\t\tresult = standard_planner(parse, cursorOptions, boundParams);\n+\t\tresult = standard_planner(parse, query_text, cursorOptions, boundParams);\n\n-standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)\n+standard_planner(Query *parse, const char *querytext, int cursorOptions,\n+\t\t\t\t ParamListInfo boundParams)\n\n-pg_plan_query(Query *querytree, int cursorOptions, ParamListInfo boundParams)\n+pg_plan_query(Query *querytree, const char *query_text, int cursorOptions,\n+\t\t\t ParamListInfo boundParams)\n\nThe patch uses \"query_text\" and \"querytext\" as the name of newly-added\nargument. They should be unified? IMO \"query_string\" looks better name\nbecause it's used in other functions like pg_analyze_and_rewrite(),\npg_parse_query() for the sake of consistency. Thought?\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 27 Mar 2020 20:02:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On 2020/03/27 19:00, Julien Rouhaud wrote:\n> On Thu, Mar 26, 2020 at 02:22:47PM +0100, Julien Rouhaud wrote:\n>> On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n>>>\n>>> Here are other comments.\n>>>\n>>> -\t\tif (jstate)\n>>> +\t\tif (kind == PGSS_JUMBLE)\n>>>\n>>> Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n>>>\n>>> If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n>>> and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n>>\n>> Yes, we could be using jstate here. I originally used that to avoid passing\n>> PGSS_EXEC (or the other one) as a way to say \"ignore this information as\n>> there's the jstate which says it's yet another meaning\". If that's not an\n>> issue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\n>> all over the place.\n> \n> Done, passing PGSS_PLAN when jumble is intended, with a comment saying that the\n> pgss_kind is ignored in that case.\n> \n>>> + <entry><structfield>total_time</structfield></entry>\n>>> + <entry><type>double precision</type></entry>\n>>> + <entry></entry>\n>>> + <entry>\n>>> + Total time spend planning and executing the statement, in milliseconds\n>>> + </entry>\n>>> + </row>\n>>>\n>>> pg_stat_statements view has this column but the function not.\n>>> We should make both have the column or not at all, for consistency?\n>>> I'm not sure if it's good thing to expose the sum of total_plan_time\n>>> and total_exec_time as total_time. If some users want that, they can\n>>> easily calculate it from total_plan_time and total_exec_time by using\n>>> their own logic.\n>>\n>> I think we originally added it as a way to avoid too much compatibility break,\n>> and also because it seems like a field most users will be interested in anyway.\n>> Now that I'm thinking about it again, I indeed think it was a mistake to have\n>> that in view part only. 
Not mainly for consistency, but for users who would be\n>> interested in the total_time field while not wanting to pay the overhead of\n>> retrieving the query text if they don't need it. So I'll change that!\n> \n> Done\n> \n\nThanks for updating the patch! But I'm still wondering if it's really\ngood thing to expose total_time. For example, when the query fails\nwith an error many times and \"calls\" becomes very different from\n\"plans\", \"total_plan_time\" + \"total_exec_time\" is really what the users\nare interested in? Some users may be interested in the sum of mean\ntimes, but it's not exposed...\n\nSo what I'd like to say is that the information that users are interested\nin would vary on each situation and case. At least for me it seems\nenough for pgss to report only the basic information. Then users\ncan calculate to get the numbers (like total_time) they're interested in,\nfrom those basic information.\n\nBut of course, I'd like to hear more opinions about this...\n\n+\t\tif (api_version >= PGSS_V1_8)\n+\t\t\tvalues[i++] = Int64GetDatumFast(tmp.total_time[0] +\n+\t\t\t\t\t\t\t\t\t\t\ttmp.total_time[1]);\n\nBTW, Int64GetDatumFast() should be Float8GetDatumFast()?\n\n>>> +\t\tnested_level++;\n>>> +\t\tPG_TRY();\n>>>\n>>> In old thread [1], Tom Lane commented the usage of nested_level\n>>> in the planner hook. There seems no reply to that so far. What's\n>>> your opinion about that comment?\n>>>\n>>> [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n>>\n>> Oh thanks, I didn't noticed this part of the discussion. I agree with Tom's\n>> concern, and I think that having a specific nesting level variable for the\n>> planner is the best workaround, so I'll implement that.\n> \n> Done.\n> \n> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> clearner and will avoid future useless code churn, and run pgindent.\n\nMany thanks!! 
I'm thinking to commit this part separately.\nSo I made that patch based on your patch. Attached.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Fri, 27 Mar 2020 22:01:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 12:02 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/03/27 19:00, Julien Rouhaud wrote:\n> > On Thu, Mar 26, 2020 at 02:22:47PM +0100, Julien Rouhaud wrote:\n> >> On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n> >>>\n> >>> Here are other comments.\n> >>>\n> >>> - if (jstate)\n> >>> + if (kind == PGSS_JUMBLE)\n> >>>\n> >>> Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n> >>>\n> >>> If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n> >>> and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n> >>\n> >> Yes, we could be using jstate here. I originally used that to avoid passing\n> >> PGSS_EXEC (or the other one) as a way to say \"ignore this information as\n> >> there's the jstate which says it's yet another meaning\". If that's not an\n> >> issue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\n> >> all over the place.\n> >\n> > Done, passing PGSS_PLAN when jumble is intended, with a comment saying that the\n> > pgss_kind is ignored in that case.\n> >\n> >>> + <entry><structfield>total_time</structfield></entry>\n> >>> + <entry><type>double precision</type></entry>\n> >>> + <entry></entry>\n> >>> + <entry>\n> >>> + Total time spend planning and executing the statement, in milliseconds\n> >>> + </entry>\n> >>> + </row>\n> >>>\n> >>> pg_stat_statements view has this column but the function not.\n> >>> We should make both have the column or not at all, for consistency?\n> >>> I'm not sure if it's good thing to expose the sum of total_plan_time\n> >>> and total_exec_time as total_time. 
If some users want that, they can\n> >>> easily calculate it from total_plan_time and total_exec_time by using\n> >>> their own logic.\n> >>\n> >> I think we originally added it as a way to avoid too much compatibility break,\n> >> and also because it seems like a field most users will be interested in anyway.\n> >> Now that I'm thinking about it again, I indeed think it was a mistake to have\n> >> that in view part only. Not mainly for consistency, but for users who would be\n> >> interested in the total_time field while not wanting to pay the overhead of\n> >> retrieving the query text if they don't need it. So I'll change that!\n> >\n> > Done\n> >\n> >>> + nested_level++;\n> >>> + PG_TRY();\n> >>>\n> >>> In old thread [1], Tom Lane commented the usage of nested_level\n> >>> in the planner hook. There seems no reply to that so far. What's\n> >>> your opinion about that comment?\n> >>>\n> >>> [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n> >>\n> >> Oh thanks, I didn't noticed this part of the discussion. 
I agree with Tom's\n> >> concern, and I think that having a specific nesting level variable for the\n> >> planner is the best workaround, so I'll implement that.\n> >\n> > Done.\n> >\n> > I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> > clearner and will avoid future useless code churn, and run pgindent.\n> >\n> > v10 attached.\n>\n> Thanks for updating the patches!\n>\n> Regarding 0001 patch, I have one nitpicking comment;\n>\n> - result = standard_planner(parse, cursorOptions, boundParams);\n> + result = standard_planner(parse, query_text, cursorOptions, boundParams);\n>\n> -standard_planner(Query *parse, int cursorOptions, ParamListInfo boundParams)\n> +standard_planner(Query *parse, const char *querytext, int cursorOptions,\n> + ParamListInfo boundParams)\n>\n> -pg_plan_query(Query *querytree, int cursorOptions, ParamListInfo boundParams)\n> +pg_plan_query(Query *querytree, const char *query_text, int cursorOptions,\n> + ParamListInfo boundParams)\n>\n> The patch uses \"query_text\" and \"querytext\" as the name of newly-added\n> argument. They should be unified? IMO \"query_string\" looks better name\n> because it's used in other functions like pg_analyze_and_rewrite(),\n> pg_parse_query() for the sake of consistency. Thought?\n\nIndeed, and +1 for query_text.\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:27:13 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/03/27 19:00, Julien Rouhaud wrote:\n> > On Thu, Mar 26, 2020 at 02:22:47PM +0100, Julien Rouhaud wrote:\n> >> On Thu, Mar 26, 2020 at 08:08:35PM +0900, Fujii Masao wrote:\n> >>>\n> >>> Here are other comments.\n> >>>\n> >>> - if (jstate)\n> >>> + if (kind == PGSS_JUMBLE)\n> >>>\n> >>> Why is PGSS_JUMBLE necessary? ISTM that we can still use jstate here, instead.\n> >>>\n> >>> If it's ok to remove PGSS_JUMBLE, we can define PGSS_NUM_KIND(=2) instead\n> >>> and replace 2 in, e.g., total_time[2] with PGSS_NUM_KIND. Thought?\n> >>\n> >> Yes, we could be using jstate here. I originally used that to avoid passing\n> >> PGSS_EXEC (or the other one) as a way to say \"ignore this information as\n> >> there's the jstate which says it's yet another meaning\". If that's not an\n> >> issue, I can change that as PGSS_NUM_KIND will clearly improve the explicit \"2\"\n> >> all over the place.\n> >\n> > Done, passing PGSS_PLAN when jumble is intended, with a comment saying that the\n> > pgss_kind is ignored in that case.\n> >\n> >>> + <entry><structfield>total_time</structfield></entry>\n> >>> + <entry><type>double precision</type></entry>\n> >>> + <entry></entry>\n> >>> + <entry>\n> >>> + Total time spend planning and executing the statement, in milliseconds\n> >>> + </entry>\n> >>> + </row>\n> >>>\n> >>> pg_stat_statements view has this column but the function not.\n> >>> We should make both have the column or not at all, for consistency?\n> >>> I'm not sure if it's good thing to expose the sum of total_plan_time\n> >>> and total_exec_time as total_time. 
If some users want that, they can\n> >>> easily calculate it from total_plan_time and total_exec_time by using\n> >>> their own logic.\n> >>\n> >> I think we originally added it as a way to avoid too much compatibility break,\n> >> and also because it seems like a field most users will be interested in anyway.\n> >> Now that I'm thinking about it again, I indeed think it was a mistake to have\n> >> that in view part only. Not mainly for consistency, but for users who would be\n> >> interested in the total_time field while not wanting to pay the overhead of\n> >> retrieving the query text if they don't need it. So I'll change that!\n> >\n> > Done\n> >\n>\n> Thanks for updating the patch! But I'm still wondering if it's really\n> good thing to expose total_time. For example, when the query fails\n> with an error many times and \"calls\" becomes very different from\n> \"plans\", \"total_plan_time\" + \"total_exec_time\" is really what the users\n> are interested in?\n\nThat's also the case when running explain without analyze, or prepared\nstatements that fall back to generic plans. As a user, knowing how\nlong postgres actually spent processing a query is interesting as a\nway to find likely low-hanging fruit, even if there's no\nstrict planning/execution correlation. The planning/execution detail\nis also useful but that's probably not what I'd be starting from (at\nleast in OLTP workload).\n\nThe error scenario is unfortunate, but that's yet another topic.\n\n> Some users may be interested in the sum of mean\n> times, but it's not exposed...\n\nYes, we had a discussion about summing the other fields, but it seems\nto me that doing a sum of computed fields doesn't really make sense.\nMean without variance is already not that useful.\n\n> So what I'd like to say is that the information that users are interested\n> in would vary on each situation and case. At least for me it seems\n> enough for pgss to report only the basic information. 
Then users\n> can calculate to get the numbers (like total_time) they're interested in,\n> from those basic information.\n>\n> But of course, I'd like to hear more opinions about this...\n\n+1\n\nUnless someone chime in by tomorrow, I'll just drop the sum as it\nseems less controversial and not a blocker in userland if users are\ninterested.\n\n>\n> + if (api_version >= PGSS_V1_8)\n> + values[i++] = Int64GetDatumFast(tmp.total_time[0] +\n> + tmp.total_time[1]);\n>\n> BTW, Int64GetDatumFast() should be Float8GetDatumFast()?\n\nOh indeed, embarrassing copy/pasto.\n\n>\n> >>> + nested_level++;\n> >>> + PG_TRY();\n> >>>\n> >>> In old thread [1], Tom Lane commented the usage of nested_level\n> >>> in the planner hook. There seems no reply to that so far. What's\n> >>> your opinion about that comment?\n> >>>\n> >>> [1] https://www.postgresql.org/message-id/28980.1515803777@sss.pgh.pa.us\n> >>\n> >> Oh thanks, I didn't noticed this part of the discussion. I agree with Tom's\n> >> concern, and I think that having a specific nesting level variable for the\n> >> planner is the best workaround, so I'll implement that.\n> >\n> > Done.\n> >\n> > I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> > clearner and will avoid future useless code churn, and run pgindent.\n>\n> Many thanks!! I'm thinking to commit this part separately.\n> So I made that patch based on your patch. Attached.\n\nThanks! It looks good to me.\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:42:50 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n> On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> \n> > So what I'd like to say is that the information that users are interested\n> > in would vary on each situation and case. At least for me it seems\n> > enough for pgss to report only the basic information. Then users\n> > can calculate to get the numbers (like total_time) they're interested in,\n> > from those basic information.\n> >\n> > But of course, I'd like to hear more opinions about this...\n> \n> +1\n> \n> Unless someone chime in by tomorrow, I'll just drop the sum as it\n> seems less controversial and not a blocker in userland if users are\n> interested.\n\nDone in attached v11, with also the s/querytext/query_text/ discrepancy noted\npreviously.\n\n> > >\n> > > I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> > > clearner and will avoid future useless code churn, and run pgindent.\n> >\n> > Many thanks!! I'm thinking to commit this part separately.\n> > So I made that patch based on your patch. Attached.\n> \n> Thanks! It looks good to me.\n\nI also kept that part in a distinct commit for convenience.",
"msg_date": "Sun, 29 Mar 2020 08:15:49 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/29 15:15, Julien Rouhaud wrote:\n> On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n>> On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>\n>>> So what I'd like to say is that the information that users are interested\n>>> in would vary on each situation and case. At least for me it seems\n>>> enough for pgss to report only the basic information. Then users\n>>> can calculate to get the numbers (like total_time) they're interested in,\n>>> from those basic information.\n>>>\n>>> But of course, I'd like to hear more opinions about this...\n>>\n>> +1\n>>\n>> Unless someone chime in by tomorrow, I'll just drop the sum as it\n>> seems less controversial and not a blocker in userland if users are\n>> interested.\n> \n> Done in attached v11, with also the s/querytext/query_text/ discrepancy noted\n> previously.\n\nThanks for updating the patch! But I still think query_string is better\nname because it's used in other several places, for the sake of consistency.\nSo I changed the argument name that way and commit the 0001 patch.\nIf you think query_text is better, let's keep discussing this topic!\n\nAnyway many thanks for your great job!\n\n>>>> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n>>>> clearner and will avoid future useless code churn, and run pgindent.\n>>>\n>>> Many thanks!! I'm thinking to commit this part separately.\n>>> So I made that patch based on your patch. Attached.\n>>\n>> Thanks! It looks good to me.\n> \n> I also kept that part in a distinct commit for convenience.\n\nI also pushed 0002 patch. Thanks!\n\nI will review 0003 patch again.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 30 Mar 2020 13:56:43 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 01:56:43PM +0900, Fujii Masao wrote:\n> \n> \n> On 2020/03/29 15:15, Julien Rouhaud wrote:\n> > On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n> > > On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > > \n> > > \n> > > > So what I'd like to say is that the information that users are interested\n> > > > in would vary on each situation and case. At least for me it seems\n> > > > enough for pgss to report only the basic information. Then users\n> > > > can calculate to get the numbers (like total_time) they're interested in,\n> > > > from those basic information.\n> > > > \n> > > > But of course, I'd like to hear more opinions about this...\n> > > \n> > > +1\n> > > \n> > > Unless someone chime in by tomorrow, I'll just drop the sum as it\n> > > seems less controversial and not a blocker in userland if users are\n> > > interested.\n> > \n> > Done in attached v11, with also the s/querytext/query_text/ discrepancy noted\n> > previously.\n> \n> Thanks for updating the patch! But I still think query_string is better\n> name because it's used in other several places, for the sake of consistency.\n\nYou're absolutely right. That's what I actually wanted to do given your\nprevious comment, but somehow managed to miss it, sorry about that and thanks\nfor fixing.\n\n> So I changed the argument name that way and commit the 0001 patch.\n> If you think query_text is better, let's keep discussing this topic!\n> \n> Anyway many thanks for your great job!\n\nThanks a lot!\n\n> \n> > > > > I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> > > > > clearner and will avoid future useless code churn, and run pgindent.\n> > > > \n> > > > Many thanks!! I'm thinking to commit this part separately.\n> > > > So I made that patch based on your patch. Attached.\n> > > \n> > > Thanks! 
It looks good to me.\n> > \n> > I also kept that part in a distinct commit for convenience.\n> \n> I also pushed 0002 patch. Thanks!\n> \n> I will review 0003 patch again.\n\nAnd thanks for that too :)\n\n\n",
"msg_date": "Mon, 30 Mar 2020 10:03:59 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/30 17:03, Julien Rouhaud wrote:\n> On Mon, Mar 30, 2020 at 01:56:43PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/03/29 15:15, Julien Rouhaud wrote:\n>>> On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n>>>> On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>\n>>>>> So what I'd like to say is that the information that users are interested\n>>>>> in would vary on each situation and case. At least for me it seems\n>>>>> enough for pgss to report only the basic information. Then users\n>>>>> can calculate to get the numbers (like total_time) they're interested in,\n>>>>> from those basic information.\n>>>>>\n>>>>> But of course, I'd like to hear more opinions about this...\n>>>>\n>>>> +1\n>>>>\n>>>> Unless someone chime in by tomorrow, I'll just drop the sum as it\n>>>> seems less controversial and not a blocker in userland if users are\n>>>> interested.\n>>>\n>>> Done in attached v11, with also the s/querytext/query_text/ discrepancy noted\n>>> previously.\n>>\n>> Thanks for updating the patch! But I still think query_string is better\n>> name because it's used in other several places, for the sake of consistency.\n> \n> You're absolutely right. That's what I actually wanted to do given your\n> previous comment, but somehow managed to miss it, sorry about that and thanks\n> for fixing.\n> \n>> So I changed the argument name that way and commit the 0001 patch.\n>> If you think query_text is better, let's keep discussing this topic!\n>>\n>> Anyway many thanks for your great job!\n> \n> Thanks a lot!\n> \n>>\n>>>>>> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n>>>>>> clearner and will avoid future useless code churn, and run pgindent.\n>>>>>\n>>>>> Many thanks!! I'm thinking to commit this part separately.\n>>>>> So I made that patch based on your patch. Attached.\n>>>>\n>>>> Thanks! 
It looks good to me.\n>>>\n>>> I also kept that part in a distinct commit for convenience.\n>>\n>> I also pushed 0002 patch. Thanks!\n>>\n>> I will review 0003 patch again.\n> \n> And thanks for that too :)\n\nWhile testing the patched pgss, I found that the patched version\nmay track the statements that the original version doesn't.\nPlease imagine the case where the following queries are executed,\nwith pgss.track = top.\n\n PREPARE hoge AS SELECT * FROM t;\n EXPLAIN EXECUTE hoge;\n\nThe pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\nin the patched version, but not in the orignal version.\n\nIs this problematic?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 31 Mar 2020 01:36:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/03/30 17:03, Julien Rouhaud wrote:\n> > On Mon, Mar 30, 2020 at 01:56:43PM +0900, Fujii Masao wrote:\n> >>\n> >>\n> >> On 2020/03/29 15:15, Julien Rouhaud wrote:\n> >>> On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n> >>>> On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>\n> >>>>> So what I'd like to say is that the information that users are interested\n> >>>>> in would vary on each situation and case. At least for me it seems\n> >>>>> enough for pgss to report only the basic information. Then users\n> >>>>> can calculate to get the numbers (like total_time) they're interested in,\n> >>>>> from those basic information.\n> >>>>>\n> >>>>> But of course, I'd like to hear more opinions about this...\n> >>>>\n> >>>> +1\n> >>>>\n> >>>> Unless someone chime in by tomorrow, I'll just drop the sum as it\n> >>>> seems less controversial and not a blocker in userland if users are\n> >>>> interested.\n> >>>\n> >>> Done in attached v11, with also the s/querytext/query_text/ discrepancy noted\n> >>> previously.\n> >>\n> >> Thanks for updating the patch! But I still think query_string is better\n> >> name because it's used in other several places, for the sake of consistency.\n> >\n> > You're absolutely right. That's what I actually wanted to do given your\n> > previous comment, but somehow managed to miss it, sorry about that and thanks\n> > for fixing.\n> >\n> >> So I changed the argument name that way and commit the 0001 patch.\n> >> If you think query_text is better, let's keep discussing this topic!\n> >>\n> >> Anyway many thanks for your great job!\n> >\n> > Thanks a lot!\n> >\n> >>\n> >>>>>> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n> >>>>>> clearner and will avoid future useless code churn, and run pgindent.\n> >>>>>\n> >>>>> Many thanks!! 
I'm thinking to commit this part separately.\n> >>>>> So I made that patch based on your patch. Attached.\n> >>>>\n> >>>> Thanks! It looks good to me.\n> >>>\n> >>> I also kept that part in a distinct commit for convenience.\n> >>\n> >> I also pushed 0002 patch. Thanks!\n> >>\n> >> I will review 0003 patch again.\n> >\n> > And thanks for that too :)\n>\n> While testing the patched pgss, I found that the patched version\n> may track the statements that the original version doesn't.\n> Please imagine the case where the following queries are executed,\n> with pgss.track = top.\n>\n> PREPARE hoge AS SELECT * FROM t;\n> EXPLAIN EXECUTE hoge;\n>\n> The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n> in the patched version, but not in the orignal version.\n>\n> Is this problematic?\n\nOh indeed. That's a side effect of the executed query and the planned\nquery being different.\n\nI guess the question is to choose whether, for a top level utility\nstatement containing an optimisable query, the top level planner call\nfor that optimisable statement should be considered top level or not.\nI tend to think that's the correct behavior here, as this is also what\nwould happen if a regular DML was provided. What do you\nthink?\n\n\n",
"msg_date": "Mon, 30 Mar 2020 20:16:06 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/31 3:16, Julien Rouhaud wrote:\n> On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/03/30 17:03, Julien Rouhaud wrote:\n>>> On Mon, Mar 30, 2020 at 01:56:43PM +0900, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/03/29 15:15, Julien Rouhaud wrote:\n>>>>> On Fri, Mar 27, 2020 at 03:42:50PM +0100, Julien Rouhaud wrote:\n>>>>>> On Fri, Mar 27, 2020 at 2:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>\n>>>>>>> So what I'd like to say is that the information that users are interested\n>>>>>>> in would vary on each situation and case. At least for me it seems\n>>>>>>> enough for pgss to report only the basic information. Then users\n>>>>>>> can calculate to get the numbers (like total_time) they're interested in,\n>>>>>>> from those basic information.\n>>>>>>>\n>>>>>>> But of course, I'd like to hear more opinions about this...\n>>>>>>\n>>>>>> +1\n>>>>>>\n>>>>>> Unless someone chime in by tomorrow, I'll just drop the sum as it\n>>>>>> seems less controversial and not a blocker in userland if users are\n>>>>>> interested.\n>>>>>\n>>>>> Done in attached v11, with also the s/querytext/query_text/ discrepancy noted\n>>>>> previously.\n>>>>\n>>>> Thanks for updating the patch! But I still think query_string is better\n>>>> name because it's used in other several places, for the sake of consistency.\n>>>\n>>> You're absolutely right. 
That's what I actually wanted to do given your\n>>> previous comment, but somehow managed to miss it, sorry about that and thanks\n>>> for fixing.\n>>>\n>>>> So I changed the argument name that way and commit the 0001 patch.\n>>>> If you think query_text is better, let's keep discussing this topic!\n>>>>\n>>>> Anyway many thanks for your great job!\n>>>\n>>> Thanks a lot!\n>>>\n>>>>\n>>>>>>>> I also exported BufferUsageAccumDiff as mentioned previously, as it seems\n>>>>>>>> clearner and will avoid future useless code churn, and run pgindent.\n>>>>>>>\n>>>>>>> Many thanks!! I'm thinking to commit this part separately.\n>>>>>>> So I made that patch based on your patch. Attached.\n>>>>>>\n>>>>>> Thanks! It looks good to me.\n>>>>>\n>>>>> I also kept that part in a distinct commit for convenience.\n>>>>\n>>>> I also pushed 0002 patch. Thanks!\n>>>>\n>>>> I will review 0003 patch again.\n>>>\n>>> And thanks for that too :)\n>>\n>> While testing the patched pgss, I found that the patched version\n>> may track the statements that the original version doesn't.\n>> Please imagine the case where the following queries are executed,\n>> with pgss.track = top.\n>>\n>> PREPARE hoge AS SELECT * FROM t;\n>> EXPLAIN EXECUTE hoge;\n>>\n>> The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n>> in the patched version, but not in the orignal version.\n>>\n>> Is this problematic?\n> \n> Oh indeed. That's a side effect of having different the executed query\n> and the planned query being different.\n> \n> I guess the question is to chose if the top level executed query of a\n> utilty statement containing an optimisable query, should the top level\n> planner call of that optimisable statement be considered at top level\n> or not. I tend to think that's the correct behavior here, as this is\n> also what would happen if a regular DML was provided. 
What do you\n> think?\n\nTBH, not sure if that's ok yet...\n\nI'm now just wondering if both plan_nested_level and\nexec_nested_level should be incremented in pgss_ProcessUtility().\nThis is just a guess, so I need more investigation about this.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 31 Mar 2020 12:21:43 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 12:21:43PM +0900, Fujii Masao wrote:\n> \n> On 2020/03/31 3:16, Julien Rouhaud wrote:\n> > On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > \n> > > While testing the patched pgss, I found that the patched version\n> > > may track the statements that the original version doesn't.\n> > > Please imagine the case where the following queries are executed,\n> > > with pgss.track = top.\n> > > \n> > > PREPARE hoge AS SELECT * FROM t;\n> > > EXPLAIN EXECUTE hoge;\n> > > \n> > > The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n> > > in the patched version, but not in the orignal version.\n> > > \n> > > Is this problematic?\n> > \n> > Oh indeed. That's a side effect of having different the executed query\n> > and the planned query being different.\n> > \n> > I guess the question is to chose if the top level executed query of a\n> > utilty statement containing an optimisable query, should the top level\n> > planner call of that optimisable statement be considered at top level\n> > or not. I tend to think that's the correct behavior here, as this is\n> > also what would happen if a regular DML was provided. What do you\n> > think?\n> \n> TBH, not sure if that's ok yet...\n> \n> I'm now just wondering if both plan_nested_level and\n> exec_nested_level should be incremented in pgss_ProcessUtility().\n> This is just a guess, so I need more investigation about this.\n\nYeah, after a second thought I realize that my comparison was wrong. 
Allowing\n*any* top-level planner call when pgss.track = top would mean that we should\nalso consider all planner calls from queries executed for FK checks and such,\nwhich is definitely not the intended behavior.\n\nFTR with this patch such calls still don't get tracked, but only because those\nqueries don't get a queryid assigned, not because the nesting level says so.\n\nHow about simply passing (plan_nested_level + exec_nested_level) for the\npgss_enabled call in pgss_planner_hook?\n\n\n",
"msg_date": "Tue, 31 Mar 2020 08:03:07 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/03/31 15:03, Julien Rouhaud wrote:\n> On Tue, Mar 31, 2020 at 12:21:43PM +0900, Fujii Masao wrote:\n>>\n>> On 2020/03/31 3:16, Julien Rouhaud wrote:\n>>> On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>> While testing the patched pgss, I found that the patched version\n>>>> may track the statements that the original version doesn't.\n>>>> Please imagine the case where the following queries are executed,\n>>>> with pgss.track = top.\n>>>>\n>>>> PREPARE hoge AS SELECT * FROM t;\n>>>> EXPLAIN EXECUTE hoge;\n>>>>\n>>>> The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n>>>> in the patched version, but not in the orignal version.\n>>>>\n>>>> Is this problematic?\n>>>\n>>> Oh indeed. That's a side effect of having different the executed query\n>>> and the planned query being different.\n>>>\n>>> I guess the question is to chose if the top level executed query of a\n>>> utilty statement containing an optimisable query, should the top level\n>>> planner call of that optimisable statement be considered at top level\n>>> or not. I tend to think that's the correct behavior here, as this is\n>>> also what would happen if a regular DML was provided. What do you\n>>> think?\n>>\n>> TBH, not sure if that's ok yet...\n>>\n>> I'm now just wondering if both plan_nested_level and\n>> exec_nested_level should be incremented in pgss_ProcessUtility().\n>> This is just a guess, so I need more investigation about this.\n> \n> Yeah, after a second thought I realize that my comparison was wrong. Allowing\n> *any* top-level planner call when pgss.track = top would mean that we should\n> also consider all planner calls from queries executed for FK checks and such,\n> which is definitely not the intended behavior.\n\nYes. 
So, basically any planner activity that happens during\nthe execution phase of the statement is not tracked.\n\n> FTR with this patch such calls still don't get tracked, but only because those\n> query don't get a queryid assigned, not because the nesting level says so.\n> \n> How about simply passing (plan_nested_level + exec_nested_level) for\n> pgss_enabled call in pgss_planner_hook?\n\nLooks good to me! The comment about why this treatment is necessary only in\npgss_planner() should be added.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 31 Mar 2020 16:10:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 04:10:47PM +0900, Fujii Masao wrote:\n> \n> \n> On 2020/03/31 15:03, Julien Rouhaud wrote:\n> > On Tue, Mar 31, 2020 at 12:21:43PM +0900, Fujii Masao wrote:\n> > > \n> > > On 2020/03/31 3:16, Julien Rouhaud wrote:\n> > > > On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > > > \n> > > > > While testing the patched pgss, I found that the patched version\n> > > > > may track the statements that the original version doesn't.\n> > > > > Please imagine the case where the following queries are executed,\n> > > > > with pgss.track = top.\n> > > > > \n> > > > > PREPARE hoge AS SELECT * FROM t;\n> > > > > EXPLAIN EXECUTE hoge;\n> > > > > \n> > > > > The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n> > > > > in the patched version, but not in the orignal version.\n> > > > > \n> > > > > Is this problematic?\n> > > > \n> > > > Oh indeed. That's a side effect of having different the executed query\n> > > > and the planned query being different.\n> > > > \n> > > > I guess the question is to chose if the top level executed query of a\n> > > > utilty statement containing an optimisable query, should the top level\n> > > > planner call of that optimisable statement be considered at top level\n> > > > or not. I tend to think that's the correct behavior here, as this is\n> > > > also what would happen if a regular DML was provided. What do you\n> > > > think?\n> > > \n> > > TBH, not sure if that's ok yet...\n> > > \n> > > I'm now just wondering if both plan_nested_level and\n> > > exec_nested_level should be incremented in pgss_ProcessUtility().\n> > > This is just a guess, so I need more investigation about this.\n> > \n> > Yeah, after a second thought I realize that my comparison was wrong. 
Allowing\n> > *any* top-level planner call when pgss.track = top would mean that we should\n> > also consider all planner calls from queries executed for FK checks and such,\n> > which is definitely not the intended behavior.\n> \n> Yes. So, basically any planner activity that happens during\n> the execution phase of the statement is not tracked.\n> \n> > FTR with this patch such calls still don't get tracked, but only because those\n> > query don't get a queryid assigned, not because the nesting level says so.\n> > \n> > How about simply passing (plan_nested_level + exec_nested_level) for\n> > pgss_enabled call in pgss_planner_hook?\n> \n> Looks good to me! The comment about why this treatment is necessary only in\n> pgss_planner() should be added.\n\n\nYes of course. It also requires some changes in the macro to make it safe when\ncalled with an expression.\n\nv12 attached!",
"msg_date": "Tue, 31 Mar 2020 09:33:21 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On 2020/03/31 16:33, Julien Rouhaud wrote:\n> On Tue, Mar 31, 2020 at 04:10:47PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/03/31 15:03, Julien Rouhaud wrote:\n>>> On Tue, Mar 31, 2020 at 12:21:43PM +0900, Fujii Masao wrote:\n>>>>\n>>>> On 2020/03/31 3:16, Julien Rouhaud wrote:\n>>>>> On Mon, Mar 30, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>> While testing the patched pgss, I found that the patched version\n>>>>>> may track the statements that the original version doesn't.\n>>>>>> Please imagine the case where the following queries are executed,\n>>>>>> with pgss.track = top.\n>>>>>>\n>>>>>> PREPARE hoge AS SELECT * FROM t;\n>>>>>> EXPLAIN EXECUTE hoge;\n>>>>>>\n>>>>>> The pgss view returned \"PREPARE hoge AS SELECT * FROM t\"\n>>>>>> in the patched version, but not in the orignal version.\n>>>>>>\n>>>>>> Is this problematic?\n>>>>>\n>>>>> Oh indeed. That's a side effect of having different the executed query\n>>>>> and the planned query being different.\n>>>>>\n>>>>> I guess the question is to chose if the top level executed query of a\n>>>>> utilty statement containing an optimisable query, should the top level\n>>>>> planner call of that optimisable statement be considered at top level\n>>>>> or not. I tend to think that's the correct behavior here, as this is\n>>>>> also what would happen if a regular DML was provided. What do you\n>>>>> think?\n>>>>\n>>>> TBH, not sure if that's ok yet...\n>>>>\n>>>> I'm now just wondering if both plan_nested_level and\n>>>> exec_nested_level should be incremented in pgss_ProcessUtility().\n>>>> This is just a guess, so I need more investigation about this.\n>>>\n>>> Yeah, after a second thought I realize that my comparison was wrong. Allowing\n>>> *any* top-level planner call when pgss.track = top would mean that we should\n>>> also consider all planner calls from queries executed for FK checks and such,\n>>> which is definitely not the intended behavior.\n>>\n>> Yes. 
So, basically any planner activity that happens during\n>> the execution phase of the statement is not tracked.\n>>\n>>> FTR with this patch such calls still don't get tracked, but only because those\n>>> query don't get a queryid assigned, not because the nesting level says so.\n>>>\n>>> How about simply passing (plan_nested_level + exec_nested_level) for\n>>> pgss_enabled call in pgss_planner_hook?\n>>\n>> Looks good to me! The comment about why this treatment is necessary only in\n>> pgss_planner() should be added.\n> \n> \n> Yes of course. It also requires some changes in the macro to make it safe when\n> called with an expression.\n> \n> v12 attached!\n\nThanks for updating the patch! The patch looks good to me.\n\nI applied minor and cosmetic changes to the patch. Attached is\nthe updated version of the patch. Barring any objection, I'd like to\ncommit this version.\n\nBTW, the minor and cosmetic changes that I applied are, for example,\n\n- Rename pgss_planner_hook to pgss_planner for the sake of consistency.\n Other functions using hooks in pgss don't use \"_hook\" in their names either.\n- Make pgss_planner use PG_FINALLY() instead of PG_CATCH().\n- Make PGSS_NUMKIND the last value in enum pgssStoreKind.\n- Update the sample output in the document.\netc\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Wed, 1 Apr 2020 02:43:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Apr 01, 2020 at 02:43:10AM +0900, Fujii Masao wrote:\n>\n>\n> On 2020/03/31 16:33, Julien Rouhaud wrote:\n> >\n> > v12 attached!\n>\n> Thanks for updating the patch! The patch looks good to me.\n>\n> I applied minor and cosmetic changes into the patch. Attached is\n> the updated version of the patch. Barring any objection, I'd like to\n> commit this version.\n>\n> BTW, the minor and cosmetic changes that I applied are, for example,\n>\n> - Rename pgss_planner_hook to pgss_planner for the sake of consistency.\n> Other function using hook in pgss doesn't use \"_hook\" in their names, too.\n> - Make pgss_planner use PG_FINALLY() instead of PG_CATCH().\n> - Make PGSS_NUMKIND as the last value in enum pgssStoreKind.\n\n\n+1, and the PGSS_INVALID is also way better.\n\n\n> - Update the sample output in the document.\n> etc\n\n\nThanks a lot. It all looks good to me!\n\n\n",
"msg_date": "Tue, 31 Mar 2020 20:42:17 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/04/01 3:42, Julien Rouhaud wrote:\n> On Wed, Apr 01, 2020 at 02:43:10AM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/03/31 16:33, Julien Rouhaud wrote:\n>>>\n>>> v12 attached!\n>>\n>> Thanks for updating the patch! The patch looks good to me.\n>>\n>> I applied minor and cosmetic changes into the patch. Attached is\n>> the updated version of the patch. Barring any objection, I'd like to\n>> commit this version.\n>>\n>> BTW, the minor and cosmetic changes that I applied are, for example,\n>>\n>> - Rename pgss_planner_hook to pgss_planner for the sake of consistency.\n>> Other function using hook in pgss doesn't use \"_hook\" in their names, too.\n>> - Make pgss_planner use PG_FINALLY() instead of PG_CATCH().\n>> - Make PGSS_NUMKIND as the last value in enum pgssStoreKind.\n> \n> \n> +1, and the PGSS_INVALID is also way better.\n> \n> \n>> - Update the sample output in the document.\n>> etc\n> \n> \n> Thanks a lot. It all looks good to me!\n\nThanks for the check!\n\nI tried to pick up the names of authors and reviewers of this patch,\nfrom the past discussions. Then I'm thinking to write the followings\nin the commit log. Are there any other developers that should be\ncredited as author or reviewer?\n\nAuthor: Julien Rouhaud, Pascal Legrand, Thomas Munro, Fujii Masao\nReviewed-by: Sergei Kornilov, Tomas Vondra, Yoshikazu Imai, Haribabu Kommi, Tom Lane\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 1 Apr 2020 18:19:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/04/01 18:19, Fujii Masao wrote:\n> \n> \n> On 2020/04/01 3:42, Julien Rouhaud wrote:\n>> On Wed, Apr 01, 2020 at 02:43:10AM +0900, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/03/31 16:33, Julien Rouhaud wrote:\n>>>>\n>>>> v12 attached!\n>>>\n>>> Thanks for updating the patch! The patch looks good to me.\n>>>\n>>> I applied minor and cosmetic changes into the patch. Attached is\n>>> the updated version of the patch. Barring any objection, I'd like to\n>>> commit this version.\n>>>\n>>> BTW, the minor and cosmetic changes that I applied are, for example,\n>>>\n>>> - Rename pgss_planner_hook to pgss_planner for the sake of consistency.\n>>> Other function using hook in pgss doesn't use \"_hook\" in their names, too.\n>>> - Make pgss_planner use PG_FINALLY() instead of PG_CATCH().\n>>> - Make PGSS_NUMKIND as the last value in enum pgssStoreKind.\n>>\n>>\n>> +1, and the PGSS_INVALID is also way better.\n>>\n>>\n>>> - Update the sample output in the document.\n>>> etc\n>>\n>>\n>> Thanks a lot. It all looks good to me!\n\nFinally I pushed the patch!\nMany thanks for all involved in this patch!\n\nAs a remaining TODO item, I'm thinking that the document would need to\nbe improved. For example, previously the query was not stored in pgss\nwhen it failed. But, in v13, if pgss_planning is enabled, such a query is\nstored because the planning succeeds. Without the explanation about\nthat behavior in the document, I'm afraid that users will get confused.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 2 Apr 2020 11:32:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Fujii Masao-4 wrote\n> On 2020/04/01 18:19, Fujii Masao wrote:\n> \n> Finally I pushed the patch!\n> Many thanks for all involved in this patch!\n> \n> As a remaining TODO item, I'm thinking that the document would need to\n> be improved. For example, previously the query was not stored in pgss\n> when it failed. But, in v13, if pgss_planning is enabled, such a query is\n> stored because the planning succeeds. Without the explanation about\n> that behavior in the document, I'm afraid that users will get confused.\n> Thought?\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\nThank you all for this work and especially to Julian for its major\ncontribution !\n\nRegarding the TODO point: Yes I agree that it can be improved.\nMy proposal:\n\n\"Note that planning and execution statistics are updated only at their \nrespective end phase, and only for successfull operations.\nFor exemple executions counters of a long running SELECT query, \nwill be updated at the execution end, without showing any progress \nreport in the interval.\nOther exemple, if the statement is successfully planned but fails in \nthe execution phase, only its planning statistics are stored.\nThis may give uncorrelated plans vs calls informations.\"\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 2 Apr 2020 13:04:28 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n> Fujii Masao-4 wrote\n> > On 2020/04/01 18:19, Fujii Masao wrote:\n> > \n> > Finally I pushed the patch!\n> > Many thanks for all involved in this patch!\n> > \n> > As a remaining TODO item, I'm thinking that the document would need to\n> > be improved. For example, previously the query was not stored in pgss\n> > when it failed. But, in v13, if pgss_planning is enabled, such a query is\n> > stored because the planning succeeds. Without the explanation about\n> > that behavior in the document, I'm afraid that users will get confused.\n> > Thought?\n> \n> Thank you all for this work and especially to Julian for its major\n> contribution !\n\n\nThanks a lot to everyone! This was quite a long journey.\n\n\n> Regarding the TODO point: Yes I agree that it can be improved.\n> My proposal:\n> \n> \"Note that planning and execution statistics are updated only at their \n> respective end phase, and only for successfull operations.\n> For exemple executions counters of a long running SELECT query, \n> will be updated at the execution end, without showing any progress \n> report in the interval.\n> Other exemple, if the statement is successfully planned but fails in \n> the execution phase, only its planning statistics are stored.\n> This may give uncorrelated plans vs calls informations.\"\n\n\nThere are numerous reasons for lack of correlation between number of planning\nand number of execution, so I'm afraid that this will give users the false\nimpression that only failed execution can lead to that.\n\nHere's some enhancement on your proposal:\n\n\"Note that planning and execution statistics are updated only at their\nrespective end phase, and only for successful operations.\nFor example the execution counters of a long running query\nwill only be updated at the execution end, without showing any progress\nreport before that.\nSimilarly, if a statement is successfully planned but fails 
during\nthe execution phase, only its planning statistics will be displayed.\nPlease also note that the planning and execution counts aren't\nexpected to match, as the planning of a query won't always be followed by\nits execution, and vice versa.\"\n\n\n",
"msg_date": "Fri, 3 Apr 2020 09:26:28 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On 2020/04/03 16:26, Julien Rouhaud wrote:\n> On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n>> Fujii Masao-4 wrote\n>>> On 2020/04/01 18:19, Fujii Masao wrote:\n>>>\n>>> Finally I pushed the patch!\n>>> Many thanks for all involved in this patch!\n>>>\n>>> As a remaining TODO item, I'm thinking that the document would need to\n>>> be improved. For example, previously the query was not stored in pgss\n>>> when it failed. But, in v13, if pgss_planning is enabled, such a query is\n>>> stored because the planning succeeds. Without the explanation about\n>>> that behavior in the document, I'm afraid that users will get confused.\n>>> Thought?\n>>\n>> Thank you all for this work and especially to Julian for its major\n>> contribution !\n> \n> \n> Thanks a lot to everyone! This was quite a long journey.\n> \n> \n>> Regarding the TODO point: Yes I agree that it can be improved.\n>> My proposal:\n>>\n>> \"Note that planning and execution statistics are updated only at their\n>> respective end phase, and only for successfull operations.\n>> For exemple executions counters of a long running SELECT query,\n>> will be updated at the execution end, without showing any progress\n>> report in the interval.\n>> Other exemple, if the statement is successfully planned but fails in\n>> the execution phase, only its planning statistics are stored.\n>> This may give uncorrelated plans vs calls informations.\"\n\nThanks for the proposal!\n\n> There are numerous reasons for lack of correlation between number of planning\n> and number of execution, so I'm afraid that this will give users the false\n> impression that only failed execution can lead to that.\n> \n> Here's some enhancement on your proposal:\n> \n> \"Note that planning and execution statistics are updated only at their\n> respective end phase, and only for successful operations.\n> For example the execution counters of a long running query\n> will only be updated at the execution end, 
without showing any progress\n> report before that.\n\nProbably since this is not the example for explaining the relationship of\nplanning and execution stats, it's better to explain this separately or just\ndrop it?\n\nWhat about the attached patch based on your proposals?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 8 Apr 2020 17:37:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Wed, Apr 08, 2020 at 05:37:27PM +0900, Fujii Masao wrote:\n> \n> \n> On 2020/04/03 16:26, Julien Rouhaud wrote:\n> > On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n> > > Fujii Masao-4 wrote\n> > > > On 2020/04/01 18:19, Fujii Masao wrote:\n> > > > \n> > > > Finally I pushed the patch!\n> > > > Many thanks for all involved in this patch!\n> > > > \n> > > > As a remaining TODO item, I'm thinking that the document would need to\n> > > > be improved. For example, previously the query was not stored in pgss\n> > > > when it failed. But, in v13, if pgss_planning is enabled, such a query is\n> > > > stored because the planning succeeds. Without the explanation about\n> > > > that behavior in the document, I'm afraid that users will get confused.\n> > > > Thought?\n> > > \n> > > Thank you all for this work and especially to Julian for its major\n> > > contribution !\n> > \n> > \n> > Thanks a lot to everyone! This was quite a long journey.\n> > \n> > \n> > > Regarding the TODO point: Yes I agree that it can be improved.\n> > > My proposal:\n> > > \n> > > \"Note that planning and execution statistics are updated only at their\n> > > respective end phase, and only for successfull operations.\n> > > For exemple executions counters of a long running SELECT query,\n> > > will be updated at the execution end, without showing any progress\n> > > report in the interval.\n> > > Other exemple, if the statement is successfully planned but fails in\n> > > the execution phase, only its planning statistics are stored.\n> > > This may give uncorrelated plans vs calls informations.\"\n> \n> Thanks for the proposal!\n> \n> > There are numerous reasons for lack of correlation between number of planning\n> > and number of execution, so I'm afraid that this will give users the false\n> > impression that only failed execution can lead to that.\n> > \n> > Here's some enhancement on your proposal:\n> > \n> > \"Note that planning and execution statistics are 
updated only at their\n> > respective end phase, and only for successful operations.\n> > For example the execution counters of a long running query\n> > will only be updated at the execution end, without showing any progress\n> > report before that.\n> \n> Probably since this is not the example for explaining the relationship of\n> planning and execution stats, it's better to explain this separately or just\n> drop it?\n> \n> What about the attached patch based on your proposals?\n> \n\nThanks Fujii-san, it looks perfect to me!\n\n\n",
"msg_date": "Wed, 8 Apr 2020 11:31:20 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Fujii Masao-4 wrote\n> On 2020/04/03 16:26\n> [...]\n>> \n>> \"Note that planning and execution statistics are updated only at their\n>> respective end phase, and only for successful operations.\n>> For example the execution counters of a long running query\n>> will only be updated at the execution end, without showing any progress\n>> report before that.\n> \n> Probably since this is not the example for explaining the relationship of\n> planning and execution stats, it's better to explain this separately or\n> just\n> drop it?\n> \n> What about the attached patch based on your proposals?\n\n+1\nYour patch is perfect ;^>\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 8 Apr 2020 05:32:54 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/04/08 18:31, Julien Rouhaud wrote:\n> On Wed, Apr 08, 2020 at 05:37:27PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2020/04/03 16:26, Julien Rouhaud wrote:\n>>> On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n>>>> Fujii Masao-4 wrote\n>>>>> On 2020/04/01 18:19, Fujii Masao wrote:\n>>>>>\n>>>>> Finally I pushed the patch!\n>>>>> Many thanks for all involved in this patch!\n>>>>>\n>>>>> As a remaining TODO item, I'm thinking that the document would need to\n>>>>> be improved. For example, previously the query was not stored in pgss\n>>>>> when it failed. But, in v13, if pgss_planning is enabled, such a query is\n>>>>> stored because the planning succeeds. Without the explanation about\n>>>>> that behavior in the document, I'm afraid that users will get confused.\n>>>>> Thought?\n>>>>\n>>>> Thank you all for this work and especially to Julian for its major\n>>>> contribution !\n>>>\n>>>\n>>> Thanks a lot to everyone! This was quite a long journey.\n>>>\n>>>\n>>>> Regarding the TODO point: Yes I agree that it can be improved.\n>>>> My proposal:\n>>>>\n>>>> \"Note that planning and execution statistics are updated only at their\n>>>> respective end phase, and only for successfull operations.\n>>>> For exemple executions counters of a long running SELECT query,\n>>>> will be updated at the execution end, without showing any progress\n>>>> report in the interval.\n>>>> Other exemple, if the statement is successfully planned but fails in\n>>>> the execution phase, only its planning statistics are stored.\n>>>> This may give uncorrelated plans vs calls informations.\"\n>>\n>> Thanks for the proposal!\n>>\n>>> There are numerous reasons for lack of correlation between number of planning\n>>> and number of execution, so I'm afraid that this will give users the false\n>>> impression that only failed execution can lead to that.\n>>>\n>>> Here's some enhancement on your proposal:\n>>>\n>>> \"Note that planning and execution statistics 
are updated only at their\n>>> respective end phase, and only for successful operations.\n>>> For example the execution counters of a long running query\n>>> will only be updated at the execution end, without showing any progress\n>>> report before that.\n>>\n>> Probably since this is not the example for explaining the relationship of\n>> planning and execution stats, it's better to explain this separately or just\n>> drop it?\n>>\n>> What about the attached patch based on your proposals?\n>>\n> \n> Thanks Fuji-san, it looks perfect to me!\n\nThanks for the check! Pushed!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Apr 2020 12:59:43 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/04/08 21:32, legrand legrand wrote:\n> Fujii Masao-4 wrote\n>> On 2020/04/03 16:26\n>> [...]\n>>>\n>>> \"Note that planning and execution statistics are updated only at their\n>>> respective end phase, and only for successful operations.\n>>> For example the execution counters of a long running query\n>>> will only be updated at the execution end, without showing any progress\n>>> report before that.\n>>\n>> Probably since this is not the example for explaining the relationship of\n>> planning and execution stats, it's better to explain this separately or\n>> just\n>> drop it?\n>>\n>> What about the attached patch based on your proposals?\n> \n> +1\n> Your patch is perfect ;^>\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Apr 2020 13:00:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, Apr 9, 2020 at 5:59 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/08 18:31, Julien Rouhaud wrote:\n> > On Wed, Apr 08, 2020 at 05:37:27PM +0900, Fujii Masao wrote:\n> >>\n> >>\n> >> On 2020/04/03 16:26, Julien Rouhaud wrote:\n> >>> On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n> >>>> Fujii Masao-4 wrote\n> >>>>> On 2020/04/01 18:19, Fujii Masao wrote:\n> >>>>>\n> >>>>> Finally I pushed the patch!\n> >>>>> Many thanks for all involved in this patch!\n> >>>>>\n> >>>>> As a remaining TODO item, I'm thinking that the document would need to\n> >>>>> be improved. For example, previously the query was not stored in pgss\n> >>>>> when it failed. But, in v13, if pgss_planning is enabled, such a query is\n> >>>>> stored because the planning succeeds. Without the explanation about\n> >>>>> that behavior in the document, I'm afraid that users will get confused.\n> >>>>> Thought?\n> >>>>\n> >>>> Thank you all for this work and especially to Julian for its major\n> >>>> contribution !\n> >>>\n> >>>\n> >>> Thanks a lot to everyone! 
This was quite a long journey.\n> >>>\n> >>>\n> >>>> Regarding the TODO point: Yes I agree that it can be improved.\n> >>>> My proposal:\n> >>>>\n> >>>> \"Note that planning and execution statistics are updated only at their\n> >>>> respective end phase, and only for successfull operations.\n> >>>> For exemple executions counters of a long running SELECT query,\n> >>>> will be updated at the execution end, without showing any progress\n> >>>> report in the interval.\n> >>>> Other exemple, if the statement is successfully planned but fails in\n> >>>> the execution phase, only its planning statistics are stored.\n> >>>> This may give uncorrelated plans vs calls informations.\"\n> >>\n> >> Thanks for the proposal!\n> >>\n> >>> There are numerous reasons for lack of correlation between number of planning\n> >>> and number of execution, so I'm afraid that this will give users the false\n> >>> impression that only failed execution can lead to that.\n> >>>\n> >>> Here's some enhancement on your proposal:\n> >>>\n> >>> \"Note that planning and execution statistics are updated only at their\n> >>> respective end phase, and only for successful operations.\n> >>> For example the execution counters of a long running query\n> >>> will only be updated at the execution end, without showing any progress\n> >>> report before that.\n> >>\n> >> Probably since this is not the example for explaining the relationship of\n> >> planning and execution stats, it's better to explain this separately or just\n> >> drop it?\n> >>\n> >> What about the attached patch based on your proposals?\n> >>\n> >\n> > Thanks Fuji-san, it looks perfect to me!\n>\n> Thanks for the check! 
Pushed!\n\nThanks a lot Fujii-san!\n\nFor the record, the commit is available, but I didn't receive the\nusual mail, and it's also not present in the archives apparently:\nhttps://www.postgresql.org/list/pgsql-committers/since/202004090000/\n(although Amit's latest commit was delivered as expected).\n\nGiven your previous discussion with Magnus, I'm assuming that your\naddress is now allowed to post for a year. I'm not sure what went\nwrong here, so I'm adding Magnus in Cc.\n\n\n",
"msg_date": "Thu, 9 Apr 2020 15:31:31 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/04/09 22:31, Julien Rouhaud wrote:\n> On Thu, Apr 9, 2020 at 5:59 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/04/08 18:31, Julien Rouhaud wrote:\n>>> On Wed, Apr 08, 2020 at 05:37:27PM +0900, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/04/03 16:26, Julien Rouhaud wrote:\n>>>>> On Thu, Apr 02, 2020 at 01:04:28PM -0700, legrand legrand wrote:\n>>>>>> Fujii Masao-4 wrote\n>>>>>>> On 2020/04/01 18:19, Fujii Masao wrote:\n>>>>>>>\n>>>>>>> Finally I pushed the patch!\n>>>>>>> Many thanks for all involved in this patch!\n>>>>>>>\n>>>>>>> As a remaining TODO item, I'm thinking that the document would need to\n>>>>>>> be improved. For example, previously the query was not stored in pgss\n>>>>>>> when it failed. But, in v13, if pgss_planning is enabled, such a query is\n>>>>>>> stored because the planning succeeds. Without the explanation about\n>>>>>>> that behavior in the document, I'm afraid that users will get confused.\n>>>>>>> Thought?\n>>>>>>\n>>>>>> Thank you all for this work and especially to Julian for its major\n>>>>>> contribution !\n>>>>>\n>>>>>\n>>>>> Thanks a lot to everyone! 
This was quite a long journey.\n>>>>>\n>>>>>\n>>>>>> Regarding the TODO point: Yes I agree that it can be improved.\n>>>>>> My proposal:\n>>>>>>\n>>>>>> \"Note that planning and execution statistics are updated only at their\n>>>>>> respective end phase, and only for successfull operations.\n>>>>>> For exemple executions counters of a long running SELECT query,\n>>>>>> will be updated at the execution end, without showing any progress\n>>>>>> report in the interval.\n>>>>>> Other exemple, if the statement is successfully planned but fails in\n>>>>>> the execution phase, only its planning statistics are stored.\n>>>>>> This may give uncorrelated plans vs calls informations.\"\n>>>>\n>>>> Thanks for the proposal!\n>>>>\n>>>>> There are numerous reasons for lack of correlation between number of planning\n>>>>> and number of execution, so I'm afraid that this will give users the false\n>>>>> impression that only failed execution can lead to that.\n>>>>>\n>>>>> Here's some enhancement on your proposal:\n>>>>>\n>>>>> \"Note that planning and execution statistics are updated only at their\n>>>>> respective end phase, and only for successful operations.\n>>>>> For example the execution counters of a long running query\n>>>>> will only be updated at the execution end, without showing any progress\n>>>>> report before that.\n>>>>\n>>>> Probably since this is not the example for explaining the relationship of\n>>>> planning and execution stats, it's better to explain this separately or just\n>>>> drop it?\n>>>>\n>>>> What about the attached patch based on your proposals?\n>>>>\n>>>\n>>> Thanks Fuji-san, it looks perfect to me!\n>>\n>> Thanks for the check! 
Pushed!\n> \n> Thanks a lot Fuji-san!\n> \n> For the record, the commit is available, but I didn't receive the\n> usual mail, and it's also not present in the archives apparently:\n> https://www.postgresql.org/list/pgsql-committers/since/202004090000/\n> (although Amit's latest commit was delivered as expected).\n\nYes.\n\n> Given your previous discussion with Magnus, I'm assuming that your\n> address is now allowed to post for a year. I'm not sure what went\n> wrong here, so I'm adding Magnus in Cc.\n\nThanks! I also reported the issue in pgsql-www.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Apr 2020 23:02:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Thanks for the excellent extension. I want to add 5 more fields to satisfy\nthe following requirements.\n\nint subplan; /* No. of subplan in this query */\nint subquery; /* No. of subquery */\nint joincnt; /* How many relations are joined */\nbool hasagg; /* if we have agg function in this query */\nbool hasgroup; /* has group clause */\n\n\n1. Usually I want to check total_exec_time / rows to see if the query is\n missing an index; however, the aggregation/groupby case makes this rule\n not work, so hasagg/hasgroup should be a good rule to filter out these\n queries.\n\n2. subplan is also an important clue to find the queries to tune. When we\n check the slow queries with pg_stat_statements, such information may be\n helpful as well.\n\n3. As for subquery / joincnt, it is actually just helpful for optimizer\n developers to understand which query characteristics run most often; it\n doesn't help much for users.\n\n\nThe attached is a PoC that is far from perfect since: 1) it maintains a\nper-backend global variable query_character which is only used in the\npg_stat_statements extension; 2) the 5 fields are impossible to change no\nmatter how many times the query runs, so they can't be treated as Counters\nin nature. However I don't think the above 2 will cause big issues.\n\nI added the columns to V1_8 rather than adding a new version; this can be\nchanged in the final patch.\n\nAny suggestions?\n\n\nBest Regards\nAndy Fan",
"msg_date": "Tue, 19 May 2020 10:28:37 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Thanks for the excellent extension. I want to add 5 more fields to satisfy the\n> following requirements.\n>\n> int subplan; /* No. of subplan in this query */\n> int subquery; /* No. of subquery */\n> int joincnt; /* How many relations are joined */\n> bool hasagg; /* if we have agg function in this query */\n> bool hasgroup; /* has group clause */\n>\n>\n> 1. Usually I want to check total_exec_time / rows to see if the query is missing\n> index, however aggregation/groupby case makes this rule doesn't work. so\n> hasagg/hasgroup should be a good rule to filter out these queries.\n>\n> 2. subplan is also a important clue to find out the query to turning. when we\n> check the slow queries with pg_stat_statements, such information maybe\n> helpful as well.\n>\n> 3. As for subquery / joincnt, actually it is just helpful for optimizer\n> developer to understand the query character is running most, it doesn't help\n> much for user.\n>\n>\n> The attached is a PoC, that is far from perfect since 1). It maintain a\n> per-backend global variable query_character which is only used in\n> pg_stat_statements extension. 2). The 5 fields is impossible to change no\n> matter how many times it runs, so it can't be treat as Counter in nature.\n> However I don't think the above 2 will cause big issues.\n>\n> I added the columns to V1_8 rather than adding a new version. this can be\n> changed at final patch.\n>\n> Any suggestions?\n\nMost of those fields can be computed using the raw sql satements. Why\nnot adding functions like query_has_agg(querytext) to get the\ninformation from pgss stored query text instead?\n\n\n",
"msg_date": "Thu, 21 May 2020 08:49:53 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> Thanks for the excellent extension. I want to add 5 more fields to satisfy the\n>> following requirements.\n>>\n>> int subplan; /* No. of subplan in this query */\n>> int subquery; /* No. of subquery */\n>> int joincnt; /* How many relations are joined */\n>> bool hasagg; /* if we have agg function in this query */\n>> bool hasgroup; /* has group clause */\n>\n> Most of those fields can be computed using the raw sql satements. Why\n> not adding functions like query_has_agg(querytext) to get the\n> information from pgss stored query text instead?\n\nYeah I personally find concepts related only to the query string\nitself not something that needs to be tied to pg_stat_statements.\nWhile reading about those five new fields, I am also wondering how\nthis stuff would work with CTEs. Particularly, should the hasagg or\nhasgroup flags be set only if the most outer query satisfies a\ncondition? What if an inner query satisfies a condition but not an\nouter query? Should joincnt just be the sum of all the joins done in\nall queries, including subqueries?\n--\nMichael",
"msg_date": "Thu, 21 May 2020 16:17:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, May 21, 2020 at 09:17, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> > On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >> Thanks for the excellent extension. I want to add 5 more fields to\n> satisfy the\n> >> following requirements.\n> >>\n> >> int subplan; /* No. of subplan in this query */\n> >> int subquery; /* No. of subquery */\n> >> int joincnt; /* How many relations are joined */\n> >> bool hasagg; /* if we have agg function in this query */\n> >> bool hasgroup; /* has group clause */\n> >\n> > Most of those fields can be computed using the raw sql satements. Why\n> > not adding functions like query_has_agg(querytext) to get the\n> > information from pgss stored query text instead?\n>\n> Yeah I personally find concepts related only to the query string\n> itself not something that needs to be tied to pg_stat_statements.\n> While reading about those five new fields, I am also wondering how\n> this stuff would work with CTEs. Particularly, should the hasagg or\n> hasgroup flags be set only if the most outer query satisfies a\n> condition? What if an inner query satisfies a condition but not an\n> outer query? Should joincnt just be the sum of all the joins done in\n> all queries, including subqueries?\n>\n\nIndeed, CTEs will bring additional concerns about the field semantics.\nThat's another good reason to go with external functions, so you can add\nextra parameters for that if needed.",
"msg_date": "Thu, 21 May 2020 09:49:19 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, May 21, 2020 at 3:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> > On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >> Thanks for the excellent extension. I want to add 5 more fields to\n> satisfy the\n> >> following requirements.\n> >>\n> >> int subplan; /* No. of subplan in this query */\n> >> int subquery; /* No. of subquery */\n> >> int joincnt; /* How many relations are joined */\n> >> bool hasagg; /* if we have agg function in this query */\n> >> bool hasgroup; /* has group clause */\n> >\n> > Most of those fields can be computed using the raw sql satements. Why\n> > not adding functions like query_has_agg(querytext) to get the\n> > information from pgss stored query text instead?\n>\n> Yeah I personally find concepts related only to the query string\n> itself not something that needs to be tied to pg_stat_statements.\n> While reading about those five new fields, I am also wondering how\n> this stuff would work with CTEs. Particularly, should the hasagg or\n> hasgroup flags be set only if the most outer query satisfies a\n> condition? What if an inner query satisfies a condition but not an\n\nouter query? Should joincnt just be the sum of all the joins done in\n> all queries, including subqueries?\n>\n\n\nThe semantics is for overall query not for most outer query. see codes\nlike this for example:\n\nquery_characters.hasagg |= parse->hasAggs;\nquery_characters.hasgroup |= parse->groupClause != NIL;\n\n\n> Most of those fields can be computed using the raw sql satements. 
Why\n> not adding functions like query_has_agg(querytext) to get the\n> information from pgss stored query text instead?\n\nThat is mainly because I don't want to reparse the query again.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 22 May 2020 14:02:52 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Thu, May 21, 2020 at 3:49 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, May 21, 2020 at 09:17, Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n>> > On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>> >> Thanks for the excellent extension. I want to add 5 more fields to\n>> satisfy the\n>> >> following requirements.\n>> >>\n>> >> int subplan; /* No. of subplan in this query */\n>> >> int subquery; /* No. of subquery */\n>> >> int joincnt; /* How many relations are joined */\n>> >> bool hasagg; /* if we have agg function in this query */\n>> >> bool hasgroup; /* has group clause */\n>> >\n>> > Most of those fields can be computed using the raw sql satements. Why\n>> > not adding functions like query_has_agg(querytext) to get the\n>> > information from pgss stored query text instead?\n>>\n>> Yeah I personally find concepts related only to the query string\n>> itself not something that needs to be tied to pg_stat_statements.\n>> ...\n>>\n>\n> Indeed cte will bring additional concerns about the fields semantics.\n> That's another good reason to go with external functions so you can add\n> extra parameters for that if needed.\n>\n>>\nThere is something more we can't get from the query string easily, like:\n1. views involved. 2. subqueries that are pulled up so that no subquery\nactually remains. 3. sublinks that are pulled up or become an InitPlan\nrather than a SubPlan.\n4. joins that are removed by remove_useless_joins.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 22 May 2020 14:10:29 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2020/05/22 15:10, Andy Fan wrote:\n> \n> \n> On Thu, May 21, 2020 at 3:49 PM Julien Rouhaud <rjuju123@gmail.com <mailto:rjuju123@gmail.com>> wrote:\n> \n> Le jeu. 21 mai 2020 à 09:17, Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> a écrit :\n> \n> On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> > On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com <mailto:zhihui.fan1213@gmail.com>> wrote:\n> >> Thanks for the excellent extension. I want to add 5 more fields to satisfy the\n> >> following requirements.\n> >>\n> >> int subplan; /* No. of subplan in this query */\n> >> int subquery; /* No. of subquery */\n> >> int joincnt; /* How many relations are joined */\n> >> bool hasagg; /* if we have agg function in this query */\n> >> bool hasgroup; /* has group clause */\n> >\n> > Most of those fields can be computed using the raw sql satements. Why\n> > not adding functions like query_has_agg(querytext) to get the\n> > information from pgss stored query text instead?\n> \n> Yeah I personally find concepts related only to the query string\n> itself not something that needs to be tied to pg_stat_statements.\n> ...\n> \n> \n> Indeed cte will bring additional concerns about the fields semantics. That's another good reason to go with external functions so you can add extra parameters for that if needed.\n> \n> \n> There are something more we can't get from query string easily. like:\n> 1. view involved. 2. subquery are pulled up so there is not subquery\n> indeed. 3. sublink are pull-up or become as an InitPlan rather than subPlan.\n> 4. 
joins are removed by remove_useless_joins.\n\nIf we can store the plan for each statement, e.g., like pg_store_plans\nextension [1] does, rather than such partial information, which would\nbe enough for your cases?\n\nRegards,\n\n[1]\nhttp://pgstoreplans.osdn.jp/pg_store_plans.html\nhttps://github.com/ossc-db/pg_store_plans\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 22 May 2020 22:51:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, May 22, 2020 at 3:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/05/22 15:10, Andy Fan wrote:\n> >\n> >\n> > On Thu, May 21, 2020 at 3:49 PM Julien Rouhaud <rjuju123@gmail.com <mailto:rjuju123@gmail.com>> wrote:\n> >\n> > Le jeu. 21 mai 2020 à 09:17, Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> a écrit :\n> >\n> > On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> > > On Tue, May 19, 2020 at 4:29 AM Andy Fan <zhihui.fan1213@gmail.com <mailto:zhihui.fan1213@gmail.com>> wrote:\n> > >> Thanks for the excellent extension. I want to add 5 more fields to satisfy the\n> > >> following requirements.\n> > >>\n> > >> int subplan; /* No. of subplan in this query */\n> > >> int subquery; /* No. of subquery */\n> > >> int joincnt; /* How many relations are joined */\n> > >> bool hasagg; /* if we have agg function in this query */\n> > >> bool hasgroup; /* has group clause */\n> > >\n> > > Most of those fields can be computed using the raw sql satements. Why\n> > > not adding functions like query_has_agg(querytext) to get the\n> > > information from pgss stored query text instead?\n> >\n> > Yeah I personally find concepts related only to the query string\n> > itself not something that needs to be tied to pg_stat_statements.\n> > ...\n> >\n> >\n> > Indeed cte will bring additional concerns about the fields semantics. That's another good reason to go with external functions so you can add extra parameters for that if needed.\n> >\n> >\n> > There are something more we can't get from query string easily. like:\n> > 1. view involved. 2. subquery are pulled up so there is not subquery\n> > indeed. 3. sublink are pull-up or become as an InitPlan rather than subPlan.\n> > 4. 
joins are removed by remove_useless_joins.\n>\n> If we can store the plan for each statement, e.g., like pg_store_plans\n> extension [1] does, rather than such partial information, which would\n> be enough for your cases?\n\nThat'd definitely address way more use cases. Do you know if some\nbenchmark were done to see how much overhead such an extension adds?\n\n\n",
"msg_date": "Fri, 22 May 2020 18:48:19 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": ">> If we can store the plan for each statement, e.g., like pg_store_plans\n>> extension [1] does, rather than such partial information, which would\n>> be enough for your cases?\n\n> That'd definitely address way more use cases. Do you know if some\n> benchmark were done to see how much overhead such an extension adds?\n\nHi Julien,\nDid you asked about how overhead Auto Explain adds ?\n\nThe only extension that was proposing to store plans with a decent planid \ncalculation was pg_stat_plans that is not compatible any more with recent \npg versions for years.\n\nWe all know here that pg_store_plans, pg_show_plans, (my) pg_stat_sql_plans\nuse ExplainPrintPlan through Executor Hook, and that Explain is slow ...\n\nExplain is slow because it was not designed for performances:\n1/ colname_is_unique\nsee\nhttps://www.postgresql-archive.org/Re-Explain-is-slow-with-tables-having-many-columns-td6047284.html\n\n2/ hash_create from set_rtable_names\nLook with perf top about\n do $$ declare i int; begin for i in 1..1000000 loop execute 'explain\nselect 1'; end loop end; $$;\n\nI may propose a \"minimal\" explain that only display explain's backbone and\nis much faster\nsee\nhttps://github.com/legrandlegrand/pg_stat_sql_plans/blob/perf-explain/pgssp_explain.c\n \n3/ All those extensions rebuild the explain output even with cached plan\nqueries ...\n a way to optimize this would be to build a planid during planning (using\nassociated hook)\n\n4/ All thoses extensions try to rebuild the explain plan even for trivial\nqueries/plans \nlike \"select 1\" or \" insert into t values (,,,)\" and that's not great for\nhigh transactional \napplications ...\n\nSo yes, pg_store_plans is one of the short term answers to Andy Fan needs, \nthe answer for the long term would be to help extensions to build planid and\nstore plans, \nby **adding a planid field in plannedstmt memory structure ** and/or \noptimizing explain command;o)\n\nRegards\nPAscal\n\n\n\n--\nSent from: 
https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 22 May 2020 12:27:31 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, May 22, 2020 at 9:27 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> >> If we can store the plan for each statement, e.g., like pg_store_plans\n> >> extension [1] does, rather than such partial information, which would\n> >> be enough for your cases?\n>\n> > That'd definitely address way more use cases. Do you know if some\n> > benchmark were done to see how much overhead such an extension adds?\n>\n> Hi Julien,\n> Did you asked about how overhead Auto Explain adds ?\n\nWell, yes but on the other hand auto_explain is by design definitely\nnot something intended to trace all queries in an OLTP environment,\nbut rather configured to catch only some long running queries, so in\nsuch cases the overhead is quite negligible.\n\n> The only extension that was proposing to store plans with a decent planid\n> calculation was pg_stat_plans that is not compatible any more with recent\n> pg versions for years.\n\nAh I see. AFAICT it's mainly missing the new node changes, but the\napproach should otherwise still work smoothly.\n\nDid you do some benchmark to compare this extension with the other\nalternatives? 
Assuming that there's postgres version compatible with\nall the extensions of course.\n\n> We all know here that pg_store_plans, pg_show_plans, (my) pg_stat_sql_plans\n> use ExplainPrintPlan through Executor Hook, and that Explain is slow ...\n>\n> Explain is slow because it was not designed for performances:\n> 1/ colname_is_unique\n> see\n> https://www.postgresql-archive.org/Re-Explain-is-slow-with-tables-having-many-columns-td6047284.html\n>\n> 2/ hash_create from set_rtable_names\n> Look with perf top about\n> do $$ declare i int; begin for i in 1..1000000 loop execute 'explain\n> select 1'; end loop end; $$;\n>\n> I may propose a \"minimal\" explain that only display explain's backbone and\n> is much faster\n> see\n> https://github.com/legrandlegrand/pg_stat_sql_plans/blob/perf-explain/pgssp_explain.c\n>\n> 3/ All those extensions rebuild the explain output even with cached plan\n> queries ...\n> a way to optimize this would be to build a planid during planning (using\n> associated hook)\n>\n> 4/ All thoses extensions try to rebuild the explain plan even for trivial\n> queries/plans\n> like \"select 1\" or \" insert into t values (,,,)\" and that's not great for\n> high transactional\n> applications ...\n>\n> So yes, pg_store_plans is one of the short term answers to Andy Fan needs,\n> the answer for the long term would be to help extensions to build planid and\n> store plans,\n> by **adding a planid field in plannedstmt memory structure ** and/or\n> optimizing explain command;o)\n\nI'd be in favor of adding a planid and using the same approach as\npg_store_plans.\n\n\n",
"msg_date": "Sat, 23 May 2020 08:33:32 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Fri, May 22, 2020 at 9:51 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/05/22 15:10, Andy Fan wrote:\n> >\n> >\n> > On Thu, May 21, 2020 at 3:49 PM Julien Rouhaud <rjuju123@gmail.com\n> <mailto:rjuju123@gmail.com>> wrote:\n> >\n> > Le jeu. 21 mai 2020 à 09:17, Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> a écrit :\n> >\n> > On Thu, May 21, 2020 at 08:49:53AM +0200, Julien Rouhaud wrote:\n> > > On Tue, May 19, 2020 at 4:29 AM Andy Fan <\n> zhihui.fan1213@gmail.com <mailto:zhihui.fan1213@gmail.com>> wrote:\n> > >> Thanks for the excellent extension. I want to add 5 more\n> fields to satisfy the\n> > >> following requirements.\n> > >>\n> > >> int subplan; /* No. of subplan in this query */\n> > >> int subquery; /* No. of subquery */\n> > >> int joincnt; /* How many relations are joined */\n> > >> bool hasagg; /* if we have agg function in this query */\n> > >> bool hasgroup; /* has group clause */\n> > >\n> > > Most of those fields can be computed using the raw sql\n> satements. Why\n> > > not adding functions like query_has_agg(querytext) to get the\n> > > information from pgss stored query text instead?\n> >\n> > Yeah I personally find concepts related only to the query string\n> > itself not something that needs to be tied to pg_stat_statements.\n> > ...\n> >\n> >\n> > Indeed cte will bring additional concerns about the fields\n> semantics. That's another good reason to go with external functions so you\n> can add extra parameters for that if needed.\n> >\n> >\n> > There are something more we can't get from query string easily. like:\n> > 1. view involved. 2. subquery are pulled up so there is not subquery\n> > indeed. 3. sublink are pull-up or become as an InitPlan rather than\n> subPlan.\n> > 4. 
joins are removed by remove_useless_joins.\n>\n> If we can store the plan for each statement, e.g., like pg_store_plans\n> extension [1] does, rather than such partial information, which would\n> be enough for your cases?\n>\n> That would be helpful if I can search the interested data from it. Oracle\nhas\nv$sql_plan, where every node in the plan has its own record, so it is easy\nto search.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 26 May 2020 19:49:06 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "[ blast from the past department ]\n\nFujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Finally I pushed the patch!\n> Many thanks for all involved in this patch!\n\nIt turns out that the regression test outputs from this patch are\nunstable under debug_discard_caches (nee CLOBBER_CACHE_ALWAYS).\nYou can easily check this in HEAD or v14, with something along\nthe lines of\n\n$ cd ~/pgsql/contrib/pg_stat_statements\n$ echo \"debug_discard_caches = 1\" >/tmp/temp_config\n$ TEMP_CONFIG=/tmp/temp_config make check\n\nand what you will get is a diff like this:\n\n SELECT query, plans, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n query | plans | calls | rows \n...\n- PREPARE prep1 AS SELECT COUNT(*) FROM test | 2 | 4 | 4\n+ PREPARE prep1 AS SELECT COUNT(*) FROM test | 4 | 4 | 4\n\nThe reason we didn't detect this long since is that the buildfarm\nclient script fails to run \"make check\" for contrib modules that\nare marked NO_INSTALLCHECK, so that pg_stat_statements (among\nothers) has received precisely zero buildfarm testing. Buildfarm\nmember sifaka is running an unreleased version of the script that\nfixes that oversight, and when I experimented with turning on\ndebug_discard_caches, I got this failure, as shown at [1].\n\nThe cause of the failure of course is that cache clobbering includes\nplan cache clobbering, so that the prepared statement's plan is\nremade each time it's used, not only twice as the test expects.\nHowever, remembering that cache flushes can happen for other reasons,\nit's my guess that this test case would prove unstable in the buildfarm\neven without considering the CLOBBER_CACHE_ALWAYS members. For example,\na background autovacuum hitting the \"test\" table at just the right time\nwould result in extra planning. We haven't seen that because the\nbuildfarm's not running this test, but that's about to change.\n\nSo AFAICS this test is inherently unstable and there is no code bug\nto be fixed. 
We could drop the \"plans\" column from this query, or\nprint something approximate like \"plans > 0 AND plans <= calls\".\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2021-07-24%2023%3A53%3A52\n\n\n",
"msg_date": "Sun, 25 Jul 2021 12:03:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Jul 25, 2021 at 12:03:25PM -0400, Tom Lane wrote:\n\n> The cause of the failure of course is that cache clobbering includes\n> plan cache clobbering, so that the prepared statement's plan is\n> remade each time it's used, not only twice as the test expects.\n> However, remembering that cache flushes can happen for other reasons,\n> it's my guess that this test case would prove unstable in the buildfarm\n> even without considering the CLOBBER_CACHE_ALWAYS members. For example,\n> a background autovacuum hitting the \"test\" table at just the right time\n> would result in extra planning. We haven't seen that because the\n> buildfarm's not running this test, but that's about to change.\n\nIndeed.\n\n> So AFAICS this test is inherently unstable and there is no code bug\n> to be fixed. We could drop the \"plans\" column from this query, or\n> print something approximate like \"plans > 0 AND plans <= calls\".\n> Thoughts?\n\nI think we should go with the latter. Checking for a legit value, even if it's\na bit imprecise is still better than nothing.\n\nWould it be worth to split the query for the prepared statement row vs the rest\nto keep the full \"plans\" coverage when possible?\n\n\n",
"msg_date": "Mon, 26 Jul 2021 00:36:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Jul 25, 2021 at 12:03:25PM -0400, Tom Lane wrote:\n>> So AFAICS this test is inherently unstable and there is no code bug\n>> to be fixed. We could drop the \"plans\" column from this query, or\n>> print something approximate like \"plans > 0 AND plans <= calls\".\n>> Thoughts?\n\n> I think we should go with the latter. Checking for a legit value, even if it's\n> a bit imprecise is still better than nothing.\n\n> Would it be worth to split the query for the prepared statement row vs the rest\n> to keep the full \"plans\" coverage when possible?\n\n+1, the same thought occurred to me later. Also, if we're making\nit specific to the one PREPARE example, we could get away with\nchecking \"plans >= 2 AND plans <= calls\", with a comment like\n\"we expect at least one replan event, but there could be more\".\n\nDo you want to prepare a patch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 12:59:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Le lun. 26 juil. 2021 à 00:59, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sun, Jul 25, 2021 at 12:03:25PM -0400, Tom Lane wrote:\n>\n\n> > Would it be worth to split the query for the prepared statement row vs\n> the rest\n> > to keep the full \"plans\" coverage when possible?\n>\n> +1, the same thought occurred to me later. Also, if we're making\n> it specific to the one PREPARE example, we could get away with\n> checking \"plans >= 2 AND plans <= calls\", with a comment like\n> \"we expect at least one replan event, but there could be more\".\n\n\n> Do you want to prepare a patch?\n>\n\nSure, I will work on that tomorrow!",
"msg_date": "Mon, 26 Jul 2021 01:08:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\nOn 7/25/21 12:03 PM, Tom Lane wrote:\n>\n> So AFAICS this test is inherently unstable and there is no code bug\n> to be fixed. We could drop the \"plans\" column from this query, or\n> print something approximate like \"plans > 0 AND plans <= calls\".\n> Thoughts?\n>\n\nIs that likely to tell us anything very useful? I suppose it's really\njust a check against insane values. Since the test is unstable it's hard\nto do more than that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 25 Jul 2021 18:25:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 7/25/21 12:03 PM, Tom Lane wrote:\n>> So AFAICS this test is inherently unstable and there is no code bug\n>> to be fixed. We could drop the \"plans\" column from this query, or\n>> print something approximate like \"plans > 0 AND plans <= calls\".\n>> Thoughts?\n\n> Is that likely to tell us anything very useful?\n\nThe variant suggested downthread (\"plans >= 2 AND plans <= calls\" for the\nPREPARE entry only) seems like it's still reasonably useful. At least it\ncan verify that a replan has occurred and been counted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 18:46:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 01:08:08AM +0800, Julien Rouhaud wrote:\n> Le lun. 26 juil. 2021 à 00:59, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > On Sun, Jul 25, 2021 at 12:03:25PM -0400, Tom Lane wrote:\n> >\n> \n> > > Would it be worth to split the query for the prepared statement row vs\n> > the rest\n> > > to keep the full \"plans\" coverage when possible?\n> >\n> > +1, the same thought occurred to me later. Also, if we're making\n> > it specific to the one PREPARE example, we could get away with\n> > checking \"plans >= 2 AND plans <= calls\", with a comment like\n> > \"we expect at least one replan event, but there could be more\".\n> \n> \n> > Do you want to prepare a patch?\n> >\n> \n> Sure, I will work on that tomorrow!\n\nI attach a patch that splits the test and add a comment explaining the\nboundaries for the new query.\n\nChecked with and without forced invalidations.",
"msg_date": "Mon, 26 Jul 2021 09:36:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> I attach a patch that splits the test and add a comment explaining the\n> boundaries for the new query.\n> Checked with and without forced invalidations.\n\nPushed with a little cosmetic fooling-about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Jul 2021 23:26:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "On Sun, Jul 25, 2021 at 11:26:02PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > I attach a patch that splits the test and add a comment explaining the\n> > boundaries for the new query.\n> > Checked with and without forced invalidations.\n> \n> Pushed with a little cosmetic fooling-about.\n\nThanks!\n\n\n",
"msg_date": "Mon, 26 Jul 2021 11:32:20 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
},
{
"msg_contents": "\n\nOn 2021/07/26 12:32, Julien Rouhaud wrote:\n> On Sun, Jul 25, 2021 at 11:26:02PM -0400, Tom Lane wrote:\n>> Julien Rouhaud <rjuju123@gmail.com> writes:\n>>> I attach a patch that splits the test and add a comment explaining the\n>>> boundaries for the new query.\n>>> Checked with and without forced invalidations.\n>>\n>> Pushed with a little cosmetic fooling-about.\n> \n> Thanks!\n\nThanks a lot!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 26 Jul 2021 12:35:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Planning counters in pg_stat_statements (using pgss_store)"
}
]
[
{
"msg_contents": "tl;dr: I'd like to teach btrees to support returning ordered results\nwhere the ordering is only a suffix of the index keys and the query\nincludes a scalar array op qual on the prefix key of the index.\n\nSuppose we have the following schema:\n\nCREATE TABLE foos(bar_fk integer, created_at timestamp);\nCREATE INDEX index_foos_on_bar_fk_and_created_at ON foos(bar_fk, created_at);\n\nand seed it with the following:\n\nINSERT INTO foos (bar_fk, created_at)\nSELECT i % 1000, now() - (random() * '5 years'::interval)\nFROM generate_series(1, 500000) t(i);\n\nthen execute the following query:\n\nSELECT *\nFROM foos\nWHERE bar_fk IN (1, 2, 3)\nORDER BY created_at\nLIMIT 50;\n\ncurrently we get a query plan (I've disabled bitmap scans for this test) like:\n\n Limit (cost=5197.26..5197.38 rows=50 width=12) (actual\ntime=2.212..2.222 rows=50 loops=1)\n -> Sort (cost=5197.26..5201.40 rows=1657 width=12) (actual\ntime=2.211..2.217 rows=50 loops=1)\n Sort Key: created_at\n Sort Method: top-N heapsort Memory: 27kB\n -> Index Only Scan using index_foos_on_bar_fk_and_created_at\non foos (cost=0.42..5142.21 rows=1657 width=12) (actual\ntime=0.025..1.736 rows=1500 loops=1)\n Index Cond: (bar_fk = ANY ('{1,2,3}'::integer[]))\n Heap Fetches: 1500\n Planning time: 0.137 ms\n Execution time: 2.255 ms\n\nNote that the index scan (or bitmap scan) nodes return all 1500 rows\nmatching `bar_fk IN (1,2,3)`. After all rows are returned, that total\nset is ordered, and finally the LIMIT is applied. While only 50 rows\nwere requested, 30x that were fetched from the heap.\n\nI believe it is possible to use the index\n`index_foos_on_bar_fk_and_created_at` to fulfill both the `bar_fk IN\n(1,2,3)` qualifier and (at least partially; more on that later) the\nordering `ORDER BY created_at` while fetching fewer than all rows\nmatching the qualifier.\n\nAreas of extension: (given index `(a, b, c)`) include `a = 1 and b in\n(...) order by c` and `a in (...) 
and b = 1 order by c` (and further\nsimilar derivations with increasing numbers of equality quals).\n\nNote: Another (loosely) related problem is that sorting can't\ncurrently take advantage of cases where an index provides a partial\n(prefix of requested pathkeys) ordering.\n\nProposal 1:\n\nGeneral idea: teach btrees to apply LIMIT internally so that we bound\nthe number of rows fetched and sorted to the <number of array op\nelements> * <limit>.\n\nPros:\n\n- Presumably simpler to implement.\n- Ought to be faster (then master) in virtually all cases (or at least\npenalty in worst case is extremely small).\n\nCons:\n\n- Doesn't capture all of the theoretical value. For example, for array\nof size 100, limit 25, and 100 values per array key, the current code\nfetches 10,000 values, this proposal would fetch 2,500, and the best\ncase is 25. In this sample scenario we fetch 25% of the original\ntuples, but we also fetch 100x the theoretical minimum required.\n- Index scan node has to learn about limit (this feels pretty dirty).\n- Still needs a sort node.\n\nProposal 2:\n\nGeneral idea: find the \"first\" (or last depending on sort order) index\ntuple for each array op element. Sort those index tuples by the suffix\nkeys. 
Return tuples from the first of these sorted index tuple\nlocations until we find one that's past the next ordered index tuple.\nContinue this round-robin approach until we exhaust all tuples.\n\nPros:\n\n- Best case is significantly improved over proposal 1.\n- Always fetches the minimal number of tuples required by the limit.\n- When ordered values are not evenly distributed should be even faster\n(just continue to pull tuples for array value with lowest order\nvalue).\n- Index scan node remains unaware of limit.\n- Doesn't need a sort node.\n\nCons:\n\n- Presumably more difficult to implement.\n- Degenerate cases: when ordered values are evenly distributed we may\nhave to re-search from the top of the tree for each tuple (so worst\ncase is when few tuples match within each prefix).\n\n---\n\nQuestions:\n\n- Do we need a new index access method for this? Or do we shoehorn it\ninto the existing index scan. (Perhaps answer depends on which\nstrategy chosen?)\n- Similarly do we want a new scan node type? (This brings up the same\nkinds of questions in the skip scan discussion about multiplying node\ntypes.)\n- Is holding a pin on multiple index pages acceptable?\n- Or do we avoid that at the cost of most frequent searches from the\ntop of the tree?\n- If we have to search from the top of the tree, then does the \"many\nduplicates\" case become degenerate (to put it another way, how do we\nsearch to proper next tuple in a non-unique index)?\n\nStatus:\nI've begun work on proposal 2 as I believe it shows the most potential\ngain (though it also has higher complexity). I don't have a patch in a\nstate worth showing yet, but I wanted to start the discussion on the\ndesign considerations while I continue work on the patch.\n\n- James Coleman\n\n",
"msg_date": "Sat, 29 Dec 2018 19:00:25 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using Btree to Provide Sorting on Suffix Keys with LIMIT"
},
{
"msg_contents": "On Sun, 30 Dec 2018 at 13:00, James Coleman <jtc331@gmail.com> wrote:\n> Note that the index scan (or bitmap scan) nodes return all 1500 rows\n> matching `bar_fk IN (1,2,3)`. After all rows are returned, that total\n> set is ordered, and finally the LIMIT is applied. While only 50 rows\n> were requested, 30x that were fetched from the heap.\n>\n> I believe it is possible to use the index\n> `index_foos_on_bar_fk_and_created_at` to fulfill both the `bar_fk IN\n> (1,2,3)` qualifier and (at least partially; more on that later) the\n> ordering `ORDER BY created_at` while fetching fewer than all rows\n> matching the qualifier.\n>\n> Areas of extension: (given index `(a, b, c)`) include `a = 1 and b in\n> (...) order by c` and `a in (...) and b = 1 order by c` (and further\n> similar derivations with increasing numbers of equality quals).\n\nI don't quite understand this the above paragraph, but I assume this\nwould be possible to do with some new index am routine which allowed\nmultiple different values for the initial key. Probably execution for\nsomething like this could be handled by having 1 IndexScanDesc per\ninitial key. A scan would have to scan all of those for the first\ntuple and return the lowest order key. Subsequent scans would fetch\nthe next tuple for the IndexScanDesc used previously then again,\nreturn the lowest order tuple. There's some binary heap code and\nexamples in nodeMergeAppend.c about how that could be done fairly\nefficiently.\n\nThe hard part about that would be knowing when to draw the line. If\nthere was 1000's of initial keys then some other method might be\nbetter. Probably the planner would have to estimate which method was\nbest. 
There are also issues if there are multiple prefix keys as you'd\nneed to scan a cartesian product of the keys, which would likely get\nout of hand quickly with 2 or more initial keys.\n\nThere were discussions and a patch for a planner-level implementation\nof which could likely assist with this in [1]. I'm not sure if this\nparticular case was handled in the patch. I believe it was more\nintended for queries such as: SELECT ... FROM t WHERE a = 1 OR b = 2\nand could transform this into something more along the lines of:\nSELECT .. FROM t WHERE a = 1 UNION SELECT ... FROM t WHERE b = 1, and\nusing the table's ctid to uniquify the rows. You could get away with\nsomething similar but use UNION ALL instead. You don't need UNION\nsince your \"OR\" is on the same column, meaning there can be no\noverlapping rows.\n\nSomething like:\n\n(SELECT * FROM foos WHERE bar_fk = 1 LIMIT 50)\nUNION ALL\n(SELECT * FROM foos WHERE bar_fk = 2 LIMIT 50)\nUNION ALL\n(SELECT * FROM foos WHERE bar_fk = 3 LIMIT 50)\nORDER BY created_at LIMIT 50;\n\n> Note: Another (loosely) related problem is that sorting can't\n> currently take advantage of cases where an index provides a partial\n> (prefix of requested pathkeys) ordering.\n\nThere has been a patch [2] around for about 4 years now that does\nthis. I'm unsure of the current status, other than not yet committed.\n\n[1] https://www.postgresql.org/message-id/flat/7f70bd5a-5d16-e05c-f0b4-2fdfc8873489%40BlueTreble.com\n[2] https://www.postgresql.org/message-id/flat/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sun, 30 Dec 2018 15:50:34 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Using Btree to Provide Sorting on Suffix Keys with LIMIT"
},
{
"msg_contents": "On Sat, Dec 29, 2018 at 9:50 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Sun, 30 Dec 2018 at 13:00, James Coleman <jtc331@gmail.com> wrote:\n> > Note that the index scan (or bitmap scan) nodes return all 1500 rows\n> > matching `bar_fk IN (1,2,3)`. After all rows are returned, that total\n> > set is ordered, and finally the LIMIT is applied. While only 50 rows\n> > were requested, 30x that were fetched from the heap.\n> >\n> > I believe it is possible to use the index\n> > `index_foos_on_bar_fk_and_created_at` to fulfill both the `bar_fk IN\n> > (1,2,3)` qualifier and (at least partially; more on that later) the\n> > ordering `ORDER BY created_at` while fetching fewer than all rows\n> > matching the qualifier.\n> >\n> > Areas of extension: (given index `(a, b, c)`) include `a = 1 and b in\n> > (...) order by c` and `a in (...) and b = 1 order by c` (and further\n> > similar derivations with increasing numbers of equality quals).\n>\n> I don't quite understand this the above paragraph, but I assume this\n> would be possible to do with some new index am routine which allowed\n> multiple different values for the initial key. Probably execution for\n> something like this could be handled by having 1 IndexScanDesc per\n> initial key. A scan would have to scan all of those for the first\n> tuple and return the lowest order key. Subsequent scans would fetch\n> the next tuple for the IndexScanDesc used previously then again,\n> return the lowest order tuple. There's some binary heap code and\n> examples in nodeMergeAppend.c about how that could be done fairly\n> efficiently.\n\nMostly I was pointing out that the simple case (scalar array op qual\non first index column and order by second index column) isn't the only\npotentially interesting one; we'd also want to handle, for example, a\n3 column index with a equality qual on the first, an array op on the\nsecond, and order by the third. 
And then as you note later we could\nalso theoretically do this for multiple array op quals.\n\nThanks for the pointer to nodeMergeAppend.c; I'll look at that to see\nif it sparks any ideas.\n\nI'm intrigued by the idea of having multiple IndexScanDesc in the\nnode. My current route had been to include an array of BTScanPos in\nBTScanOpaqueData and work within the same scan. Do you think that a\nnew index am targeting multiple initial values would be a better route\nthan improving the existing native array handling in\nnbtree.c/nbtutil.c? It seems to me that fitting it into the existing\ncode gives us the greater potential usefulness but with more effort to\nmaintain the existing efficiencies there.\n\nIt sounds like multiple pinned pages (if you suggest multiple\nIndexScanDesc) for the same index in the same node is acceptable? We\nshould still not be locking multiple pages at once, so I don't think\nthere's risk of deadlock, but I wasn't sure if there were specific\nexpectations about the number of pinned pages in a single relation at\na given time.\n\n> The hard part about that would be knowing when to draw the line. If\n> there was 1000's of initial keys then some other method might be\n> better. Probably the planner would have to estimate which method was\n> best. There are also issues if there are multiple prefix keys as you'd\n> need to scan a cartesian product of the keys, which would likely get\n> out of hand quickly with 2 or more initial keys.\n\nAgreed. 
I expect the costing here to both report a higher startup cost\n(since it has to look at one index tuple per array element up front)\nand higher per-tuple cost (since we might have to re-search), but if\nthere are a very large (e.g., millions) number of rows and a small\nLIMIT then it's hard to imagine this not being the better option even\nup to a large number of keys (though at some point memory becomes a\nconcern also).\n\n> There were discussions and a patch for a planner-level implementation\n> of which could likely assist with this in [1]. I'm not sure if this\n> particular case was handled in the patch. I believe it was more\n> intended for queries such as: SELECT ... FROM t WHERE a = 1 OR b = 2\n> and could transform this into something more along the lines of:\n> SELECT .. FROM t WHERE a = 1 UNION SELECT ... FROM t WHERE b = 1, and\n> using the table's ctid to uniquify the rows. You could get away with\n> something similar but use UNION ALL instead. You don't need UNION\n> since your \"OR\" is on the same column, meaning there can be no\n> overlapping rows.\n>\n> Something like:\n>\n> (SELECT * FROM foos WHERE bar_fk = 1 LIMIT 50)\n> UNION ALL\n> (SELECT * FROM foos WHERE bar_fk = 2 LIMIT 50)\n> UNION ALL\n> (SELECT * FROM foos WHERE bar_fk = 3 LIMIT 50)\n> ORDER BY created_at LIMIT 50;\n\nThis sounds effectively like a way to do my first proposal. In theory\nI think both are valuable and potentially complementary, so I'll read\nup on that one also.\n\n> > Note: Another (loosely) related problem is that sorting can't\n> > currently take advantage of cases where an index provides a partial\n> > (prefix of requested pathkeys) ordering.\n>\n> There has been a patch [2] around for about 4 years now that does\n> this. 
I'm unsure of the current status, other than not yet committed.\n\nDoh, I should have linked to that; I've been following the incremental\nsort patch for a while (and submitted a test-case review) since it\nsolves some significant problems for us.\n\n- James Coleman\n\n",
"msg_date": "Sun, 30 Dec 2018 09:29:43 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using Btree to Provide Sorting on Suffix Keys with LIMIT"
},
{
"msg_contents": "On Sat, Dec 29, 2018 at 6:50 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> > Areas of extension: (given index `(a, b, c)`) include `a = 1 and b in\n> > (...) order by c` and `a in (...) and b = 1 order by c` (and further\n> > similar derivations with increasing numbers of equality quals).\n>\n> I don't quite understand this the above paragraph, but I assume this\n> would be possible to do with some new index am routine which allowed\n> multiple different values for the initial key.\n\nI'm confused about James' use of the term \"new index am\". I guess he\njust meant new support function or something?\n\n> > Note: Another (loosely) related problem is that sorting can't\n> > currently take advantage of cases where an index provides a partial\n> > (prefix of requested pathkeys) ordering.\n>\n> There has been a patch [2] around for about 4 years now that does\n> this. I'm unsure of the current status, other than not yet committed.\n>\n> [1] https://www.postgresql.org/message-id/flat/7f70bd5a-5d16-e05c-f0b4-2fdfc8873489%40BlueTreble.com\n> [2] https://www.postgresql.org/message-id/flat/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com\n\nI can see why you'd mention these two, but I also expected you to\nmention the skip scan project, since that involves pushing down\nknowledge about how the index is to be accessed to the index am (at\nleast, I assume that it does), and skipping leading attributes to use\nthe sort order from a suffix attribute. Actually, the partial sort\nidea that you linked to is more or less a dual of skip scan, at least\nto my mind (the former *extends* the sort order by adding a suffix\ntie-breaker, while the latter *skips* a leading attribute to get to an\ninteresting suffix attribute).\n\nThe way James constructed his example suggested that there'd be some\nkind of natural locality, that we'd expect to be able to take\nadvantage of at execution time when the new strategy is favorable. 
I'm\nnot sure if that was intended -- James? I think it might help James to\nconstruct a more obviously realistic/practical motivating example. I'm\nperfectly willing to believe that this idea would help his real world\nqueries, and having an example that can easily be played with is\nhelpful in other ways. But I'd like to know why this idea is important\nin detail, since I think that it would help me to place it in the\nwider landscape of ideas that are like this. Placing it in that wider\nlandscape, and figuring out next steps at a high level seem to be the\nproblem right now.\n\n\n--\nPeter Geoghegan\n\n",
"msg_date": "Thu, 10 Jan 2019 16:52:31 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Using Btree to Provide Sorting on Suffix Keys with LIMIT"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 6:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sat, Dec 29, 2018 at 6:50 PM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> > > Areas of extension: (given index `(a, b, c)`) include `a = 1 and b in\n> > > (...) order by c` and `a in (...) and b = 1 order by c` (and further\n> > > similar derivations with increasing numbers of equality quals).\n> >\n> > I don't quite understand this the above paragraph, but I assume this\n> > would be possible to do with some new index am routine which allowed\n> > multiple different values for the initial key.\n>\n> I'm confused about James' use of the term \"new index am\". I guess he\n> just meant new support function or something?\n\nThanks for responding.\n\nI was wondering if a new index access method would be the preferable\nway to do this such as how the skip scan patch does (I was taking some\nideas from that one).\n\n> > > Note: Another (loosely) related problem is that sorting can't\n> > > currently take advantage of cases where an index provides a partial\n> > > (prefix of requested pathkeys) ordering.\n> >\n> > There has been a patch [2] around for about 4 years now that does\n> > this. I'm unsure of the current status, other than not yet committed.\n> >\n> > [1] https://www.postgresql.org/message-id/flat/7f70bd5a-5d16-e05c-f0b4-2fdfc8873489%40BlueTreble.com\n> > [2] https://www.postgresql.org/message-id/flat/CAPpHfds1waRZ=NOmueYq0sx1ZSCnt+5QJvizT8ndT2=etZEeAQ@mail.gmail.com\n>\n> I can see why you'd mention these two, but I also expected you to\n> mention the skip scan project, since that involves pushing down\n> knowledge about how the index is to be accessed to the index am (at\n> least, I assume that it does), and skipping leading attributes to use\n> the sort order from a suffix attribute. 
Actually, the partial sort\n> idea that you linked to is more or less a dual of skip scan, at least\n> to my mind (the former *extends* the sort order by adding a suffix\n> tie-breaker, while the latter *skips* a leading attribute to get to an\n> interesting suffix attribute).\n\nYes, I'd been looking at the skip scan patch but didn't mention it. A\nlot of my initial email was from my initial brainstorming notes on the\ntopic, and I should have cleaned it up a bit better before sending it.\n\n> The way James constructed his example suggested that there'd be some\n> kind of natural locality, that we'd expect to be able to take\n> advantage of at execution time when the new strategy is favorable. I'm\n> not sure if that was intended -- James? ...\n\nI'm not sure what you mean by \"natural locality\"; I believe that the\nartificial data I've constructed is actually somewhat close to worst\ncase for what I'm proposing. Evenly distributed (this is random, but\nin this case I think that's close enough) data will realize the\nsmallest possible gains (and be the most likely to represent a\nregression with few enough rows in each group) because it is the case\nwhere we'd have the most overhead of rotating among scan keys.\n\n> ... I think it might help James to\n> construct a more obviously realistic/practical motivating example. I'm\n> perfectly willing to believe that this idea would help his real world\n> queries, and having an example that can easily be played with is\n> helpful in other ways. But I'd like to know why this idea is important\n> is in detail, since I think that it would help me to place it in the\n> wider landscape of ideas that are like this. 
Placing it in that wider\n> landscape, and figuring out next steps at a high level seem to be the\n> problem right now.\n\nI'll attempt to describe a more real world scenario: suppose we have a\nschema like:\n\nusers(id serial primary key)\norders(id serial primary key, user_id integer, created_at timestamp)\n\nAnd wanted to find the most recent N orders for a specific group of\nusers (e.g., in a report or search). Your query might look like:\n\nSELECT *\nFROM orders\nWHERE orders.user_id IN (1, 2, 3)\nORDER BY orders.created_at DESC\nLIMIT 25\n\nCurrently an index on orders(user_id, created_at) will be used for\nthis query, but only to satisfy the scalar array op qual. Then all\nmatching orders (say, years worth) will be fetched, a sort node will\nsort all of those results, and then a limit node will take the top N.\n\nGeneralized the problem is something like \"find the top N rows across\na group of foreign keys\" (though saying foreign keys probably is too\nspecific).\n\nBut under the scheme I'm proposing that same index would be able to\nprovide both the filter and guarantee ordering as well.\n\nDoes that more real-world-ish example help place the usefulness of this?\n\nI think this goes beyond increasing the usefulness of indexes by\nrequiring less specific indexes (incremental sort does this), but\nrather allows the index to support a kind of query you can't currently\n(as far as I'm aware) express in a performant way at all (other than\na complex recursive cte or in some subset of cases a bunch of union\nstatements -- one per array entry).\n\nJames Coleman\n\n",
"msg_date": "Fri, 18 Jan 2019 14:15:23 -0600",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using Btree to Provide Sorting on Suffix Keys with LIMIT"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 12:15 PM James Coleman <jtc331@gmail.com> wrote:\n> I'll attempt to describe a more real world scenario: suppose we have a\n> schema like:\n>\n> users(id serial primary key)\n> orders(id serial primary key, user_id integer, created_at timestamp)\n>\n> And wanted to find the most recent N orders for a specific group of\n> users (e.g., in a report or search). Your query might look like:\n>\n> SELECT *\n> FROM orders\n> WHERE orders.user_id IN (1, 2, 3)\n> ORDER BY orders.created_at DESC\n> LIMIT 25\n>\n> Currently an index on orders(user_id, created_at) will be used for\n> this query, but only to satisfy the scalar array op qual. Then all\n> matching orders (say, years worth) will be fetched, a sort node will\n> sort all of those results, and then a limit node will take the top N.\n>\n> Generalized the problem is something like \"find the top N rows across\n> a group of foreign keys\" (though saying foreign keys probably is too\n> specific).\n>\n> But under the scheme I'm proposing that same index would be able to\n> provide both the filter and guarantee ordering as well.\n>\n> Does that more real-world-ish example help place the usefulness of this?\n\nYes. It didn't make much sense back in 2019, but I understand what you\nmeant now, I think. The latest version of my ScalarArrayOpExpr patch\n(v2) can execute queries like this efficiently:\n\nhttps://postgr.es/m/CAH2-WzkEyBU9UQM-5GWPcB=WEShAUKcJdvgFuqVHuPuO-iYW0Q@mail.gmail.com\n\nNote that your example is similar to the test case from the latest\nupdate on the thread. 
The test case from Benoit Tigeot, that appears\nhere:\n\nhttps://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d?permalink_comment_id=4690491#gistcomment-4690491\n\nYou seemed to want to use an index that started with user_id/bar_fk.\nBut I think you'd have to have an index on \"created_at DESC, user_id\".\nIt could work the other way around with your suggested index, for a\nquery written to match -- \"ORDER BY user_id, created_at DESC\".\n\nWith an index on \"created_at DESC, user_id\", you'd be able to\nefficiently execute your limit query. The index scan could only\nterminate when it found (say) 25 matching tuples, so you might still\nhave to scan quite a few index pages. But, you wouldn't have to do\nheap access to eliminate non-matches (with or without the VM being\nset) -- you could eliminate all of those non-matches using true SAOP\nindex quals, that don't need to operate on known visible rows.\n\nThis is possible with the patch, despite the fact that the user_id\ncolumn is a low-order column (so this isn't one of the cases where\nit's useful to \"skip\"). Avoiding heap hits just to eliminate\nnon-matching rows on user_id is what really matters here, though --\nnot skipping. It would be helpful if you could confirm this\nunderstanding, though.\n\nThanks\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 19 Sep 2023 22:04:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Using Btree to Provide Sorting on Suffix Keys with LIMIT"
}
] |
[
{
"msg_contents": "Hi all\n\nAs mentioned here, there has been a discussion about $subject and the\nfact that it may be rather useless:\nhttps://www.postgresql.org/message-id/21150.1546010167@sss.pgh.pa.us\n\n--disable-strong-random is also untested in the buildfarm.\n\nAttached is a patch to clean up the code, which removes all the code\nspecific to random generation for backends (no more shmem code paths\nand such), as well as the pg_frontend_random() and\npg_backend_random(). Thoughts or opinions?\n\nThanks,\n--\nMichael",
"msg_date": "Sun, 30 Dec 2018 15:32:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Removing --disable-strong-random from the code"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Attached is a patch to clean up the code, which removes all the code\n> specific to random generation for backends (no more shmem code paths\n> and such), as well as the pg_frontend_random() and\n> pg_backend_random(). Thoughts or opinions?\n\nHah, I was just about to work on that myself --- glad I didn't get\nto it quite yet. A couple of thoughts:\n\n1. Surely there's documentation about --disable-strong-random\nto clean up too?\n\n2. I wonder whether it's worth adding this to port.h:\n\n extern bool pg_strong_random(void *buf, size_t len);\n+/* pg_backend_random used to be a wrapper for pg_strong_random */\n+#define pg_backend_random pg_strong_random\n\nto prevent unnecessary breakage in extensions that might be depending\non pg_backend_random.\n\n3. Didn't look, but the MSVC build code might need a tweak too\nnow that pg_strong_random.o is built-always rather than conditional?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 01:45:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 01:45:42AM -0500, Tom Lane wrote:\n> Hah, I was just about to work on that myself --- glad I didn't get\n> to it quite yet. A couple of thoughts:\n> \n> 1. Surely there's documentation about --disable-strong-random\n> to clean up too?\n\nOops, I forgot to grep on this one. Removed from my tree.\n\n> 2. I wonder whether it's worth adding this to port.h:\n> \n> extern bool pg_strong_random(void *buf, size_t len);\n> +/* pg_backend_random used to be a wrapper for pg_strong_random */\n> +#define pg_backend_random pg_strong_random\n> \n> to prevent unnecessary breakage in extensions that might be depending\n> on pg_backend_random.\n\nSure, that makes sense. Added.\n\n> 3. Didn't look, but the MSVC build code might need a tweak too\n> now that pg_strong_random.o is built-always rather than conditional?\n\nThere is nothing needed here as pg_strong_random.c has always been\nincluded into @pgportfiles as we assumed that Windows would always\nhave a random source.\n--\nMichael",
"msg_date": "Sun, 30 Dec 2018 16:15:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 04:15:49PM +0900, Michael Paquier wrote:\n> On Sun, Dec 30, 2018 at 01:45:42AM -0500, Tom Lane wrote:\n>> Hah, I was just about to work on that myself --- glad I didn't get\n>> to it quite yet. A couple of thoughts:\n>> \n>> 1. Surely there's documentation about --disable-strong-random\n>> to clean up too?\n> \n> Oops, I forgot to grep on this one. Removed from my tree.\n> \n>> 2. I wonder whether it's worth adding this to port.h:\n>> \n>> extern bool pg_strong_random(void *buf, size_t len);\n>> +/* pg_backend_random used to be a wrapper for pg_strong_random */\n>> +#define pg_backend_random pg_strong_random\n>> \n>> to prevent unnecessary breakage in extensions that might be depending\n>> on pg_backend_random.\n> \n> Sure, that makes sense. Added.\n> \n>> 3. Didn't look, but the MSVC build code might need a tweak too\n>> now that pg_strong_random.o is built-always rather than conditional?\n> \n> There is nothing needed here as pg_strong_random.c has always been\n> included into @pgportfiles as we assumed that Windows would always\n> have a random source.\n\nAnd attached is an updated patch with all those fixes included. Any\nthoughts or opinions?\n--\nMichael",
"msg_date": "Sun, 30 Dec 2018 23:37:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> And attached is an updated patch with all those fixes included. Any\n> thoughts or opinions?\n\ncontrib/pgcrypto has some variant expected-files for the no-strong-random\ncase that could be removed now.\n\nBackendRandomLock should be removed, too.\n\nSince pg_strong_random is declared to take \"void *\", the places that\ncast arguments to \"char *\" could be simplified. (I guess that's a\nhangover from the rather random decision to make pg_backend_random\ntake char *?)\n\nThe wording for pgcrypto's PXE_NO_RANDOM error,\n\n {PXE_NO_RANDOM, \"No strong random source\"},\n\nperhaps needs to be changed --- maybe \"Failed to generate strong random bits\"?\n\nNot the fault of this patch, but surely this bit in pgcrypto's\npad_eme_pkcs1_v15()\n\n if (!pg_strong_random((char *) p, 1))\n {\n px_memset(buf, 0, res_len);\n px_free(buf);\n break;\n }\n\nis insane, because the \"break\" makes it fall into code that will continue\nto scribble on \"buf\". I think the \"break\" needs to be \"return\nPXE_NO_RANDOM\", and probably we'd better back-patch that as a bug fix.\n(I'm also failing to see the point of that px_memset before freeing the\nbuffer --- at this point, it contains no sensitive data, surely.)\n\nLGTM otherwise.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 11:47:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "I wrote:\n> LGTM otherwise.\n\nOh, one more thought: the removal of the --disable-strong-random\ndocumentation stanza means there's no explanation of what to do\nto build on platforms without /dev/urandom. Perhaps something\nlike this in installation.sgml:\n\n <para>\n- You need <productname>OpenSSL</productname>, if you want to support\n- encrypted client connections. The minimum required version is\n- 0.9.8.\n+ You need <productname>OpenSSL</productname> if you want to support\n+ encrypted client connections. <productname>OpenSSL</productname>\n+ is also required for random number generation on platforms that\n+ do not have <filename>/dev/urandom</filename> (except Windows).\n+ The minimum required version is 0.9.8.\n </para>\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 11:56:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 11:56:48AM -0500, Tom Lane wrote:\n> Oh, one more thought: the removal of the --disable-strong-random\n> documentation stanza means there's no explanation of what to do\n> to build on platforms without /dev/urandom. Perhaps something\n> like this in installation.sgml:\n> \n> <para>\n> - You need <productname>OpenSSL</productname>, if you want to support\n> - encrypted client connections. The minimum required version is\n> - 0.9.8.\n> + You need <productname>OpenSSL</productname> if you want to support\n> + encrypted client connections. <productname>OpenSSL</productname>\n> + is also required for random number generation on platforms that\n> + do not have <filename>/dev/urandom</filename> (except Windows).\n> + The minimum required version is 0.9.8.\n> </para>\n\nOkay, I have included something among those lines.\n--\nMichael",
"msg_date": "Mon, 31 Dec 2018 10:00:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Sun, Dec 30, 2018 at 11:47:03AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > And attached is an updated patch with all those fixes included. Any\n> > thoughts or opinions?\n> \n> contrib/pgcrypto has some variant expected-files for the no-strong-random\n> case that could be removed now.\n> \n> BackendRandomLock should be removed, too.\n\nDone and done.\n\n> Since pg_strong_random is declared to take \"void *\", the places that\n> cast arguments to \"char *\" could be simplified. (I guess that's a\n> hangover from the rather random decision to make pg_backend_random\n> take char *?)\n\nDone.\n\n> The wording for pgcrypto's PXE_NO_RANDOM error,\n> \n> {PXE_NO_RANDOM, \"No strong random source\"},\n> \n> perhaps needs to be changed --- maybe \"Failed to generate strong\n> random bits\"?\n\nOkay, changed this way. I looked previously at that description but\nlet it as-is. \n\n> Not the fault of this patch, but surely this bit in pgcrypto's\n> pad_eme_pkcs1_v15()\n> \n> if (!pg_strong_random((char *) p, 1))\n> {\n> px_memset(buf, 0, res_len);\n> px_free(buf);\n> break;\n> }\n> \n> is insane, because the \"break\" makes it fall into code that will continue\n> to scribble on \"buf\". I think the \"break\" needs to be \"return\n> PXE_NO_RANDOM\", and probably we'd better back-patch that as a bug fix.\n> (I'm also failing to see the point of that px_memset before freeing the\n> buffer --- at this point, it contains no sensitive data, surely.)\n\nGood catch. As far as I understand this code, the message is not\nincluded yet and random bytes are just added to avoid having 0 in the\npadding. So I agree that the memset is not really meaningful to\nhave on the whole buffer. I can take care of that as well, and of\ncourse you get the credits. If you want to commit and back-patch the\nfix yourself, please feel free to do so.\n\nI am attaching an updated patch. 
I'll do an extra pass on it in the\nnext couple of days and commit if there is nothing. The diff stats\nare nice:\n32 files changed, 60 insertions(+), 1181 deletions(-)\n\nThanks a lot for the reviews!\n--\nMichael",
"msg_date": "Mon, 31 Dec 2018 10:20:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 10:20:28AM +0900, Michael Paquier wrote:\n> On Sun, Dec 30, 2018 at 11:47:03AM -0500, Tom Lane wrote:\n>> Not the fault of this patch, but surely this bit in pgcrypto's\n>> pad_eme_pkcs1_v15()\n>> \n>> if (!pg_strong_random((char *) p, 1))\n>> {\n>> px_memset(buf, 0, res_len);\n>> px_free(buf);\n>> break;\n>> }\n>> \n>> is insane, because the \"break\" makes it fall into code that will continue\n>> to scribble on \"buf\". I think the \"break\" needs to be \"return\n>> PXE_NO_RANDOM\", and probably we'd better back-patch that as a bug fix.\n>> (I'm also failing to see the point of that px_memset before freeing the\n>> buffer --- at this point, it contains no sensitive data, surely.)\n> \n> Good catch. As far as I understand this code, the message is not\n> included yet and random bytes are just added to avoid having 0 in the\n> padding. So I agree that the memset is not really meaningful to\n> have on the whole buffer. I can take care of that as well, and of\n> course you get the credits. If you want to commit and back-patch the\n> fix yourself, please feel free to do so.\n\nI have fixed this one and back-patched down to 10. In what has been\ncommitted I have kept the memset which is a logic present since\ne94dd6a back from 2005. On my second lookup, the logic is correct\nwithout it, still it felt safer to keep it.\n--\nMichael",
"msg_date": "Tue, 1 Jan 2019 10:55:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 10:20:28AM +0900, Michael Paquier wrote:\n> I am attaching an updated patch. I'll do an extra pass on it in the\n> next couple of days and commit if there is nothing. The diff stats\n> are nice:\n> 32 files changed, 60 insertions(+), 1181 deletions(-)\n\nAnd committed.\n--\nMichael",
"msg_date": "Tue, 1 Jan 2019 20:41:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing --disable-strong-random from the code"
}
] |
[
{
"msg_contents": "I have developed a patch that unifies the various ad hoc logging\n(message printing, error printing) systems used throughout the\ncommand-line programs.\n\nExamples:\n\n\n- fprintf(stderr, _(\"%s: could not open file \\\"%s\\\" for writing: %s\\n\"),\n- progname, path, strerror(errno));\n+ pg_log_error(\"could not open file \\\"%s\\\" for writing: %m\", path);\n\n\n- if (debug)\n- fprintf(stderr,\n- _(\"%s: file \\\"%s\\\" would be removed\\n\"),\n- progname, WALFilePath);\n+ pg_log_debug(\"file \\\"%s\\\" would be removed\", WALFilePath);\n\n\nFeatures:\n\n- Program name is automatically prefixed.\n\n- Message string does not end with newline. This removes a common\nsource of inconsistencies and omissions.\n\n- Additionally, a final newline is automatically stripped, simplifying\nuse of PQerrorMessage() etc., another common source of mistakes.\n\n- I converted error message strings to use %m where possible. (I had\noriginally intended to implement %m here like elog used to do, but that\nwas thankfully already done elsewhere.)\n\n- As a result of the above several points, more translatable message\nstrings can be shared between different components and between frontends\nand backend, without gratuitous punctuation or whitespace differences.\n\n- There is support for setting a \"log level\". This is not meant to be\nuser-facing, but can be used internally to implement debug or verbose\nmodes, as in the above example.\n\n- Lazy argument evaluation, so no significant overhead if logging at\nsome level is disabled.\n\n- Bonus: Some color in the messages, similar to gcc and clang. Export\nPG_COLOR=auto to try it out. The colors are currently hardcoded, so\nsome configuration there might be added.\n\n- Common files (common/, fe_utils/, etc.) 
can handle logging much more\nsimply by just using one API without worrying too much about the context\nof the calling program, requiring callbacks, or having to pass\n\"progname\" around everywhere.\n\n\nSoft goals:\n\n- Reduces vertical space use and visual complexity of error reporting in\nthe source code.\n\n- Encourages more deliberate classification of messages. For example,\nin some cases it wasn't clear without analyzing the surrounding code\nwhether a message was meant as an error or just an info.\n\n- Concepts and terms are vaguely aligned with popular logging frameworks\nsuch as log4j and Python logging.\n\n- Future possibilities. Maybe something like log_line_prefix or\ndifferent log formats could be added. Just a theory right now, but this\nwould make it easier.\n\n\nNon-goals/out of scope:\n\n- Flow control. This is all just about printing stuff out. Nothing\naffects program flow (e.g., fatal exits). The uses are just too varied\nto do that. Some existing code had wrappers that do some kind of\nprint-and-exit, and I adapted those. It didn't seem worth going any\nfurther.\n\n\nIt's not fully complete but most of it works well. I didn't do\npg_upgrade and pg_ctl yet. pg_dump has some remaining special cases to\nwork through. I tried to keep the output mostly the same, but there is\na lot of historical baggage to unwind and special cases to consider, and\nI might not always have succeeded. One significant change is that\npg_rewind used to write all error messages to stdout. That is now\nchanged to stderr.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 30 Dec 2018 17:07:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Unified logging system for command-line programs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I have developed a patch that unifies the various ad hoc logging\n> (message printing, error printing) systems used throughout the\n> command-line programs.\n\nI've not read the patch in any detail, but +1 for making this more\nuniform.\n\n> - Common files (common/, fe_utils/, etc.) can handle logging much more\n> simply by just using one API without worrying too much about the context\n> of the calling program, requiring callbacks, or having to pass\n> \"progname\" around everywhere.\n\nIt seems like a shame that src/common files still need to have\n#ifdef FRONTEND variant code to deal with frontend vs. backend\nconventions. I wonder how hard it would be to layer some subset of\nereport() functionality on top of what you have here, so as to get\nrid of those #ifdef stanzas.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 14:45:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2018-Dec-30, Tom Lane wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I have developed a patch that unifies the various ad hoc logging\n> > (message printing, error printing) systems used throughout the\n> > command-line programs.\n> \n> I've not read the patch in any detail, but +1 for making this more\n> uniform.\n\nAgreed, and the compactness is a good bonus too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 31 Dec 2018 12:21:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Hi,\n\nOn 2018-12-30 14:45:23 -0500, Tom Lane wrote:\n> I wonder how hard it would be to layer some subset of\n> ereport() functionality on top of what you have here, so as to get\n> rid of those #ifdef stanzas.\n\n+many. I think we should aim to unify the use (in contrast to the\nimplementation) of logging as much as possible, rather than having a\nseparate API for it for client programs. Not just because that facilitates\ncode reuse in frontend programs, but also because that's one less thing to\nlearn when getting started with PG.\n\nFurther down the line I think we should also port the PG_CATCH logic to\nclient programs.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 31 Dec 2018 07:55:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Ah, one more thing -- there's a patch by Marina Polyakova (in CC) to\nmake pgbench logging more regular. Maybe that stuff should be\nconsidered now too. I'm not saying to patch pgbench in this commit, but\nrather to have pgbench in mind while discussing the API. I think the\nlast version of that was here:\n\nhttps://postgr.es/m/a1bd32671a6777b78dd67b95eb68ff82@postgrespro.ru\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 31 Dec 2018 16:36:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 30/12/2018 20:45, Tom Lane wrote:\n> It seems like a shame that src/common files still need to have\n> #ifdef FRONTEND variant code to deal with frontend vs. backend\n> conventions. I wonder how hard it would be to layer some subset of\n> ereport() functionality on top of what you have here, so as to get\n> rid of those #ifdef stanzas.\n\nThe patch does address that in some places:\n\n@@ -39,12 +45,7 @@ pgfnames(const char *path)\n dir = opendir(path);\n if (dir == NULL)\n {\n-#ifndef FRONTEND\n- elog(WARNING, \"could not open directory \\\"%s\\\": %m\", path);\n-#else\n- fprintf(stderr, _(\"could not open directory \\\"%s\\\": %s\\n\"),\n- path, strerror(errno));\n-#endif\n+ pg_log_warning(\"could not open directory \\\"%s\\\": %m\", path);\n return NULL;\n }\n\nIt's worth noting that less than 5 files are of concern for this, so\ncreating a more elaborate system would probably be more code than you'd\nsave at the other end.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 14:15:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 31/12/2018 16:55, Andres Freund wrote:\n> I think we should aim to unify the use (in contrast to the\n> implementation) of logging as much as possible, rather than having a\n> separate API for it for client programs.\n\nI opted against doing that, for mainly two reasons: One, I think the\nereport() API is too verbose for this purpose, an invocation is usually\ntwo to three lines. My goal was to make logging smaller and more\ncompact. Two, I think tying error reporting to flow control does not\nalways work well and leads to bad code and a bad user experience.\nRelatedly, rewriting all the frontend programs to exception style would\nend up being a 10x project to rewrite everything for no particular\nbenefit. Going from 8 or so APIs to 2 is already an improvement, I\nthink. If someone wants to try going further, it can be considered, but\nit would be an entirely different project.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 14:28:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-03 14:28:51 +0100, Peter Eisentraut wrote:\n> On 31/12/2018 16:55, Andres Freund wrote:\n> > I think we should aim to unify the use (in contrast to the\n> > implementation) of logging as much as possible, rather than having a\n> > separate API for it for client programs.\n> \n> I opted against doing that, for mainly two reasons: One, I think the\n> ereport() API is too verbose for this purpose, an invocation is usually\n> two to three lines.\n\nWell, then elog() could be used.\n\n\n> My goal was to make logging smaller and more\n> compact. Two, I think tying error reporting to flow control does not\n> always work well and leads to bad code and a bad user experience.\n\nNot sure I can buy that, given that we seem to be doing quite OK in the backend.\n\n\n> Relatedly, rewriting all the frontend programs to exception style would\n> end up being a 10x project to rewrite everything for no particular\n> benefit. Going from 8 or so APIs to 2 is already an improvement, I\n> think. If someone wants to try going further, it can be considered, but\n> it would be an entirely different project.\n\nWhy would it be 10x the effort, if you already touch all the relevant\nlog invocations? This'll just mean that the same lines will\nmechanically need to be changed again.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 3 Jan 2019 10:03:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 03/01/2019 19:03, Andres Freund wrote:\n>> My goal was to make logging smaller and more\n>> compact. Two, I think tying error reporting to flow control does not\n>> always work well and leads to bad code and a bad user experience.\n> \n> Not sure I can buy that, given that we seem to be doing quite OK in the backend.\n\nConsider the numerous places where we do elog(LOG) for an *error*\nbecause we don't want to jump away.\n\n>> Relatedly, rewriting all the frontend programs to exception style would\n>> end up being a 10x project to rewrite everything for no particular\n>> benefit. Going from 8 or so APIs to 2 is already an improvement, I\n>> think. If someone wants to try going further, it can be considered, but\n>> it would be an entirely different project.\n> \n> Why would it be 10x the effort,\n\nBecause you would have to rewrite all the programs to handle elog(ERROR)\njumping somewhere else.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 19:54:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 03/01/2019 19:03, Andres Freund wrote:\n>>> Relatedly, rewriting all the frontend programs to exception style would\n>>> end up being a 10x project to rewrite everything for no particular\n>>> benefit. Going from 8 or so APIs to 2 is already an improvement, I\n>>> think. If someone wants to try going further, it can be considered, but\n>>> it would be an entirely different project.\n\n>> Why would it be 10x the effort,\n\n> Because you would have to rewrite all the programs to handle elog(ERROR)\n> jumping somewhere else.\n\nFWIW, this argument has nothing to do with what I was actually\nproposing. I envisioned that we'd have a wrapper in which\nnon-error ereports() map directly onto what you're calling\npg_log_debug, pg_log_warning, etc, while ereport(ERROR) has the\neffect of writing a message and then calling exit(1). We would\nuse ereport(ERROR) in exactly the places where we're now writing\na message and calling exit(1). No change at all in program\nflow control, but an opportunity to consolidate code in places\nthat are currently doing this sort of thing:\n\n#ifndef FRONTEND\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not open file \\\"%s\\\" for reading: %m\",\n ControlFilePath)));\n#else\n {\n fprintf(stderr, _(\"%s: could not open file \\\"%s\\\" for reading: %s\\n\"),\n progname, ControlFilePath, strerror(errno));\n exit(EXIT_FAILURE);\n }\n#endif\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 03 Jan 2019 16:01:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 03/01/2019 22:01, Tom Lane wrote:\n> I envisioned that we'd have a wrapper in which\n> non-error ereports() map directly onto what you're calling\n> pg_log_debug, pg_log_warning, etc,\n\nMy code does that, but the other way around. (It's easier that way than\nto unpack ereport() invocations.)\n\n> while ereport(ERROR) has the\n> effect of writing a message and then calling exit(1).\n\nThe problem is that in majority of cases the FRONTEND code, as it is\nwritten today, doesn't want to exit() after an error.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 22:38:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 03/01/2019 22:01, Tom Lane wrote:\n>> while ereport(ERROR) has the\n>> effect of writing a message and then calling exit(1).\n\n> The problem is that in majority of cases the FRONTEND code, as it is\n> written today, doesn't want to exit() after an error.\n\nRight, so for that you'd use ereport(WARNING) or LOG or whatever.\n\nWe'd probably need a bit of care about which ereport levels produce\nexactly what output, but I don't think that's insurmountable. We\ndo not need all the backend-side message levels to exist for frontend.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 03 Jan 2019 17:03:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-03 17:03:43 -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 03/01/2019 22:01, Tom Lane wrote:\n> >> while ereport(ERROR) has the\n> >> effect of writing a message and then calling exit(1).\n> \n> > The problem is that in majority of cases the FRONTEND code, as it is\n> > written today, doesn't want to exit() after an error.\n> \n> Right, so for that you'd use ereport(WARNING) or LOG or whatever.\n\nOr we could just add an ERROR variant that doesn't exit. Years back\nI'd proposed that we make the log level a bitmask, but it could also\njust be something like CALLSITE_ERROR or something roughly along those\nlines. There's a few cases in backend code where that'd be beneficial\ntoo.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 3 Jan 2019 14:08:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "I think this patch is a nice improvement!\n\nOn Jan 3, 2019, at 2:08 PM, Andres Freund <andres@anarazel.de> wrote:\n> Or we could just add an ERROR variant that doesn't exit. Years back\n> I'd proposed that we make the log level a bitmask, but it could also\n> just be something like CALLSITE_ERROR or something roughly along those\n> lines. There's a few cases in backend code where that'd be beneficial\n> too.\n\nI think the logging system can also be applied on pg_regress. Perhaps even\nfor the external frontend applications?\n\nThe patch cannot be applied directly on HEAD. So I patched it on top of \n60d99797bf. When I call pg_log_error() in initdb, I see\n\nProgram received signal SIGSEGV, Segmentation fault.\n__strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n62 ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.\n(gdb) bt\n#0 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n#1 0x0000555555568f96 in dopr.constprop ()\n#2 0x0000555555569ddb in pg_vsnprintf ()\n#3 0x0000555555564236 in pg_log_generic ()\n#4 0x000055555555c240 in main ()\n\nI'm not sure what would be causing this behavior. I would appreciate\nreferences or docs for testing and debugging patches more efficiently.\nNow I'm having difficulties loading symbols of initdb in gdb.\n\nThank you,\nDonald Dong\n\n",
"msg_date": "Wed, 9 Jan 2019 20:57:59 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 10/01/2019 05:57, Donald Dong wrote:\n> I think the logging system can also be applied on pg_regress. Perhaps even\n> for the external frontend applications?\n\nCould be done, yes. A bit at a time. ;-)\n\n> The patch cannot be applied directly on HEAD. So I patched it on top of \n> 60d99797bf.\n\nHere is an updated patch with the merge conflicts of my own design\nresolved. No functionality changes.\n\n> When I call pg_log_error() in initdb, I see\n> \n> Program received signal SIGSEGV, Segmentation fault.\n> __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n> 62 ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.\n> (gdb) bt\n> #0 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n> #1 0x0000555555568f96 in dopr.constprop ()\n> #2 0x0000555555569ddb in pg_vsnprintf ()\n> #3 0x0000555555564236 in pg_log_generic ()\n> #4 0x000055555555c240 in main ()\n\nWhat do you mean exactly by \"I call pg_log_error()\"? The existing calls\nin initdb clearly work, at least some of them, that is covered by the\ntest suite. Are you adding new calls?\n\n> I'm not sure what would be causing this behavior. I would appreciate\n> references or docs for testing and debugging patches more efficiently.\n> Now I'm having difficulties loading symbols of initdb in gdb.\n\nThe above looks like you'd probably get a better insight by compiling\nwith -O0 or some other lower optimization setting.\n\nThere is also this:\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 11 Jan 2019 18:14:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "\n> On Jan 11, 2019, at 9:14 AM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n>> The patch cannot be applied directly on HEAD. So I patched it on top of \n>> 60d99797bf.\n> \n> Here is an updated patch with the merge conflicts of my own design\n> resolved. No functionality changes.\n> \n>> When I call pg_log_error() in initdb, I see\n>> \n>> Program received signal SIGSEGV, Segmentation fault.\n>> __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n>> 62 ../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.\n>> (gdb) bt\n>> #0 __strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:62\n>> #1 0x0000555555568f96 in dopr.constprop ()\n>> #2 0x0000555555569ddb in pg_vsnprintf ()\n>> #3 0x0000555555564236 in pg_log_generic ()\n>> #4 0x000055555555c240 in main ()\n> \n> What do you mean exactly by \"I call pg_log_error()\"? The existing calls\n> in initdb clearly work, at least some of them, that is covered by the\n> test suite. Are you adding new calls?\n\nThank you. I did add a new call for my local testing. There are no more errors\nafter re-applying the patch on master.\n\n>> I'm not sure what would be causing this behavior. I would appreciate\n>> references or docs for testing and debugging patches more efficiently.\n>> Now I'm having difficulties loading symbols of initdb in gdb.\n> \n> The above looks like you'd probably get a better insight by compiling\n> with -O0 or some other lower optimization setting.\n> \n> There is also this:\n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\nThank you for the reference. 
That's very helpful!\n\n\nI noticed in some places such as\n\n pg_log_error(\"no data directory specified\");\n fprintf(stderr,\n _(\"You must identify the directory where the data for this database system\\n\"\n ...\n\nand\n\n pg_log_warning(\"enabling \\\"trust\\\" authentication for local connections\");\n fprintf(stderr, _(\"You can change this by editing pg_hba.conf or using the option -A, or\\n\"\n \"--auth-local and --auth-host, the next time you run initdb.\\n\"));\n\n, pg_log does not completely replace fprintf. Would it be better to use pg_log\nso the logging level can also filter these messages?\n\n\n",
"msg_date": "Fri, 11 Jan 2019 16:39:04 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Here is an updated patch. I've finished the functionality to the point\nwhere I'm content with it. I fixed up some of the remaining special\ncases in pg_dump that I hadn't sorted out last time. I also moved the\nscattered setvbuf(stderr, ...) handling (for Windows) into a central\nplace. Colors can now be configured, too.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 22 Feb 2019 09:39:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Hello,\n\nOn 22.02.2019 11:39, Peter Eisentraut wrote:\n> Here is an updated patch. I've finished the functionality to the point\n> where I'm content with it. I fixed up some of the remaining special\n> cases in pg_dump that I hadn't sorted out last time. I also moved the\n> scattered setvbuf(stderr, ...) handling (for Windows) into a central\n> place. Colors can now be configured, too.\nI played with the patch and with coloring of the output. It works neatly. \nOne thing I noticed is that some messages may have a double log level. For \nexample:\n\n$ psql test\npsql: fatal: FATAL: database \\\"test\\\" does not exist\n\nIt is because psql appends its own level and then appends the message from the \nserver (including the server's log level). I don't think that it is nasty, \nbut it may confuse someone. Notice that without the patch the output is:\n\n$ psql test\npsql: FATAL: database \\\"test\\\" does not exist\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Wed, 13 Mar 2019 14:36:00 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-22 09:39:59 +0100, Peter Eisentraut wrote:\n> Here is an updated patch. I've finished the functionality to the point\n> where I'm content with it. I fixed up some of the remaining special\n> cases in pg_dump that I hadn't sorted out last time. I also moved the\n> scattered setvbuf(stderr, ...) handling (for Windows) into a central\n> place. Colors can now be configured, too.\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n> From 3e9aadf00ab582fed132e45c5745b1c38a4f59c9 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Fri, 22 Feb 2019 09:18:55 +0100\n> Subject: [PATCH v3] Unified logging system for command-line programs\n> \n> This unifies the various ad hoc logging (message printing, error\n> printing) systems used throughout the command-line programs.\n\nI'm unhappy about this being committed. I don't think there was\nterribly much buyin for this amount of duplicated infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 1 Apr 2019 11:31:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2019-04-01 20:31, Andres Freund wrote:\n> I'm unhappy about this being committed. I don't think there was\n> terribly much buyin for this amount of duplicated infrastructure.\n\nWhat duplicated infrastructure?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 20:48:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2019-04-01 20:48:41 +0200, Peter Eisentraut wrote:\n> On 2019-04-01 20:31, Andres Freund wrote:\n> > I'm unhappy about this being committed. I don't think there was\n> > terribly much buyin for this amount of duplicated infrastructure.\n> \n> What duplicated infrastructure?\n\nAs written upthread, I think this should have had a uniform interface\nwith elog.h, and probably even share some code between the two. This is\ngoing in the wrong direction, making it harder, not easier, to share\ncode between frontend and backend. While moving around as much code as\nwe'd have had to do if we'd gone to error reporting compatible with\nelog.h.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 1 Apr 2019 11:55:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On Mon, Apr 01, 2019 at 11:55:09AM -0700, Andres Freund wrote:\n> As written upthread, I think this should have had a uniform interface\n> with elog.h, and probably even share some code between the two. This is\n> going in the wrong direction, making it harder, not easier, to share\n> code between frontend and backend. While moving around as much code as\n> we'd have had to do if we'd gone to error reporting compatible with\n> elog.h.\n\nLike Andres, I am a bit disappointed that this stuff is not reducing\nthe amount of diff code with ifdef FRONTEND in src/common/. This\nactually adds more complexity than the original code in a couple of\nplaces, like this one which is less than nice:\n+#ifndef FRONTEND\n+#define pg_log_warning(...) elog(WARNING, __VA_ARGS__)\n+#else\n+#include \"fe_utils/logging.h\"\n+#endif\n--\nMichael",
"msg_date": "Tue, 2 Apr 2019 12:05:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2019-04-01 20:55, Andres Freund wrote:\n> A written upthread, I think this should have had a uniform interface\n> with elog.h, and probably even share some code between the two.\n\nThe purpose of this patch was to consolidate the existing zoo of logging\nroutines in the frontend programs. That has meaningful benefits. There\nis hardly any new code; most of the code was just consolidated from\nexisting scattered code.\n\nIf someone wants to take it further and consolidate that with the\nbackend logging infrastructure, they are free to propose a patch for\nconsideration. Surely the present patch can only help, since it already\nmakes the call sites uniform, which would presumably have to be done\nanyway. However, there is no prototype or even detailed design sketch\nlet alone someone committing to implement such a project. So it seems\nunreasonable to block other meaningful improvements in adjacent areas in\nthe meantime.\n\n> This is\n> going in the wrong direction, making it harder, not easier, to share\n> code between frontend and backend.\n\nI don't think anything has changed in that respect. If there is reason\nto believe that code that uses fprintf() is easier to share with the\nbackend than alternatives, then nothing is standing in the way of\ncontinuing to use that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 14:13:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2019-04-02 05:05, Michael Paquier wrote:\n> I am a bit disappointed that this stuff is not reducing\n> the amount of diff code with ifdef FRONTEND in src/common/.\n\nThat wasn't the purpose of the patch. If you have a concrete proposal\nfor how to do what you describe, it would surely be welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 14:26:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "I don't much like the code that does\n\n pg_log_error(\"%s\", something);\n\nbecause then the string \"%s\" is marked for translation. Maybe we should\nconsider a variant that takes a straight string literal instead of a\nsprintf-style fmt to avoid this problem. We'd do something like\n\n pg_log_error_v(something);\n\nwhich does not call _() within.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 16:56:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I don't much like the code that does\n> pg_log_error(\"%s\", something);\n\n> because then the string \"%s\" is marked for translation.\n\nUh, surely we've got hundreds of instances of that in the system already?\n\n> Maybe we should\n> consider a variant that takes a straight string literal instead of a\n> sprintf-style fmt to avoid this problem. We'd do something like\n> pg_log_error_v(something);\n> which does not call _() within.\n\nWhat it looks like that's doing is something similar to appendPQExpBuffer\nversus appendPQExpBufferStr, ie, just skipping the overhead of sprintf\nformat processing when you don't need it. The implications for\ntranslatability or not are unobvious, so I'm afraid this would result\nin confusion and missed translations.\n\nI'm not necessarily against some idea like this, but how do we\nseparate \"translatability\" from \"sprintf formatting\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2019 16:17:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
},
{
"msg_contents": "On 2019-Apr-02, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I don't much like the code that does\n> > pg_log_error(\"%s\", something);\n> \n> > because then the string \"%s\" is marked for translation.\n> \n> Uh, surely we've got hundreds of instances of that in the system already?\n\nActually, we don't have that many, but there are more than I remembered\nthere being -- my memory was telling me that I had eradicated them all\nin commit 55a70a023c3d but that's sadly misinformed. Seeing this (and\nalso because the API would become nastier than I thought it would), I'll\nleave this stuff be for now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 17:48:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unified logging system for command-line programs"
}
] |
[
{
"msg_contents": "contain_leaked_vars_walker asserts the following about MinMaxExpr:\n\n ...\n case T_MinMaxExpr:\n ...\n\n /*\n * We know these node types don't contain function calls; but\n * something further down in the node tree might.\n */\n break;\n\nNow, the idea that it \"doesn't contain a function call\" is nonsense,\nbecause the node will invoke the btree comparison function for the\ndatatype of its arguments. So this coding amounts to an undocumented\nassumption that every non-cross-type btree comparison function is\nleakproof.\n\nA quick catalog query finds 15 counterexamples just among the\nbuilt-in datatypes:\n\nselect p.oid::regprocedure from pg_proc p, pg_amproc a, pg_opfamily f\nwhere p.oid=a.amproc and f.oid=a.amprocfamily and opfmethod=403 and not proleakproof and amproclefttype=amprocrighttype and amprocnum=1;\n\n bpcharcmp(character,character)\n btarraycmp(anyarray,anyarray)\n btbpchar_pattern_cmp(character,character)\n btoidvectorcmp(oidvector,oidvector)\n btrecordcmp(record,record)\n btrecordimagecmp(record,record)\n bttext_pattern_cmp(text,text)\n bttextcmp(text,text)\n enum_cmp(anyenum,anyenum)\n jsonb_cmp(jsonb,jsonb)\n numeric_cmp(numeric,numeric)\n pg_lsn_cmp(pg_lsn,pg_lsn)\n range_cmp(anyrange,anyrange)\n tsquery_cmp(tsquery,tsquery)\n tsvector_cmp(tsvector,tsvector)\n\nso this assumption is, on its face, wrong.\n\nIn practice it might be all right, because it's hard to see a reason why\na btree comparison function would ever throw an error except for internal\nfailures, which are probably outside the scope of leakproofness guarantees\nanyway. Nonetheless, if we didn't mark these functions as leakproof,\nwhy not?\n\nI think that we should either change contain_leaked_vars_walker to\nexplicitly verify leakproofness of the comparison function, or decide\nthat it's project policy that btree comparison functions are leakproof,\nand change the markings on those (and their associated operators).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 30 Dec 2018 13:24:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Is MinMaxExpr really leakproof?"
},
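For reference, the markings Tom describes live in pg_proc.proleakproof and can be inspected or changed with ordinary DDL. A minimal sketch, assuming a live server and superuser privileges (bttextcmp is just one arbitrary entry from the list above):

```sql
-- Inspect the current leakproof markings (illustration only):
SELECT proname, proleakproof
FROM pg_proc
WHERE proname IN ('bttextcmp', 'texteq', 'text_lt');

-- A superuser can change a marking, but doing so without verifying
-- that the function truly reveals nothing about its arguments
-- weakens security_barrier views and row-level security:
ALTER FUNCTION bttextcmp(text, text) LEAKPROOF;
```

Operators carry no marking of their own; planner checks consult the proleakproof flag of the operator's underlying function, which is why the thread speaks of changing the markings on the comparison functions "and their associated operators" together.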
{
"msg_contents": "This thread duplicates https://postgr.es/m/flat/16539.1431472961%40sss.pgh.pa.us\n\nOn Sun, Dec 30, 2018 at 01:24:02PM -0500, Tom Lane wrote:\n> So this coding amounts to an undocumented\n> assumption that every non-cross-type btree comparison function is\n> leakproof.\n\n> select p.oid::regprocedure from pg_proc p, pg_amproc a, pg_opfamily f\n> where p.oid=a.amproc and f.oid=a.amprocfamily and opfmethod=403 and not proleakproof and amproclefttype=amprocrighttype and amprocnum=1;\n> \n> bpcharcmp(character,character)\n> btarraycmp(anyarray,anyarray)\n> btbpchar_pattern_cmp(character,character)\n> btoidvectorcmp(oidvector,oidvector)\n> btrecordcmp(record,record)\n> btrecordimagecmp(record,record)\n> bttext_pattern_cmp(text,text)\n> bttextcmp(text,text)\n> enum_cmp(anyenum,anyenum)\n> jsonb_cmp(jsonb,jsonb)\n> numeric_cmp(numeric,numeric)\n> pg_lsn_cmp(pg_lsn,pg_lsn)\n> range_cmp(anyrange,anyrange)\n> tsquery_cmp(tsquery,tsquery)\n> tsvector_cmp(tsvector,tsvector)\n> \n> so this assumption is, on its face, wrong.\n> \n> In practice it might be all right, because it's hard to see a reason why\n> a btree comparison function would ever throw an error except for internal\n> failures, which are probably outside the scope of leakproofness guarantees\n> anyway. Nonetheless, if we didn't mark these functions as leakproof,\n> why not?\n\npg_lsn_cmp() and btoidvectorcmp() surely could advertise leakproofness. I'm not\nsure about enum_cmp(), numeric_cmp(), tsquery_cmp() or tsvector_cmp(). I can't\nthink of a reason those would leak, though. btrecordcmp() and other polymorphic\ncmp functions can fail:\n\n create type boxrec as (a box); select '(\"(1,1),(0,0)\")'::boxrec = '(\"(1,1),(0,0)\")'::boxrec;\n => ERROR: could not identify an equality operator for type box\n\nThe documentation says, \"a function which throws an error message for some\nargument values but not others ... 
is not leakproof.\" I would be comfortable\namending that to allow the \"could not identify an equality operator\" error,\nbecause that error follows from type specifics, not value specifics.\n\nbttextcmp() and other varstr_cmp() callers fall afoul of the same restriction\nwith their \"could not convert string to UTF-16\" errors\n(https://postgr.es/m/CADyhKSXPwrUv%2B9LtqPAQ_gyZTv4hYbr2KwqBxcs6a3Vee1jBLQ%40mail.gmail.com).\nLeaking the binary fact that an unspecified string contains an unspecified rare\nUnicode character is not a serious leak, however. Also, those errors would be a\nsubstantial usability impediment if they happened much in practice; you couldn't\nindex affected values.\n\n> I think that we should either change contain_leaked_vars_walker to\n> explicitly verify leakproofness of the comparison function, or decide\n> that it's project policy that btree comparison functions are leakproof,\n> and change the markings on those (and their associated operators).\n\nEither of those solutions sounds fine. Like last time, I'll vote for explicitly\nverifying leakproofness.\n\n",
"msg_date": "Mon, 31 Dec 2018 12:25:51 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "On Mon, 31 Dec 2018 at 12:26, Noah Misch <noah@leadboat.com> wrote:\n\n>\n> bttextcmp() and other varstr_cmp() callers fall afoul of the same\n> restriction\n> with their \"could not convert string to UTF-16\" errors\n> (\n> https://postgr.es/m/CADyhKSXPwrUv%2B9LtqPAQ_gyZTv4hYbr2KwqBxcs6a3Vee1jBLQ%40mail.gmail.com\n> ).\n> Leaking the binary fact that an unspecified string contains an unspecified\n> rare\n> Unicode character is not a serious leak, however. Also, those errors\n> would be a\n> substantial usability impediment if they happened much in practice; you\n> couldn't\n> index affected values.\n>\n>\nI'm confused. What characters cannot be represented in UTF-16?\n\nOn Mon, 31 Dec 2018 at 12:26, Noah Misch <noah@leadboat.com> wrote:\nbttextcmp() and other varstr_cmp() callers fall afoul of the same restriction\nwith their \"could not convert string to UTF-16\" errors\n(https://postgr.es/m/CADyhKSXPwrUv%2B9LtqPAQ_gyZTv4hYbr2KwqBxcs6a3Vee1jBLQ%40mail.gmail.com).\nLeaking the binary fact that an unspecified string contains an unspecified rare\nUnicode character is not a serious leak, however. Also, those errors would be a\nsubstantial usability impediment if they happened much in practice; you couldn't\nindex affected values.\nI'm confused. What characters cannot be represented in UTF-16?",
"msg_date": "Mon, 31 Dec 2018 12:40:23 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> This thread duplicates https://postgr.es/m/flat/16539.1431472961%40sss.pgh.pa.us\n\nAh, so it does. Not sure why that fell off the radar without getting\nfixed; possibly because it was right before PGCon.\n\n> pg_lsn_cmp() and btoidvectorcmp() surely could advertise leakproofness.\n\nAgreed; I'll go fix those.\n\n> I'm not sure about enum_cmp(), numeric_cmp(), tsquery_cmp() or\n> tsvector_cmp(). I can't think of a reason those would leak, though.\n\nI've not looked at the last three, but enum_cmp can potentially report the\nvalue of an input, if it fails to find a matching pg_enum record:\n\n enum_tup = SearchSysCache1(ENUMOID, ObjectIdGetDatum(arg1));\n if (!HeapTupleIsValid(enum_tup))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),\n errmsg(\"invalid internal value for enum: %u\",\n arg1)));\n\nand there are similar error reports inside compare_values_of_enum().\nWhether that amounts to an interesting security leak is debatable.\nIt's hard to see how an attacker could arrange for those to fail,\nmuch less do so in a way that would reveal a value he didn't know\nalready.\n\n> btrecordcmp() and other polymorphic\n> cmp functions can fail:\n> create type boxrec as (a box); select '(\"(1,1),(0,0)\")'::boxrec = '(\"(1,1),(0,0)\")'::boxrec;\n> => ERROR: could not identify an equality operator for type box\n> The documentation says, \"a function which throws an error message for some\n> argument values but not others ... is not leakproof.\" I would be comfortable\n> amending that to allow the \"could not identify an equality operator\" error,\n> because that error follows from type specifics, not value specifics.\n\nI think the real issue with btrecordcmp, btarraycmp, etc, is that\nthey invoke other type-specific comparison functions. Therefore,\nwe cannot mark them leakproof unless every type-specific comparison\nfunction is leakproof. 
So we're right back at the policy question.\n\n> bttextcmp() and other varstr_cmp() callers fall afoul of the same restriction\n> with their \"could not convert string to UTF-16\" errors\n> (https://postgr.es/m/CADyhKSXPwrUv%2B9LtqPAQ_gyZTv4hYbr2KwqBxcs6a3Vee1jBLQ%40mail.gmail.com).\n> Leaking the binary fact that an unspecified string contains an unspecified rare\n> Unicode character is not a serious leak, however. Also, those errors would be a\n> substantial usability impediment if they happened much in practice; you couldn't\n> index affected values.\n\nYeah. I think that there might be a usability argument for marking\ntextcmp and related operators as leakproof despite this theoretical\nleak condition, because not marking them puts a large practical\nconstraint on what conditions we can optimize. However, that\ndiscussion just applies narrowly to the string data types; it is\nindependent of what we want to say the general policy is.\n\n>> I think that we should either change contain_leaked_vars_walker to\n>> explicitly verify leakproofness of the comparison function, or decide\n>> that it's project policy that btree comparison functions are leakproof,\n>> and change the markings on those (and their associated operators).\n\n> Either of those solutions sounds fine. Like last time, I'll vote for explicitly\n> verifying leakproofness.\n\nYeah, I'm leaning in that direction as well. Other than comparisons\ninvolving strings, it's not clear that we'd gain much from insisting\non leakproofness in general, and it seems like it might be rather a\nlarge burden to put on add-on data types.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 31 Dec 2018 12:58:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
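For concreteness, the "explicitly verify" option Tom and Noah favor would look roughly like the following inside contain_leaked_vars_walker. This is a hypothetical sketch, not code from any committed patch; it reuses the same type-cache lookup the executor performs when initializing a MinMaxExpr:

```c
case T_MinMaxExpr:
    {
        MinMaxExpr *minmaxexpr = (MinMaxExpr *) node;
        TypeCacheEntry *typentry;

        /*
         * MinMaxExpr invokes the default btree comparison function
         * for its result type, so check that function's marking
         * instead of assuming leakproofness.
         */
        typentry = lookup_type_cache(minmaxexpr->minmaxtype,
                                     TYPECACHE_CMP_PROC);
        if (!OidIsValid(typentry->cmp_proc) ||
            !get_func_leakproof(typentry->cmp_proc))
            return true;        /* treat the expression as leaky */
    }
    break;
```

Under this approach, add-on data types need no blanket leakproofness guarantee: a MinMaxExpr over a type whose comparison function is not marked leakproof is simply treated as leaky.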
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Mon, 31 Dec 2018 at 12:26, Noah Misch <noah@leadboat.com> wrote:\n>> bttextcmp() and other varstr_cmp() callers fall afoul of the same\n>> restriction with their \"could not convert string to UTF-16\" errors\n\n> I'm confused. What characters cannot be represented in UTF-16?\n\nWhat's actually being reported there is failure of Windows'\nMultiByteToWideChar function. Probable causes could include\ninvalid data (not valid UTF8), or conditions such as out-of-memory\nwhich might have nothing at all to do with the input.\n\nThere are similar, equally nonspecific, error messages in the\nnon-Windows code path.\n\nIn principle, an attacker might be able to find out the existence\nof extremely long strings in a column by noting out-of-memory\nfailures in this code, but that doesn't seem like a particularly\ninteresting information leak ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 31 Dec 2018 13:08:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> bttextcmp() and other varstr_cmp() callers fall afoul of the same\n >> restriction with their \"could not convert string to UTF-16\" errors\n >> (https://postgr.es/m/CADyhKSXPwrUv%2B9LtqPAQ_gyZTv4hYbr2KwqBxcs6a3Vee1jBLQ%40mail.gmail.com).\n >> Leaking the binary fact that an unspecified string contains an\n >> unspecified rare Unicode character is not a serious leak, however.\n >> Also, those errors would be a substantial usability impediment if\n >> they happened much in practice; you couldn't index affected values.\n\n Tom> Yeah. I think that there might be a usability argument for marking\n Tom> textcmp and related operators as leakproof despite this\n Tom> theoretical leak condition, because not marking them puts a large\n Tom> practical constraint on what conditions we can optimize. However,\n Tom> that discussion just applies narrowly to the string data types; it\n Tom> is independent of what we want to say the general policy is.\n\nI think that's not even a theoretical leak; the documentation for\nMultiByteToWideChar does not indicate any way in which it can return an\nerror for the specific parameters we pass to it. In particular we do not\ntell it to return errors for invalid input characters.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Mon, 31 Dec 2018 18:22:02 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Either of those solutions sounds fine. Like last time, I'll vote for explicitly\n> > verifying leakproofness.\n> \n> Yeah, I'm leaning in that direction as well. Other than comparisons\n> involving strings, it's not clear that we'd gain much from insisting\n> on leakproofness in general, and it seems like it might be rather a\n> large burden to put on add-on data types.\n\nWhile I'd actually like it if we required leakproofness for what we\nship, I agree that we shouldn't blindly assume that add-on data types\nare always leakproof, which then requires that we explicitly verify\nit. Perhaps an argument can be made that there are some cases where\nwhat we ship can't or shouldn't be leakproof for usability but, ideally,\nthose would be relatively rare exceptions that don't impact common\nuse-cases.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 2 Jan 2019 08:16:42 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> Either of those solutions sounds fine. Like last time, I'll vote for explicitly\n>>> verifying leakproofness.\n\n>> Yeah, I'm leaning in that direction as well. Other than comparisons\n>> involving strings, it's not clear that we'd gain much from insisting\n>> on leakproofness in general, and it seems like it might be rather a\n>> large burden to put on add-on data types.\n\n> While I'd actually like it if we required leakproofness for what we\n> ship, I agree that we shouldn't blindly assume that add-on data types\n> are always leakproof and that then requires that we explicitly verify\n> it. Perhaps an argument can be made that there are some cases where\n> what we ship can't or shouldn't be leakproof for usability but, ideally,\n> those would be relatively rare exceptions that don't impact common\n> use-cases.\n\nSeems like we have consensus that MinMaxExpr should verify leakproofness\nrather than just assume it, so I'll go fix that.\n\nWhat's your opinion on the question of whether to try to make text_cmp\net al leakproof? I notice that texteq/textne are (correctly) marked\nleakproof, so perhaps the usability issue isn't as pressing as I first\nthought; but it remains true that there are fairly common cases where\nthe current marking is going to impede optimization. 
I also realized\nthat in the wake of 586b98fdf, we have to remove the leakproofness\nmarking of name_cmp and name inequality comparisons --- which I did\nat d01e75d68, but that's potentially a regression in optimizability\nof catalog queries, so it's not very nice.\n\nAlso, I believe that Peter's work towards making text equality potentially\ncollation-sensitive will destroy the excuse for marking texteq/textne\nleakproof if we're going to be 100% rigid about that, and that would be\na very big regression.\n\nSo I'd like to get to a point where we're comfortable marking these\nfunctions leakproof despite the possibility of corner-case failures.\nWe could just decide that the existing failure cases in varstr_cmp are\nnot usefully exploitable for information leakage, or perhaps we could\ndumb down the error messages some more to make them even less so.\nIt'd also be nice to have some articulatable policy that encompasses\na choice like this.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 02 Jan 2019 14:47:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Noah Misch <noah@leadboat.com> writes:\n> >>> Either of those solutions sounds fine. Like last time, I'll vote for explicitly\n> >>> verifying leakproofness.\n> \n> >> Yeah, I'm leaning in that direction as well. Other than comparisons\n> >> involving strings, it's not clear that we'd gain much from insisting\n> >> on leakproofness in general, and it seems like it might be rather a\n> >> large burden to put on add-on data types.\n> \n> > While I'd actually like it if we required leakproofness for what we\n> > ship, I agree that we shouldn't blindly assume that add-on data types\n> > are always leakproof and that then requires that we explicitly verify\n> > it. Perhaps an argument can be made that there are some cases where\n> > what we ship can't or shouldn't be leakproof for usability but, ideally,\n> > those would be relatively rare exceptions that don't impact common\n> > use-cases.\n> \n> Seems like we have consensus that MinMaxExpr should verify leakproofness\n> rather than just assume it, so I'll go fix that.\n> \n> What's your opinion on the question of whether to try to make text_cmp\n> et al leakproof? I notice that texteq/textne are (correctly) marked\n> leakproof, so perhaps the usability issue isn't as pressing as I first\n> thought; but it remains true that there are fairly common cases where\n> the current marking is going to impede optimization. I also realized\n> that in the wake of 586b98fdf, we have to remove the leakproofness\n> marking of name_cmp and name inequality comparisons --- which I did\n> at d01e75d68, but that's potentially a regression in optimizability\n> of catalog queries, so it's not very nice.\n\nWell, as mentioned, I'd really be happier to have more things be\nleakproof, when they really are leakproof. 
What we've done in some\nplaces, and I'm not sure how practical this is elsewhere, is to show\ndata when we know the user is allowed to see it anyway, to aid in\ndebugging and such (I'm thinking here specifically of\nBuildIndexValueDescription(), which will just return a NULL in the case\nwhere the user doesn't have permission to view all of the columns\ninvolved). As these are error cases, we're generally happy to consider\nspending a bit of extra time to figure that out; is there something\nsimilar we could do for these cases where we'd really like to report\nuseful information to the user, but only if we think they're probably\nallowed to see it?\n\n> Also, I believe that Peter's work towards making text equality potentially\n> collation-sensitive will destroy the excuse for marking texteq/textne\n> leakproof if we're going to be 100% rigid about that, and that would be\n> a very big regression.\n\nThat could be a serious problem, I agree.\n\n> So I'd like to get to a point where we're comfortable marking these\n> functions leakproof despite the possibility of corner-case failures.\n> We could just decide that the existing failure cases in varstr_cmp are\n> not usefully exploitable for information leakage, or perhaps we could\n> dumb down the error messages some more to make them even less so.\n> It'd also be nice to have some articulatable policy that encompasses\n> a choice like this.\n\nI'd rather not say \"well, these are mostly leakproof and therefore it's\ngood enough\" unless those corner-case failures you're referring to are\nreally \"this system call isn't documented to ever fail in a way we can't\nhandle, but somehow it did and we're blowing up because of it.\"\n\nAt least, in the cases where we're actually leaking knowledge that we\nshouldn't be. If what we're leaking is some error being returned where\nall we're returning is an error code and not the actual data then that\ndoesn't seem like it's really much of a leak to me..? I'm just glancing\nthrough varstr_cmp and perhaps I'm missing something but it seems like\neverywhere we're returning an error, at least from there, it's an error\ncode of some kind being returned and not the data that was passed in to\nthe function. I didn't spend a lot of time hunting through it though.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 2 Jan 2019 15:59:50 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> So I'd like to get to a point where we're comfortable marking these\n>> functions leakproof despite the possibility of corner-case failures.\n>> We could just decide that the existing failure cases in varstr_cmp are\n>> not usefully exploitable for information leakage, or perhaps we could\n>> dumb down the error messages some more to make them even less so.\n>> It'd also be nice to have some articulatable policy that encompasses\n>> a choice like this.\n\n> I'd rather not say \"well, these are mostly leakproof and therefore it's\n> good enough\" unless those corner-case failures you're referring to are\n> really \"this system call isn't documented to ever fail in a way we can't\n> handle, but somehow it did and we're blowing up because of it.\"\n\nWell, that's pretty much what we've got here.\n\n1. As Noah noted, every failure case in varstr_cmp is ideally a can't\nhappen case, since if it could happen on valid data then that data\ncouldn't be put into a btree index.\n\n2. AFAICS, all the error messages in question just report that a system\noperation failed, with some errno string or equivalent; none of the\noriginal data is reported. (Obviously, we'd want to add comments\ndiscouraging people from changing that ...)\n\nConceivably, an attacker could learn the length of some long string\nby noting a palloc failure report --- but we've already accepted an\nequivalent hazard in texteq or byteaeq, I believe, and anyway it's\npretty hard to believe that an attacker could control such failures\nwell enough to weaponize it.\n\nSo the question boils down to whether you think that somebody could\ninfer something else useful about the contents of a string from\nthe strerror (or ICU u_errorName()) summary of a system function\nfailure. 
This seems like a pretty thin argument to begin with,\nand then the presumed difficulty of making such a failure happen\nrepeatably makes it even harder to credit as a useful information\nleak.\n\nSo I'm personally satisfied that we could mark text_cmp et al as\nleakproof, but I'm not sure how we define a project policy that\nsupports such a determination.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 02 Jan 2019 16:48:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> So I'd like to get to a point where we're comfortable marking these\n> >> functions leakproof despite the possibility of corner-case failures.\n> >> We could just decide that the existing failure cases in varstr_cmp are\n> >> not usefully exploitable for information leakage, or perhaps we could\n> >> dumb down the error messages some more to make them even less so.\n> >> It'd also be nice to have some articulatable policy that encompasses\n> >> a choice like this.\n> \n> > I'd rather not say \"well, these are mostly leakproof and therefore it's\n> > good enough\" unless those corner-case failures you're referring to are\n> > really \"this system call isn't documented to ever fail in a way we can't\n> > handle, but somehow it did and we're blowing up because of it.\"\n> \n> Well, that's pretty much what we've got here.\n\nGood. Those all almost certainly fall under the category of 'covert\nchannels' and provided they're low bandwidth and hard to control, as\nseems to be the case here, then I believe we can accept them. I'm\nafraid there isn't really any hard-and-fast definition that could be\nused as a basis for a project policy around this, unfortunately. We\ncertainly shouldn't be returning direct data from the heap or indexes as\npart of error messages in leakproof functions, and we should do our best\nto ensure that anything from system calls we make also don't, but\nstrerror-like results or the error codes themselves should be fine.\n\n> 1. As Noah noted, every failure case in varstr_cmp is ideally a can't\n> happen case, since if it could happen on valid data then that data\n> couldn't be put into a btree index.\n\nThat's certainly a good point.\n\n> 2. 
AFAICS, all the error messages in question just report that a system\n> operation failed, with some errno string or equivalent; none of the\n> original data is reported. (Obviously, we'd want to add comments\n> discouraging people from changing that ...)\n\nAgreed, we should definitely add comments here (and, really, in any\nother cases where we need to be thinking about similar issues..).\n\n> Conceivably, an attacker could learn the length of some long string\n> by noting a palloc failure report --- but we've already accepted an\n> equivalent hazard in texteq or byteaeq, I believe, and anyway it's\n> pretty hard to believe that an attacker could control such failures\n> well enough to weaponize it.\n\nRight, that's a low bandwidth covert channel and as such should be\nacceptable.\n\n> So the question boils down to whether you think that somebody could\n> infer something else useful about the contents of a string from\n> the strerror (or ICU u_errorName()) summary of a system function\n> failure. This seems like a pretty thin argument to begin with,\n> and then the presumed difficulty of making such a failure happen\n> repeatably makes it even harder to credit as a useful information\n> leak.\n> \n> So I'm personally satisfied that we could mark text_cmp et al as\n> leakproof, but I'm not sure how we define a project policy that\n> supports such a determination.\n\nI'm not sure how to formalize such a policy either, though perhaps we\ncould discuss specific \"don't do this\" things and have a light\ndiscussion about what covert channels are.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 3 Jan 2019 09:14:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Is MinMaxExpr really leakproof?"
}
] |
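The practical stakes of the thread above -- "the current marking is going to impede optimization" -- can be seen with a security_barrier view. A sketch assuming a live server; the exact plans depend on the server version and the markings in effect:

```sql
CREATE TABLE t (s text);
CREATE INDEX ON t (s);
CREATE VIEW v WITH (security_barrier = true) AS
    SELECT s FROM t WHERE s <> 'hidden';

-- texteq is marked leakproof, so this qual can be pushed down into
-- the view and use the index:
EXPLAIN SELECT * FROM v WHERE s = 'abc';

-- The ordering comparisons backed by bttextcmp are not marked
-- leakproof, so this qual must be evaluated above the security
-- barrier, forcing a scan of the whole view output:
EXPLAIN SELECT * FROM v WHERE s < 'abc';
```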
[
{
"msg_contents": "Hi,\n\nwhile messing around with slot code I noticed that the SQL functions for\nconsuming/moving logical replication slots only move restart_lsn up to\nthe previously consumed position and not to the currently consumed position. The\nreason for that is that restart_lsn is not moved forward unless the new\nvalue is smaller than the current confirmed_lsn of the slot. But we only\nupdate confirmed_lsn of the slot at the end of the SQL functions so we\ncan only move restart_lsn up to the position we reached on the previous\ncall. The same is true for catalog_xmin.\n\nThis does not really hurt much functionality-wise but it means that\nevery record is needlessly processed twice as we always restart from\nthe position that was reached 2 calls of the function ago and that we keep\nan older catalog_xmin than necessary which can potentially affect system\ncatalog bloat.\n\nThis affects both the pg_logical_slot_get_[binary_]changes and\npg_replication_slot_advance.\n\nThe attached patch improves things by adding a call to move the slot's\nrestart_lsn and catalog_xmin to the last serialized snapshot position\nright after we update the confirmed_lsn.\n\n-- \n Petr Jelinek http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 30 Dec 2018 20:27:51 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Moving slot restart_lsn/catalog_xmin more eagerly from SQL functions"
},
{
"msg_contents": "On 30/12/2018 20:27, Petr Jelinek wrote:\n> Hi,\n> \n> while messing around with slot code I noticed that the SQL functions for\n> consuming/moving logical replication slots only move restart_lsn up to\n> previously consumed position and not to currently consumed position. The\n> reason for that is that restart_lsn is not moved forward unless new\n> value is smaller that current confirmed_lsn of the slot. But we only\n> update confirmed_lsn of the slot at the end of the SQL functions so we\n> can only move restart_lsn up to the position we reached on previous\n> call. Same is true for catalog_xmin.\n> \n> This does not really hurt much functionality wise but it means that\n> every record is needlessly processed twice as we always restart from\n> position that was reached 2 calls of the function ago and that we keep\n> older catalog_xmin than necessary which can potentially affect system\n> catalog bloat.\n> \n> This affects both the pg_logical_slot_get_[binary_]changes and\n> pg_replication_slot_advance.\n> \n> Attached patch improves things by adding call to move the slot's\n> restart_lsn and catalog_xmin to the last serialized snapshot position\n> right after we update the confirmed_lsn.\n> \n\nMeh, and it has a copy-paste issue. Fixed in a new attachment.\n\n-- \n Petr Jelinek http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 30 Dec 2018 20:31:34 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving slot restart_lsn/catalog_xmin more eagerly from SQL\n functions"
}
] |
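The lag Petr describes in the thread above can be observed directly with test_decoding; a sketch assuming a live server with wal_level = logical and a free replication slot:

```sql
SELECT pg_create_logical_replication_slot('s1', 'test_decoding');

-- Generate some WAL, then consume it through the slot:
CREATE TABLE demo (i int);
INSERT INTO demo VALUES (1);
SELECT count(*) FROM pg_logical_slot_get_changes('s1', NULL, NULL);

-- Without the fix discussed here, restart_lsn and catalog_xmin
-- reflect only the position reached by the *previous* call of the
-- function, so they trail confirmed_flush_lsn by one call:
SELECT restart_lsn, confirmed_flush_lsn, catalog_xmin
FROM pg_replication_slots
WHERE slot_name = 's1';

SELECT pg_drop_replication_slot('s1');
```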
[
{
"msg_contents": "Hi,\n\nAs Andres has mentioned over at the minimal decoding on standby thread [1],\nthat functionality can be used to add a simple worker which periodically\nsynchronizes the slot state from the primary to a standby.\n\nThe attached patch is a rough implementation of such a worker. It's nowhere near\ncommittable in the current state; it serves primarily two purposes - to\nhave something over which we can agree on the approach (and if we do,\nserve as a base for it) and to demonstrate that the patch in [1] can\nindeed be used for this functionality. All this means that this patch\ndepends on [1] to work.\n\nThe approach chosen by me is to change the logical replication launcher\nto run also on a standby and to support a new type of worker which is\nstarted on a standby for the slot synchronization. The new worker\n(slotsync) is responsible for periodically fetching information from the\nprimary server and moving slots on the standby forward (using the fast\nforwarding functionality added in PG11) based on that. There is one\nworker per database (logical slots are per database, walrcv_exec needs a\ndb connection, etc). I had to add a new replication command for listing\nslots so that the launcher can check which databases on the upstream\nactually have slots and start the slotsync only for those. The second\npatch in the series just adds the ability to filter which slots are actually\nsynchronized.\n\nThis approach should be eventually portable to logical replication as\nwell. The only difference there is that we need to be able to map lsns\nof the publisher to the lsns of the subscriber. We already do that in\napply so that should be doable; I don't have that as a goal for the first\nversion of the feature though.\n\nThe basic functionality seems to be working pretty well, however there\nare several discussion points and unfinished parts:\n\na) Do we want to automatically create and drop slots when they get\ncreated on the primary? Currently the patch does auto-create but does\nnot auto-drop yet. There is no way to signal that a slot was dropped so I\ndon't see a straightforward way to differentiate between slots that have\nbeen dropped on master and those that only exist on standby. I guess if\nwe added the second feature with a slot list as well we could drop\nanything on that list that's not on the primary...\nb) The slot creation is somewhat interesting. The slot might be created\nwhile the standby does not have wal for existing slots on the primary because\nthey are behind the standby. We solve it by creating an ephemeral slot and\nwaiting for the primary slot to pass its lsn before persisting it\n(similarly to when we are trying to build an initial snapshot). This seems\nreasonable to me but the coding could use another pair of eyes there.\nc) With the periodical start/stop (for the move) of the decoding on the\nslot, the logging of every start of a decoding context is pretty\nannoying/spammy; we should probably tune that down.\nd) The launcher integration needs improvement - add a worker kind rather\nthan guessing from values of dbid, subid and relid and make decisions\nbased on that. Also the interfaces for manipulating the workers should\nprobably use LogicalRepWorkerId rather than the above-mentioned parameters\nand guessing everywhere.\ne) We probably should support synchronizing physical slots as well\n(currently we only sync logical slots). But that should be easy provided\nwe don't mind that logical replication launcher is somewhat of a misnomer then...\nf) Maybe walreceiver or startup should signal these new workers if\nenough data is processed, so it's not purely time based. But I think\nthat kind of optimization can be left for later.\n\nAlso (these are pretty pointless until we agree that this is the right\napproach):\n- there is no documentation update yet\n- there are no TAP tests yet\n- the recheck timer might need a GUC\n\n[1]\nhttps://www.postgresql.org/message-id/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de\n\n-- \n Petr Jelinek http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 30 Dec 2018 22:23:08 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Synchronizing slots from primary to standby"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 10:23 AM Petr Jelinek\n<petr.jelinek@2ndquadrant.com> wrote:\n> As Andres has mentioned over at minimal decoding on standby thread [1],\n> that functionality can be used to add simple worker which periodically\n> synchronizes the slot state from the primary to a standby.\n>\n> Attached patch is rough implementation of such worker. It's nowhere near\n> committable in the current state, it servers primarily two purposes - to\n> have something over what we can agree on the approach (and if we do,\n> serve as base for that) and to demonstrate that the patch in [1] can\n> indeed be used for this functionality. All this means that this patch\n> depends on the [1] to work.\n\nHi Petr,\n\nDo I understand correctly that this depends on the \"logical decoding\non standby\" patch, but that isn't in the Commitfest? Seems like an\noversight, since that thread has a recently posted v11 patch that\napplies OK, and there was recent review. You patches no longer apply\non top though. Would it make sense to post a patch set here including\nlogical-decoding-on-standby_v11.patch + your two patches (rebased),\nsince this is currently marked as \"Needs review\"?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 22:28:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Synchronizing slots from primary to standby"
}
] |
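The loop the slotsync worker runs — fetch the primary's slot positions, auto-create slots that are missing on the standby, and fast-forward any that lag — can be sketched as plain data manipulation. This is an illustrative Python model only: slot names map to integer LSNs, and it deliberately skips the ephemeral-slot creation dance and the unresolved auto-drop question from points (a) and (b) above.

```python
def sync_slots(primary_slots, standby_slots):
    """One pass of a slot-sync worker, modeled on the behaviour described
    in the thread.  Both arguments map slot name -> LSN (plain ints here).
    Mutates and returns standby_slots."""
    for name, primary_lsn in primary_slots.items():
        standby_lsn = standby_slots.get(name)
        if standby_lsn is None:
            # auto-create: the patch creates slots missing on the standby
            standby_slots[name] = primary_lsn
        elif standby_lsn < primary_lsn:
            # fast-forward toward the primary; never move a slot backwards
            standby_slots[name] = primary_lsn
    # slots present only on the standby are left untouched, matching the
    # patch's current no-auto-drop behaviour
    return standby_slots
```

A real worker would of course do this per database over a walreceiver connection and persist the state through the replication slot machinery; the point here is only the create/advance/no-drop decision table.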
[
{
"msg_contents": "To support logical decoding for zheap operations, we need a way to\nensure zheap tuples can be registered as change streams. One idea\ncould be that we make ReorderBufferChange aware of another kind of\ntuples as well, something like this:\n\n@@ -100,6 +123,20 @@ typedef struct ReorderBufferChange\n ReorderBufferTupleBuf *newtuple;\n } tp;\n+ struct\n+ {\n+ /* relation that has been changed */\n+ RelFileNode relnode;\n+\n+ /* no previously reassembled toast chunks are necessary anymore */\n+ bool clear_toast_afterwards;\n+\n+ /* valid for DELETE || UPDATE */\n+ ReorderBufferZTupleBuf *oldtuple;\n+ /* valid for INSERT || UPDATE */\n+ ReorderBufferZTupleBuf *newtuple;\n+ } ztp;\n+\n\n\n+/* an individual zheap tuple, stored in one chunk of memory */\n+typedef struct ReorderBufferZTupleBuf\n+{\n..\n+ /* tuple header, the interesting bit for users of logical decoding */\n+ ZHeapTupleData tuple;\n..\n+} ReorderBufferZTupleBuf;\n\nApart from this, we need to define different decode functions for\nzheap operations as the WAL data is different for heap and zheap, so\nsame functions can't be used to decode.\n\nI have written a very hacky version to support zheap Insert operation\nbased on the above idea. If we want to go with this approach, we\nmight need a better way to represent a different type of tuple in\nReorderBufferChange.\n\nThe yet another approach could be that in the decode functions after\nforming zheap tuples from WAL, we can convert them to heap tuples. I\nhave not tried that, so not sure if it can work, but it seems to me if\nwe can avoid tuple conversion overhead, it will be good.\n\nThis email is primarily to discuss about how the logical decoding for\nbasic DML operations (Insert/Update/Delete) will work in zheap. 
We\nmight need some special mechanism to deal with sub-transactions as\nzheap doesn't generate a transaction id for sub-transactions, but we\ncan discuss that separately.\n\nThoughts?\n\nNote - This patch is based on pluggable_zheap branch\n(https://github.com/anarazel/postgres-pluggable-storage)\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 31 Dec 2018 09:56:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 9:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> To support logical decoding for zheap operations, we need a way to\n> ensure zheap tuples can be registered as change streams. One idea\n> could be that we make ReorderBufferChange aware of another kind of\n> tuples as well, something like this:\n>\n..\n>\n> Apart from this, we need to define different decode functions for\n> zheap operations as the WAL data is different for heap and zheap, so\n> same functions can't be used to decode.\n>\n> I have written a very hacky version to support zheap Insert operation\n> based on the above idea.\n>\n\nI went ahead and tried to implement the decoding for Delete operation\nas well based on the above approach and the result is attached.\n\n>\n> The yet another approach could be that in the decode functions after\n> forming zheap tuples from WAL, we can convert them to heap tuples. I\n> have not tried that, so not sure if it can work, but it seems to me if\n> we can avoid tuple conversion overhead, it will be good.\n>\n\nWhile implementing the decoding for delete operation, I noticed that\nthe main changes required are to write a decode operation and\nadditional WAL (like old tuple) which anyway is required even if we\npursue this approach, so I think it might be better to with the\napproach where we don't need tuple conversion (aka something similar\nto what is done in attached patch).\n\nNote - This patch is based on pluggable-zheap branch\n(https://github.com/anarazel/postgres-pluggable-storage)\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Jan 2019 09:28:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "Hi,\n\nOn 2018-12-31 09:56:48 +0530, Amit Kapila wrote:\n> To support logical decoding for zheap operations, we need a way to\n> ensure zheap tuples can be registered as change streams. One idea\n> could be that we make ReorderBufferChange aware of another kind of\n> tuples as well, something like this:\n> \n> @@ -100,6 +123,20 @@ typedef struct ReorderBufferChange\n> ReorderBufferTupleBuf *newtuple;\n> } tp;\n> + struct\n> + {\n> + /* relation that has been changed */\n> + RelFileNode relnode;\n> +\n> + /* no previously reassembled toast chunks are necessary anymore */\n> + bool clear_toast_afterwards;\n> +\n> + /* valid for DELETE || UPDATE */\n> + ReorderBufferZTupleBuf *oldtuple;\n> + /* valid for INSERT || UPDATE */\n> + ReorderBufferZTupleBuf *newtuple;\n> + } ztp;\n> +\n> \n> \n> +/* an individual zheap tuple, stored in one chunk of memory */\n> +typedef struct ReorderBufferZTupleBuf\n> +{\n> ..\n> + /* tuple header, the interesting bit for users of logical decoding */\n> + ZHeapTupleData tuple;\n> ..\n> +} ReorderBufferZTupleBuf;\n> \n> Apart from this, we need to define different decode functions for\n> zheap operations as the WAL data is different for heap and zheap, so\n> same functions can't be used to decode.\n\nI'm very strongly opposed to that. We shouldn't have expose every\npossible storage method to output plugins, that'll make extensibility\na farce. I think we'll either have to re-form a HeapTuple or decide\nto bite the bullet and start exposing tuples via slots.\n\n\n> This email is primarily to discuss about how the logical decoding for\n> basic DML operations (Insert/Update/Delete) will work in zheap. 
We\n> might need some special mechanism to deal with sub-transactions as\n> zheap doesn't generate a transaction id for sub-transactions, but we\n> can discuss that separately.\n\nSubtransactions seems to be the hardest part besides the tuple format\nissue, so I think we should discuss that very soon.\n\n> +/*\n> + * Write zheap's INSERT to the output stream.\n> + */\n> +void\n> +logicalrep_write_zinsert(StringInfo out, Relation rel, ZHeapTuple newtuple)\n> +{\n> +\tpq_sendbyte(out, 'I');\t\t/* action INSERT */\n> +\n> +\tAssert(rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT ||\n> +\t\t rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL ||\n> +\t\t rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX);\n> +\n> +\t/* use Oid as relation identifier */\n> +\tpq_sendint32(out, RelationGetRelid(rel));\n> +\n> +\tpq_sendbyte(out, 'N');\t\t/* new tuple follows */\n> +\t//logicalrep_write_tuple(out, rel, newtuple);\n> +}\n\nObviously we need to do better - I don't think we should have\ntuple-specific replication messages.\n\n\n> /*\n> * Write relation description to the output stream.\n> */\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index 23466bade2..70fb5e2934 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -393,6 +393,19 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change)\n> \t\t\t\tchange->data.tp.oldtuple = NULL;\n> \t\t\t}\n> \t\t\tbreak;\n> +\t\tcase REORDER_BUFFER_CHANGE_ZINSERT:\n\nThis really needs to be undistinguishable from normal CHANGE_INSERT...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 3 Jan 2019 10:00:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On 2019-Jan-03, Andres Freund wrote:\n\n> > Apart from this, we need to define different decode functions for\n> > zheap operations as the WAL data is different for heap and zheap, so\n> > same functions can't be used to decode.\n> \n> I'm very strongly opposed to that. We shouldn't have expose every\n> possible storage method to output plugins, that'll make extensibility\n> a farce. I think we'll either have to re-form a HeapTuple or decide\n> to bite the bullet and start exposing tuples via slots.\n\nHmm, without looking at the patches, I agree that the tuples should be\ngiven as slots to the logical decoding interface. I wonder if we need a\nfurther function in the TTS interface to help decoding, or is the\n\"getattr\" stuff sufficient.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 15:13:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-03 15:13:42 -0300, Alvaro Herrera wrote:\n> On 2019-Jan-03, Andres Freund wrote:\n> \n> > > Apart from this, we need to define different decode functions for\n> > > zheap operations as the WAL data is different for heap and zheap, so\n> > > same functions can't be used to decode.\n> > \n> > I'm very strongly opposed to that. We shouldn't have expose every\n> > possible storage method to output plugins, that'll make extensibility\n> > a farce. I think we'll either have to re-form a HeapTuple or decide\n> > to bite the bullet and start exposing tuples via slots.\n> \n> Hmm, without looking at the patches, I agree that the tuples should be\n> given as slots to the logical decoding interface. I wonder if we need a\n> further function in the TTS interface to help decoding, or is the\n> \"getattr\" stuff sufficient.\n\nWhat precisely do you mean with \"getattr stuff\"? I'd assume that you'd\nnormally do a slot_getallattrs() and then access tts_values/nulls\ndirectly. I don't think there's anything missing in the slot interface\nitself, but using slots probably would require some careful\nconsiderations around memory management, possibly a decoding specific\nslot implementation even.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 3 Jan 2019 10:23:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On 2019-Jan-03, Andres Freund wrote:\n\n> Hi,\n> \n> On 2019-01-03 15:13:42 -0300, Alvaro Herrera wrote:\n\n> > Hmm, without looking at the patches, I agree that the tuples should be\n> > given as slots to the logical decoding interface. I wonder if we need a\n> > further function in the TTS interface to help decoding, or is the\n> > \"getattr\" stuff sufficient.\n> \n> What precisely do you mean with \"getattr stuff\"? I'd assume that you'd\n> normally do a slot_getallattrs() and then access tts_values/nulls\n> directly.\n\nAh, yeah, you deform the tuple first and then access the arrays\ndirectly, right. I was just agreeing with your point that forming a\nheaptuple only to have logical decoding grab individual attrs from there\ndidn't sound terribly optimal.\n\n> I don't think there's anything missing in the slot interface itself,\n> but using slots probably would require some careful considerations\n> around memory management, possibly a decoding specific slot\n> implementation even.\n\nA specific slot implementation sounds like more work than I was\nenvisioning. Can't we just \"pin\" a slot to a memory context or\nsomething like that, to keep it alive until decoding is done with it?\nIt seems useful to avoid creating another copy of the tuple in memory\n(which we would need if, if I understand you correctly, we need to form\nthe tuple under a different slot implementation from whatever the origin\nis).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 15:38:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On Thu, Jan 3, 2019 at 11:30 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2018-12-31 09:56:48 +0530, Amit Kapila wrote:\n> > To support logical decoding for zheap operations, we need a way to\n> > ensure zheap tuples can be registered as change streams. One idea\n> > could be that we make ReorderBufferChange aware of another kind of\n> > tuples as well, something like this:\n> >\n> > @@ -100,6 +123,20 @@ typedef struct ReorderBufferChange\n> > ReorderBufferTupleBuf *newtuple;\n> > } tp;\n> > + struct\n> > + {\n> > + /* relation that has been changed */\n> > + RelFileNode relnode;\n> > +\n> > + /* no previously reassembled toast chunks are necessary anymore */\n> > + bool clear_toast_afterwards;\n> > +\n> > + /* valid for DELETE || UPDATE */\n> > + ReorderBufferZTupleBuf *oldtuple;\n> > + /* valid for INSERT || UPDATE */\n> > + ReorderBufferZTupleBuf *newtuple;\n> > + } ztp;\n> > +\n> >\n> >\n> > +/* an individual zheap tuple, stored in one chunk of memory */\n> > +typedef struct ReorderBufferZTupleBuf\n> > +{\n> > ..\n> > + /* tuple header, the interesting bit for users of logical decoding */\n> > + ZHeapTupleData tuple;\n> > ..\n> > +} ReorderBufferZTupleBuf;\n> >\n> > Apart from this, we need to define different decode functions for\n> > zheap operations as the WAL data is different for heap and zheap, so\n> > same functions can't be used to decode.\n>\n> I'm very strongly opposed to that. We shouldn't have expose every\n> possible storage method to output plugins, that'll make extensibility\n> a farce. I think we'll either have to re-form a HeapTuple or decide\n> to bite the bullet and start exposing tuples via slots.\n>\n\nTo be clear, you are against exposing different format of tuples to\nplugins, not having different decoding routines for other storage\nengines, because later part is unavoidable due to WAL format. 
Now,\nabout tuple format, I guess it would be a lot better if we expose via\nslots, but won't that make existing plugins to change the way they\ndecode the tuple, maybe that is okay? OTOH, re-forming the heap tuple\nhas a cost which might be okay for the time being or first version,\nbut eventually, we want to avoid that. The other reason why I\nrefrained from tuple conversion was that I was not sure if we anywhere\nrely on the transaction information in the tuple during decode\nprocess, because that will be tricky to mimic, but I guess we don't\ncheck that.\n\nThe only point for exposing a different tuple format via plugin was a\nperformance which I think can be addressed if we expose via slots. I\ndon't want to take up exposing slots instead of tuples for plugins as\npart of this project and I think if we want to go with that, it is\nbetter done as part of pluggable API?\n\n>\n> > This email is primarily to discuss about how the logical decoding for\n> > basic DML operations (Insert/Update/Delete) will work in zheap. 
We\n> > might need some special mechanism to deal with sub-transactions as\n> > zheap doesn't generate a transaction id for sub-transactions, but we\n> > can discuss that separately.\n>\n> Subtransactions seems to be the hardest part besides the tuple format\n> issue, so I think we should discuss that very soon.\n>\n\nAgreed, I am going to look at that part next.\n\n>\n> > /*\n> > * Write relation description to the output stream.\n> > */\n> > diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> > index 23466bade2..70fb5e2934 100644\n> > --- a/src/backend/replication/logical/reorderbuffer.c\n> > +++ b/src/backend/replication/logical/reorderbuffer.c\n> > @@ -393,6 +393,19 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change)\n> > change->data.tp.oldtuple = NULL;\n> > }\n> > break;\n> > + case REORDER_BUFFER_CHANGE_ZINSERT:\n>\n> This really needs to be undistinguishable from normal CHANGE_INSERT...\n>\n\nSure, it will be if we decide to either re-form heap tuple or expose via slots.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 4 Jan 2019 08:54:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-04 08:54:34 +0530, Amit Kapila wrote:\n> On Thu, Jan 3, 2019 at 11:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2018-12-31 09:56:48 +0530, Amit Kapila wrote:\n> > > To support logical decoding for zheap operations, we need a way to\n> > > ensure zheap tuples can be registered as change streams. One idea\n> > > could be that we make ReorderBufferChange aware of another kind of\n> > > tuples as well, something like this:\n> > >\n> > > @@ -100,6 +123,20 @@ typedef struct ReorderBufferChange\n> > > ReorderBufferTupleBuf *newtuple;\n> > > } tp;\n> > > + struct\n> > > + {\n> > > + /* relation that has been changed */\n> > > + RelFileNode relnode;\n> > > +\n> > > + /* no previously reassembled toast chunks are necessary anymore */\n> > > + bool clear_toast_afterwards;\n> > > +\n> > > + /* valid for DELETE || UPDATE */\n> > > + ReorderBufferZTupleBuf *oldtuple;\n> > > + /* valid for INSERT || UPDATE */\n> > > + ReorderBufferZTupleBuf *newtuple;\n> > > + } ztp;\n> > > +\n> > >\n> > >\n> > > +/* an individual zheap tuple, stored in one chunk of memory */\n> > > +typedef struct ReorderBufferZTupleBuf\n> > > +{\n> > > ..\n> > > + /* tuple header, the interesting bit for users of logical decoding */\n> > > + ZHeapTupleData tuple;\n> > > ..\n> > > +} ReorderBufferZTupleBuf;\n> > >\n> > > Apart from this, we need to define different decode functions for\n> > > zheap operations as the WAL data is different for heap and zheap, so\n> > > same functions can't be used to decode.\n> >\n> > I'm very strongly opposed to that. We shouldn't have expose every\n> > possible storage method to output plugins, that'll make extensibility\n> > a farce. 
I think we'll either have to re-form a HeapTuple or decide\n> > to bite the bullet and start exposing tuples via slots.\n> >\n> \n> To be clear, you are against exposing different format of tuples to\n> plugins, not having different decoding routines for other storage\n> engines, because later part is unavoidable due to WAL format.\n\nCorrect.\n\n\n> Now,\n> about tuple format, I guess it would be a lot better if we expose via\n> slots, but won't that make existing plugins to change the way they\n> decode the tuple, maybe that is okay?\n\nI think one-off API changes are ok. What I'm strictly against is\nprimarily that output plugins will have to deal with more and more\ndifferent tuple formats.\n\n\n> OTOH, re-forming the heap tuple\n> has a cost which might be okay for the time being or first version,\n> but eventually, we want to avoid that.\n\nRight.\n\n\n> The other reason why I refrained from tuple conversion was that I\n> was not sure if we anywhere rely on the transaction information in\n> the tuple during decode process, because that will be tricky to\n> mimic, but I guess we don't check that.\n\nShouldn't be necessary - in fact, most of that information isn't in\nthe heap wal records in the first place.\n\n\n> The only point for exposing a different tuple format via plugin was a\n> performance which I think can be addressed if we expose via slots. I\n> don't want to take up exposing slots instead of tuples for plugins as\n> part of this project and I think if we want to go with that, it is\n> better done as part of pluggable API?\n\nNo, I don't think it makes sense to address this is as part of\npluggable storage. That patchset is already way too invasive and\nlarge.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 3 Jan 2019 19:30:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On Fri, Jan 4, 2019 at 9:01 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-01-04 08:54:34 +0530, Amit Kapila wrote:\n> > The only point for exposing a different tuple format via plugin was a\n> > performance which I think can be addressed if we expose via slots. I\n> > don't want to take up exposing slots instead of tuples for plugins as\n> > part of this project and I think if we want to go with that, it is\n> > better done as part of pluggable API?\n>\n> No, I don't think it makes sense to address this is as part of\n> pluggable storage. That patchset is already way too invasive and\n> large.\n>\n\nFair enough. I think that for now (and maybe for the first version\nthat can be committed) we might want to use heap tuple format. There\nwill be some overhead but I think code-wise, things will be simpler.\nI have prototyped it for Insert and Delete operations of zheap and the\nonly thing that is required are new decode functions, see the attached\npatch. I have done very minimal testing of this patch as this is just\nto show you and others the direction we are taking (w.r.t tuple\nformat) to support logical decoding in zheap.\n\nThanks for the feedback, further thoughts are welcome!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 12 Jan 2019 17:02:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
},
{
"msg_contents": "On Sat, Jan 12, 2019 at 5:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Fair enough. I think that for now (and maybe for the first version\n> that can be committed) we might want to use heap tuple format. There\n> will be some overhead but I think code-wise, things will be simpler.\n> I have prototyped it for Insert and Delete operations of zheap and the\n> only thing that is required are new decode functions, see the attached\n> patch. I have done very minimal testing of this patch as this is just\n> to show you and others the direction we are taking (w.r.t tuple\n> format) to support logical decoding in zheap.\n+ */\n+ zhtup = DecodeXLogZTuple(tupledata, datalen);\n+ reloid = RelidByRelfilenode(change->data.tp.relnode.spcNode,\n+ change->data.tp.relnode.relNode);\n+ relation = RelationIdGetRelation(reloid);\n\nWe need to start a transaction for fetching the relation if it's a\nwalsender process. I have fixed this issue in the patch and also\nimplemented decode functions for zheap update and multi-insert.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 22 Jan 2019 10:07:30 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical decoding for operations on zheap tables"
}
] |
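The consensus above — keep storage-specific decode routines, but re-form their tuples into one common representation (a re-formed HeapTuple for now, slots eventually) before output plugins see them — can be illustrated with a toy model. The record layouts and field names below are invented for illustration and are not the actual heap or zheap WAL formats:

```python
def decode_heap(record):
    """Toy 'heap' decode: attributes arrive directly in output order."""
    attrs = record["attrs"]
    return {"values": list(attrs), "nulls": [v is None for v in attrs]}


def decode_zheap(record):
    """Toy 'zheap' decode: non-null attributes are stored compacted next
    to a null bitmap, so we re-form them into the same slot-like shape
    that decode_heap produces -- the plugin never sees the difference."""
    values, compacted = [], iter(record["attrs"])
    for is_null in record["null_bitmap"]:
        values.append(None if is_null else next(compacted))
    return {"values": values, "nulls": list(record["null_bitmap"])}
```

Whatever common form wins (re-formed HeapTuple or TupleTableSlot), the key property is that both decoders converge on it, so output plugins stay storage-agnostic.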
[
{
"msg_contents": "Dear All\r\nI put a printf statement in the \"main.c\" code and built it. Later when I tried to execute INITDB, I got the following error\r\n\r\nThe program \"postgres\" was found by xxxx but was not the same version as initdb.Check your installation\r\n\r\nAfter some analysis, I figured out that this error is being generated because \"ret\" code from \"PG_CTL.c\" is returning a non zero return code while it is comparing the line and versionstr in \"exec.c\". It looks like while reading the line in \"pipe_read_line\" method, it is concatenating the printf statement with the postgres version(postgres (PostgreSQL)xxxxx).\r\n\r\nI thought this probably is a defect and maybe the buffer needs to be flushed out before reading it in \"pipe_read_line\" method. Before doing further investigation and putting a possible fix, I thought to check with this group if it is worth putting the effort.\r\n\r\nThanks\r\nRajib\r\n\r\n\n\n\n\n\n\n\n\n\n\nDear All\nI put a printf statement in the \"main.c\" code and built it. Later when I tried to execute INITDB, I got the following error\n \nThe program \"postgres\" was found by xxxx but was not the same version as initdb.Check your installation\n \nAfter some analysis, I figured out that this error is being generated because \"ret\" code from \"PG_CTL.c\" is returning a non zero return code while it is comparing the line and versionstr in \"exec.c\". It looks like while reading the line in \"pipe_read_line\"\r\nmethod, it is concatenating the printf statement with the postgres version(postgres (PostgreSQL)xxxxx). \n \nI thought this probably is a defect and maybe the buffer needs to be flushed out before reading it in \"pipe_read_line\" method. Before doing further investigation and putting a possible fix, I thought to check with this group if it is worth putting the\r\neffort.\n \nThanks\nRajib",
"msg_date": "Mon, 31 Dec 2018 04:43:01 +0000",
"msg_from": "Rajib Deb <Rajib_Deb@infosys.com>",
"msg_from_op": true,
"msg_subject": "Error while executing initdb..."
},
{
"msg_contents": "On Mon, 31 Dec 2018 at 17:43, Rajib Deb <Rajib_Deb@infosys.com> wrote:\n>\n> Dear All\n> I put a printf statement in the \"main.c\" code and built it. Later when I tried to execute INITDB, I got the following error\n>\n> The program \"postgres\" was found by xxxx but was not the same version as initdb.Check your installation\n>\n> After some analysis, I figured out that this error is being generated because \"ret\" code from \"PG_CTL.c\" is returning a non zero return code while it is comparing the line and versionstr in \"exec.c\". It looks like while reading the line in \"pipe_read_line\" method, it is concatenating the printf statement with the postgres version(postgres (PostgreSQL)xxxxx).\n>\n> I thought this probably is a defect and maybe the buffer needs to be flushed out before reading it in \"pipe_read_line\" method. Before doing further investigation and putting a possible fix, I thought to check with this group if it is worth putting the effort.\n\n From looking at the code, it appears what happens is that\nfind_other_exec() calls \"postgres -V\" and reads the first line of the\noutput, so probably what's going on is you're printing out your\nadditional line even when postgres is called with -V or --version...\nWould it not be easier just not to do that?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 1 Jan 2019 02:04:13 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Error while executing initdb..."
}
] |
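David's diagnosis — find_other_exec() runs the binary with -V and compares only the first line of its stdout (via pipe_read_line()) against the expected version string, so any stray printf to stdout breaks the match — can be reproduced with a small Python simulation. Child Python processes stand in for the postgres binary, and the version string is just an example:

```python
import subprocess
import sys

def first_line_of(cmd):
    """Mimic pipe_read_line(): run cmd, return the first line of stdout."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return out.splitlines()[0] if out else ""

version = "postgres (PostgreSQL) 11.1"
clean = [sys.executable, "-c", f"print({version!r})"]
# a leftover debug printf ahead of the -V output, as in the report
noisy = [sys.executable, "-c", f"print('debug'); print({version!r})"]

assert first_line_of(clean) == version   # version check passes
assert first_line_of(noisy) != version   # stray stdout output breaks it
```

This is also why debug instrumentation in main.c is safer written to stderr (fprintf(stderr, ...)) than to stdout.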
[
{
"msg_contents": "Hi!\n\nI'd like to propose a small patch for make_ctags script. It checks if ctags \nutility is intalled or not. If not it reports an error and advises to install \nctags.\n\nThis will make life of a person that uses ctags first time in his life a bit \neasier.\n\nI use command -v to detect if ctags command exists. It is POSIX standard and I \nhope it exist in all shells.",
"msg_date": "Mon, 31 Dec 2018 19:04:08 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "[PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "В письме от понедельник, 31 декабря 2018 г. 19:04:08 MSK пользователь Nikolay \nShaplov написал:\n\n> I'd like to propose a small patch for make_ctags script. It checks if ctags\n> utility is intalled or not. If not it reports an error and advises to\n> install ctags.\nOups. I've misplaced '&' character :-)\n\nHere is the right version",
"msg_date": "Mon, 31 Dec 2018 19:19:39 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 07:19:39PM +0300, Nikolay Shaplov wrote:\n> В письме от понедельник, 31 декабря 2018 г. 19:04:08 MSK пользователь Nikolay \n> Shaplov написал:\n> \n>> I'd like to propose a small patch for make_ctags script. It checks if ctags\n>> utility is intalled or not. If not it reports an error and advises to\n>> install ctags.\n>\n> Oups. I've misplaced '&' character :-)\n\nNot sure if that's something worse bothering about, but you could do\nthe same in src/tools/make_etags.\n--\nMichael",
"msg_date": "Tue, 1 Jan 2019 11:24:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "В письме от вторник, 1 января 2019 г. 11:24:11 MSK пользователь Michael \nPaquier написал:\n\n> Not sure if that's something worse bothering about, but you could do\n> the same in src/tools/make_etags.\nGood idea. Done.\n\n(I did not do it in the first place because I do not use etags and can't \nproperly check it, but really if some files are created, then everything should \nbe working well. This is good enough check :-) )",
"msg_date": "Tue, 01 Jan 2019 19:44:04 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "On 01/01/2019 17:44, Nikolay Shaplov wrote:\n> +if [ ! $(command -v ctags) ]\n> +then\n> + echo \"'ctags' utility is not found\" 1>&2\n> + echo \"Please install 'ctags' to run make_ctags\" 1>&2\n> + exit 1\n> +fi\n\nThis assumes that the ctags and etags programs are part of packages of\nthe same name. I don't think that is always the case.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 2 Jan 2019 15:03:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
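For reference, the semantics of the patch's availability check can be modeled outside the shell: Python's shutil.which() performs the same PATH lookup that POSIX `command -v` does, and the helper below mirrors the patch's report-and-bail behaviour. The name `require_tool` is mine, not from the patch:

```python
import shutil
import sys

def require_tool(name):
    """Return True if `name` resolves on PATH; otherwise report the
    missing utility on stderr and return False, mirroring the check
    added to make_ctags."""
    if shutil.which(name) is None:
        print(f"'{name}' utility is not found", file=sys.stderr)
        return False
    return True
```

In the script itself the failing branch would `exit 1`; returning the status here just makes the behaviour easy to exercise.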
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 01/01/2019 17:44, Nikolay Shaplov wrote:\n>> +if [ ! $(command -v ctags) ]\n>> +then\n>> + echo \"'ctags' utility is not found\" 1>&2\n>> + echo \"Please install 'ctags' to run make_ctags\" 1>&2\n>> + exit 1\n>> +fi\n\n> This assumes that the ctags and etags programs are part of packages of\n> the same name. I don't think that is always the case.\n\nIn fact, that's demonstrably not so: on my RHEL6 and Fedora boxes,\n/usr/bin/etags isn't owned by any package, because it's a symlink\nmanaged by the \"alternatives\" system. It points to /usr/bin/etags.emacs\nwhich is owned by the emacs-common package. So dropping the advice\nabout how to fix the problem seems like a good plan.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 02 Jan 2019 11:35:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "On Wed, Jan 02, 2019 at 11:35:46AM -0500, Tom Lane wrote:\n> In fact, that's demonstrably not so: on my RHEL6 and Fedora boxes,\n> /usr/bin/etags isn't owned by any package, because it's a symlink\n> managed by the \"alternatives\" system. It points to /usr/bin/etags.emacs\n> which is owned by the emacs-common package. So dropping the advice\n> about how to fix the problem seems like a good plan.\n\n+1, let's keep it simple. I would just use \"ctags/etags not found\"\nas error message.\n--\nMichael",
"msg_date": "Thu, 3 Jan 2019 10:03:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "In a message of Thursday, 3 January 2019 10:03:53 MSK, Michael \nPaquier wrote:\n> On Wed, Jan 02, 2019 at 11:35:46AM -0500, Tom Lane wrote:\n> > In fact, that's demonstrably not so: on my RHEL6 and Fedora boxes,\n> > /usr/bin/etags isn't owned by any package, because it's a symlink\n> > managed by the \"alternatives\" system. It points to /usr/bin/etags.emacs\n> > which is owned by the emacs-common package. So dropping the advice\n> > about how to fix the problem seems like a good plan.\n> \n> +1, let's keep it simple. I would just use \"ctags/etags not found\"\n> as error message.\n\nActually I was trying to say \"Please install 'ctags' [utility] to run \nmake_ctags\". But if all of you read it as \"Please install 'ctags' [package] to \nrun make_ctags\", then it is really better to drop the advice.\n\nSo I removed it. See the patch.",
"msg_date": "Thu, 03 Jan 2019 14:15:11 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "On 03/01/2019 12:15, Nikolay Shaplov wrote:\n>> +1, let's keep it simple. I would just use \"ctags/etags not found\"\n>> as error message.\n> \n> Actually I was trying to say \"Please install 'ctags' [utility] to run \n> make_ctags\". But if all of you read it as \"Please install 'ctags' [package] to \n> run make_ctags\", then it is really better to drop the advice.\n> \n> So I removed it. See the patch.\n\nA few more comments.\n\nI don't know how portable command -v is. Some systems have a /bin/sh\nthat is pre-POSIX. Same with $(...).\n\nIf etags is not installed, the current script prints\n\n xargs: etags: No such file or directory\n\nI don't see the need to do more than that, especially if it makes the\nscript twice as long.\n\n(Personally, I'd recommend removing make_etags altogether and using GNU\nGlobal for Emacs.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 3 Jan 2019 12:52:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "In a message of Thursday, 3 January 2019 12:52:36 MSK, Peter \nEisentraut wrote:\n\n> >> +1, let's keep it simple. I would just use \"ctags/etags not found\"\n> >> as error message.\n> > \n> > Actually I was trying to say \"Please install 'ctags' [utility] to run\n> > make_ctags\". But if all of you read it as \"Please install 'ctags'\n> > [package] to run make_ctags\", then it is really better to drop the\n> > advice.\n> > \n> > So I removed it. See the patch.\n> \n> A few more comments.\n> \n> I don't know how portable command -v is. Some systems have a /bin/sh\n> that is pre-POSIX. Same with $(...).\nDo you know how to obtain such a shell in Debian? I have dash for sh, and it \nknows both command -v and $(), and I have no idea how to get a simpler one. \nDo you have one?\n\nDo you know a way to check whether the shell is pre-POSIX and just disable the \ncheck in that case?\n\nOr can you offer another check that would satisfy you as a potential user \nof a pre-POSIX shell? A check that will somehow report that the ctags _executable_ \nfile is missing. \n\n> If etags is not installed, the current script prints\n> \n> xargs: etags: No such file or directory\n\nmake_ctags prints\n\n xargs: ctags: No such file or directory\n sort: cannot read: tags: No such file or directory\n\nFor me that is not a good enough error message: it says it can't find some ctags|\netags file, but says nothing about the utility that is missing...\n \nSo I would try to find a better way to report that the ctags utility is missing.\n\nPS Vitus, I added you to CC, because I know that you are quite good at bash \nscripting; maybe you can offer some good ideas I do not have.\n\n\n",
"msg_date": "Sun, 06 Jan 2019 15:42:20 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "Nikolay Shaplov <dhyan@nataraj.su> writes:\n> In a message of Thursday, 3 January 2019 12:52:36 MSK, Peter \n> Eisentraut wrote:\n>> I don't know how portable command -v is. Some systems have a /bin/sh\n>> that is pre-POSIX. Same with $(...).\n\n> Do you know how to obtain such a shell in Debian?\n\nTBH, when I first saw this patch, I had the same reaction as Peter,\nie I wondered how portable this was. However, upon investigation:\n\n1. \"command -v <something>\" is specified by Single Unix Spec v2,\nwhich we've considered as our baseline portability requirement\nfor a good long time now.\n\n2. Even my pet dinosaur HPUX 10.20 box recognizes it. I do not\nbelieve anybody working on PG these days is using something older.\n\n3. These scripts aren't part of any build or runtime process,\nthey're only useful for development. We've long felt that it's\nokay to have higher requirements for development environments\nthan for production. Besides, do you really think anybody's\ngoing to be doing PG v12+ development on a box with a pre-SUSv2\nshell and a C99 compiler?\n\nWe need not get into the question of whether $(...) is portable,\nbecause the way it's being used is not: if command -v does not\nfind the target command, it prints nothing, so that at least\nsome systems will do this:\n\n$ if [ ! $(command -v notetags) ]\n> then \n> echo not found \n> fi\nksh: test: argument expected\n\n(I'm not very sure why bash fails to act that way, actually.\n\"!\" with nothing after it shouldn't be valid syntax for test(1),\nyou'd think.)\n\nThe correct way to code this is to depend on the exit code,\nnot the text output:\n\nif command -v etags >/dev/null\nthen\n : ok\nelse\n echo etags not found\n exit 1\nfi\n\nWe could alternatively try to use \"which\" in the same way,\nbut I'm dubious that it's more portable than \"command\".\n(AFAICT, \"which\" is *not* in POSIX.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 06 Jan 2019 12:16:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "\nOn 1/6/19 12:16 PM, Tom Lane wrote:\n>\n> The correct way to code this is to depend on the exit code,\n> not the text output:\n>\n> if command -v etags >/dev/null\n> then\n> : ok\n> else\n> echo etags not found\n> exit 1\n> fi\n\n\nmore succinctly,\n\n\n command -v etags >/dev/null || { echo etags not found; exit 1;}\n\n \n\n\n> We could alternatively try to use \"which\" in the same way,\n> but I'm dubious that it's more portable than \"command\".\n> (AFAICT, \"which\" is *not* in POSIX.)\n>\n> \t\t\t\n\n\nIndeed. I know I have some systems where it's lacking.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 6 Jan 2019 17:50:36 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "In a message of Sunday, 6 January 2019 17:50:36 MSK, Andrew \nDunstan wrote:\n\n> > The correct way to code this is to depend on the exit code,\n> > not the text output:\n> > \n> > if command -v etags >/dev/null\n> > then\n> > : ok\n> > else\n> > echo etags not found\n> > exit 1\n> > fi\n> \n> more succinctly,\n> command -v etags >/dev/null || { echo etags not found; exit 1;}\n\nIf it is good enough for you, then it is good for me for sure...\nIncorporated it into the patch.",
"msg_date": "Mon, 07 Jan 2019 20:42:35 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
},
{
"msg_contents": "Nikolay Shaplov <dhyan@nataraj.su> writes:\n> [ check-for-ctags-in-make_ctags_v5.diff ]\n\nPushed with minor editorialization on the wording of the error messages.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 13 Jan 2019 13:34:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] check for ctags utility in make_ctags"
}
] |
[
{
"msg_contents": "When \"make check TEMP_CONFIG=<(echo break_me=on)\" spawns a postmaster that\nfails startup, we detect that with \"pg_regress: postmaster did not respond\nwithin 60 seconds\". pg_regress has a kill(postmaster_pid, 0) intended to\ndetect this case faster. Since kill(ZOMBIE-PID, 0) succeeds[1], that test is\nineffective. The fix, attached, is to instead test waitpid(), like pg_ctl's\nwait_for_postmaster() does.\n\n[1] Search for \"zombie\" in\nhttp://pubs.opengroup.org/onlinepubs/9699919799/functions/kill.html",
"msg_date": "Mon, 31 Dec 2018 12:29:22 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "pg_regress: promptly detect failed postmaster startup"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> When \"make check TEMP_CONFIG=<(echo break_me=on)\" spawns a postmaster that\n> fails startup, we detect that with \"pg_regress: postmaster did not respond\n> within 60 seconds\". pg_regress has a kill(postmaster_pid, 0) intended to\n> detect this case faster. Since kill(ZOMBIE-PID, 0) succeeds[1], that test is\n> ineffective.\n\nOoops.\n\n> The fix, attached, is to instead test waitpid(), like pg_ctl's\n> wait_for_postmaster() does.\n\n+1. This leaves postmaster_pid as a dangling pointer, but since\nwe just exit immediately, that seems fine. (If we continued, and\narrived at the \"kill(postmaster_pid, SIGKILL)\" below, it would not\nbe fine.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 31 Dec 2018 13:51:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: promptly detect failed postmaster startup"
}
] |
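The zombie-PID pitfall discussed in this thread is easy to reproduce outside pg_regress. The sketch below (a Python illustration of the same POSIX syscalls, not pg_regress code) forks a child that exits immediately, shows that kill(pid, 0) still succeeds while the unreaped child is a zombie, and that waitpid() with WNOHANG reliably observes the exit:

```python
import os
import time

# Fork a "postmaster" that fails startup immediately.
pid = os.fork()
if pid == 0:
    os._exit(1)

time.sleep(0.5)   # child has exited, but we have not reaped it: it is a zombie

# kill(pid, 0) succeeds for a zombie, so it cannot detect the failed startup;
# it would raise ProcessLookupError only if the PID no longer existed at all.
os.kill(pid, 0)

# waitpid() with WNOHANG, as pg_ctl's wait_for_postmaster() uses, sees the exit.
reaped, status = os.waitpid(pid, os.WNOHANG)
exited = os.WIFEXITED(status)
exit_code = os.WEXITSTATUS(status)
```

Per POSIX, kill() with signal 0 only checks that the PID exists, and a zombie still occupies its PID until reaped, which is why the patch switches pg_regress from the kill() probe to a waitpid() test.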
[
{
"msg_contents": "In our query logs I saw:\n\npostgres=# SELECT log_time, session_id, session_line, left(message,99), left(query,99) FROM postgres_log WHERE error_severity='ERROR' AND message NOT LIKE 'cancel%';\n-[ RECORD 1 ]+----------------------------------------------------------------------------------------------------\nlog_time | 2018-12-31 15:39:11.917-05\nsession_id | 5c2a7e6f.1fa4\nsession_line | 1\nleft | dsa_area could not attach to segment\nleft | SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array\n-[ RECORD 2 ]+----------------------------------------------------------------------------------------------------\nlog_time | 2018-12-31 15:39:11.917-05\nsession_id | 5c2a7e6f.1fa3\nsession_line | 4\nleft | dsa_area could not attach to segment\nleft | SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array\n\nThe full query + plan is:\n\n|ts=# explain SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types\n|FROM queued_alters qa\n|JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped\n|JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped\n|WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid\n|GROUP BY 1,2\n|ORDER BY\n|parent LIKE 'unused%', -- Do them last\n|regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\3\\5') DESC,\n| regexp_replace(colcld.child, '.*_', '') DESC\n|LIMIT 1;\n\n|QUERY PLAN\n|Limit (cost=67128.06..67128.06 rows=1 width=307)\n| -> Sort (cost=67128.06..67137.84 rows=3912 width=307)\n| Sort Key: (((qa_1.parent)::text ~~ 'unused%'::text)), 
(regexp_replace((((pg_attribute.attrelid)::regclass)::text), '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$'::text, '\\3\\5'::text)) DESC, (regexp_replace((((pg_attribute.attrelid)::regclass)::text), '.*_'::text, ''::text)) DESC\n| -> GroupAggregate (cost=66893.34..67108.50 rows=3912 width=307)\n| Group Key: (((pg_attribute.attrelid)::regclass)::text), qa_1.parent\n| -> Sort (cost=66893.34..66903.12 rows=3912 width=256)\n| Sort Key: (((pg_attribute.attrelid)::regclass)::text), qa_1.parent\n| -> Gather (cost=40582.28..66659.91 rows=3912 width=256)\n| Workers Planned: 2\n| -> Parallel Hash Join (cost=39582.28..65268.71 rows=1630 width=256)\n| Hash Cond: (((to_regclass((qa_1.child)::text))::oid = pg_attribute.attrelid) AND (colpar.attname = pg_attribute.attname))\n| Join Filter: (colpar.atttypid <> pg_attribute.atttypid)\n| -> Nested Loop (cost=0.43..25614.89 rows=11873 width=366)\n| -> Parallel Append (cost=0.00..12.00 rows=105 width=292)\n| -> Parallel Seq Scan on queued_alters_child qa_1 (cost=0.00..11.47 rows=147 width=292)\n| -> Parallel Seq Scan on queued_alters qa (cost=0.00..0.00 rows=1 width=292)\n| -> Index Scan using pg_attribute_relid_attnum_index on pg_attribute colpar (cost=0.43..242.70 rows=114 width=78)\n| Index Cond: ((attrelid = (to_regclass((qa_1.parent)::text))::oid) AND (attnum > 0))\n| Filter: (NOT attisdropped)\n| -> Parallel Hash (cost=33651.38..33651.38 rows=395365 width=72)\n| -> Parallel Seq Scan on pg_attribute (cost=0.00..33651.38 rows=395365 width=72)\n| Filter: ((NOT attisdropped) AND (attnum > 0))\n|\n\nqueued_alters is usually empty, and looks like it would've last been nonempty on 2018-12-10.\n\nts=# \\d queued_alters\n Table \"public.queued_alters\"\n Column | Type | Collation | Nullable | Default\n--------+-----------------------+-----------+----------+---------\n child | character varying(64) | | |\n parent | character varying(64) | | |\nIndexes:\n \"queued_alters_child_parent_key\" UNIQUE CONSTRAINT, btree (child, 
parent)\nNumber of child tables: 1 (Use \\d+ to list them.)\n\nI found this other log at that time:\n 2018-12-31 15:39:11.918-05 | 30831 | 5bf38e71.786f | 5 | background worker \"parallel worker\" (PID 8100) exited with exit code 1\n\nWhich is the postmaster, or its PID in any case..\n\n[pryzbyj@telsasoft-db ~]$ ps -wwwf 30831\nUID PID PPID C STIME TTY STAT TIME CMD\npostgres 30831 1 0 Nov19 ? S 62:44 /usr/pgsql-11/bin/postmaster -D /var/lib/pgsql/11/data\n\npostgres=# SELECT log_time, pid, session_line, left(message,99) FROM postgres_log WHERE session_id='5bf38e71.786f' ;\n log_time | pid | session_line | left\n----------------------------+-------+--------------+-------------------------------------------------------------------------\n 2018-12-31 15:39:11.918-05 | 30831 | 5 | background worker \"parallel worker\" (PID 8100) exited with exit code 1\n 2018-12-31 15:39:11.935-05 | 30831 | 6 | background worker \"parallel worker\" (PID 8101) exited with exit code 1\n 2018-12-31 16:40:47.42-05 | 30831 | 7 | background worker \"parallel worker\" (PID 7239) exited with exit code 1\n 2018-12-31 16:40:47.42-05 | 30831 | 8 | background worker \"parallel worker\" (PID 7240) exited with exit code 1\n 2018-12-31 16:41:00.151-05 | 30831 | 9 | background worker \"parallel worker\" (PID 7371) exited with exit code 1\n 2018-12-31 16:41:00.151-05 | 30831 | 10 | background worker \"parallel worker\" (PID 7372) exited with exit code 1\n 2018-12-31 16:41:04.024-05 | 30831 | 11 | background worker \"parallel worker\" (PID 7451) exited with exit code 1\n 2018-12-31 16:41:04.024-05 | 30831 | 12 | background worker \"parallel worker\" (PID 7450) exited with exit code 1\n 2018-12-31 16:41:23.845-05 | 30831 | 13 | background worker \"parallel worker\" (PID 7658) exited with exit code 1\n 2018-12-31 16:41:23.845-05 | 30831 | 14 | background worker \"parallel worker\" (PID 7659) exited with exit code 1\n 2018-12-31 16:43:58.854-05 | 30831 | 15 | background worker \"parallel worker\" (PID 
10825) exited with exit code 1\n 2018-12-31 16:43:58.854-05 | 30831 | 16 | background worker \"parallel worker\" (PID 10824) exited with exit code 1\n\nI seem to be missing logs for session_lines 1-4, which would've been rotated\nto oblivion if older than 24h.\n\nI found these:\nhttps://www.postgresql.org/message-id/flat/CAPa4P2YzgRTBHHRx2KAPUO1_xkmqQgT2xP0tS_e%3DphWvNzWdkA%40mail.gmail.com#4a1a94bc71d4c99c01babc759fe5b6ec\nhttps://www.postgresql.org/message-id/CAEepm=0Mv9BigJPpribGQhnHqVGYo2+kmzekGUVJJc9Y_ZVaYA@mail.gmail.com\n\nWhich appears to have been committed, so I think this error is not unexpected.\n\n|commit fd7c0fa732d97a4b4ebb58730e6244ea30d0a618\n|Author: Robert Haas <rhaas@postgresql.org>\n|Date: Mon Dec 18 12:17:37 2017 -0500\n|\n|Fix crashes on plans with multiple Gather (Merge) nodes.\n\nI will try to reproduce and provide bt...but all I can think to do is run a\ntight loop around that query and hope something else comes by and tickles it a\nfew more times.\n\nHappy Grecian newyear.\n\nJustin\n\n",
"msg_date": "Mon, 31 Dec 2018 16:17:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi Justin,\n\nOn Tue, Jan 1, 2019 at 11:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> dsa_area could not attach to segment\n\n /*\n * If we are reached by dsa_free or dsa_get_address,\nthere must be at\n * least one object allocated in the referenced\nsegment. Otherwise,\n * their caller has a double-free or access-after-free\nbug, which we\n * have no hope of detecting. So we know it's safe to\naccess this\n * array slot without holding a lock; it won't change\nunderneath us.\n * Furthermore, we know that we can see the latest\ncontents of the\n * slot, as explained in check_for_freed_segments, which those\n * functions call before arriving here.\n */\n handle = area->control->segment_handles[index];\n\n /* It's an error to try to access an unused slot. */\n if (handle == DSM_HANDLE_INVALID)\n elog(ERROR,\n \"dsa_area could not attach to a\nsegment that has been freed\");\n\n segment = dsm_attach(handle);\n if (segment == NULL)\n elog(ERROR, \"dsa_area could not attach to segment\");\n\nHmm. We observed a valid handle, but then couldn't attach to it,\nwhich could indicate that the value we saw was stale (and the theory\nstated above has a hole?), or the segment was in the process of being\nfreed and we have a use-after-free problem leading to this race, or\nsomething else along those lines. If you can reproduce this on a dev\nsystem, it'd be good to see the backtrace and know which of several\ncall paths led here, perhaps by changing that error to PANIC. I'll\ntry that too.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Thu, 3 Jan 2019 12:29:40 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "I finally reproduced this with core..\n\nFor some reason I needed to write assert() rather than elog(PANIC), otherwise\nit failed with ERROR and no core..\n\n@@ -1741,4 +1743,5 @@ get_segment_by_index(dsa_area *area, dsa_segment_index index)\n segment = dsm_attach(handle);\n+ assert (segment != NULL);\n if (segment == NULL)\n- elog(ERROR, \"dsa_area could not attach to segment\");\n+ elog(PANIC, \"dsa_area could not attach to segment\");\n if (area->mapping_pinned)\n\nOn Mon, Dec 03, 2018 at 11:45:00AM +1300, Thomas Munro wrote: \n> If anyone can reproduce this problem with a debugger, it'd be \n> interesting to see the output of dsa_dump(area), and \n> FreePageManagerDump(segment_map->fpm).\n\nLooks like this will take some work, is it ok if I make a coredump available to\nyou ? I'm not sure how sensitive it is to re/compilation, but I'm using PG11.1\ncompiled locally on centos6.\n\n/var/log/postgresql/postgresql-2019-02-05_111730.log-< 2019-02-05 11:17:31.372 EST >LOG: background worker \"parallel worker\" (PID 17110) was terminated by signal 6: Aborted\n/var/log/postgresql/postgresql-2019-02-05_111730.log:< 2019-02-05 11:17:31.372 EST >DETAIL: Failed process was running: SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\3\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\n\n(gdb) bt\n#0 
0x00000037b9c32495 in raise () from /lib64/libc.so.6\n#1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\n#2 0x00000037b9c2b60e in __assert_fail_base () from /lib64/libc.so.6\n#3 0x00000037b9c2b6d0 in __assert_fail () from /lib64/libc.so.6\n#4 0x00000000008c4a72 in get_segment_by_index (area=0x2788440, index=<value optimized out>) at dsa.c:1744\n#5 0x00000000008c58e9 in get_best_segment (area=0x2788440, npages=8) at dsa.c:1995\n#6 0x00000000008c6c99 in dsa_allocate_extended (area=0x2788440, size=32768, flags=0) at dsa.c:703\n#7 0x000000000064c6fe in ExecParallelHashTupleAlloc (hashtable=0x27affb0, size=104, shared=0x7ffc6b5cfc48) at nodeHash.c:2837\n#8 0x000000000064cb92 in ExecParallelHashTableInsert (hashtable=0x27affb0, slot=<value optimized out>, hashvalue=423104953) at nodeHash.c:1693\n#9 0x000000000064cf17 in MultiExecParallelHash (node=0x27a1ed8) at nodeHash.c:288\n#10 MultiExecHash (node=0x27a1ed8) at nodeHash.c:112\n#11 0x000000000064e1f8 in ExecHashJoinImpl (pstate=0x2793038) at nodeHashjoin.c:290\n#12 ExecParallelHashJoin (pstate=0x2793038) at nodeHashjoin.c:581\n#13 0x0000000000638ce0 in ExecProcNodeInstr (node=0x2793038) at execProcnode.c:461\n#14 0x00000000006349c7 in ExecProcNode (queryDesc=0x2782cd0, direction=<value optimized out>, count=0, execute_once=56) at ../../../src/include/executor/executor.h:237\n#15 ExecutePlan (queryDesc=0x2782cd0, direction=<value optimized out>, count=0, execute_once=56) at execMain.c:1723\n#16 standard_ExecutorRun (queryDesc=0x2782cd0, direction=<value optimized out>, count=0, execute_once=56) at execMain.c:364\n#17 0x00007f84a97c8618 in pgss_ExecutorRun (queryDesc=0x2782cd0, direction=ForwardScanDirection, count=0, execute_once=true) at pg_stat_statements.c:892\n#18 0x00007f84a93357dd in explain_ExecutorRun (queryDesc=0x2782cd0, direction=ForwardScanDirection, count=0, execute_once=true) at auto_explain.c:268\n#19 0x0000000000635071 in ParallelQueryMain (seg=0x268fba8, toc=0x7f84a9578000) at 
execParallel.c:1402\n#20 0x0000000000508f34 in ParallelWorkerMain (main_arg=<value optimized out>) at parallel.c:1409\n#21 0x0000000000704760 in StartBackgroundWorker () at bgworker.c:834\n#22 0x000000000070e11c in do_start_bgworker () at postmaster.c:5698\n#23 maybe_start_bgworkers () at postmaster.c:5911\n#24 0x0000000000710786 in sigusr1_handler (postgres_signal_arg=<value optimized out>) at postmaster.c:5091\n#25 <signal handler called>\n#26 0x00000037b9ce1603 in __select_nocancel () from /lib64/libc.so.6\n#27 0x000000000071300e in ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1670\n#28 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1379\n#29 0x000000000067e8c0 in main (argc=3, argv=0x265f960) at main.c:228\n\n#0 0x00000037b9c32495 in raise () from /lib64/libc.so.6\nNo symbol table info available.\n#1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\nNo symbol table info available.\n#2 0x00000037b9c2b60e in __assert_fail_base () from /lib64/libc.so.6\nNo symbol table info available.\n#3 0x00000037b9c2b6d0 in __assert_fail () from /lib64/libc.so.6\nNo symbol table info available.\n#4 0x00000000008c4a72 in get_segment_by_index (area=0x2788440, index=<value optimized out>) at dsa.c:1744\n handle = <value optimized out>\n segment = 0x0\n segment_map = <value optimized out>\n __func__ = \"get_segment_by_index\"\n __PRETTY_FUNCTION__ = \"get_segment_by_index\"\n#5 0x00000000008c58e9 in get_best_segment (area=0x2788440, npages=8) at dsa.c:1995\n segment_map = <value optimized out>\n next_segment_index = <value optimized out>\n contiguous_pages = <value optimized out>\n threshold = 512\n segment_index = 10\n bin = <value optimized out>\n#6 0x00000000008c6c99 in dsa_allocate_extended (area=0x2788440, size=32768, flags=0) at dsa.c:703\n npages = 8\n first_page = <value optimized out>\n span_pointer = 8796097199728\n pool = 0x7f84a9579730\n size_class = <value optimized out>\n start_pointer 
= <value optimized out>\n segment_map = <value optimized out>\n result = 140207753496128\n __func__ = \"dsa_allocate_extended\"\n __PRETTY_FUNCTION__ = \"dsa_allocate_extended\"\n#7 0x000000000064c6fe in ExecParallelHashTupleAlloc (hashtable=0x27affb0, size=104, shared=0x7ffc6b5cfc48) at nodeHash.c:2837\n pstate = 0x7f84a9578540\n chunk_shared = <value optimized out>\n chunk = <value optimized out>\n chunk_size = 32768\n result = <value optimized out>\n curbatch = 0\n#8 0x000000000064cb92 in ExecParallelHashTableInsert (hashtable=0x27affb0, slot=<value optimized out>, hashvalue=423104953) at nodeHash.c:1693\n hashTuple = <value optimized out>\n tuple = 0x27b00c8\n shared = <value optimized out>\n bucketno = 1577401\n batchno = 0\n#9 0x000000000064cf17 in MultiExecParallelHash (node=0x27a1ed8) at nodeHash.c:288\n outerNode = 0x27a1ff0\n hashkeys = 0x27af110\n slot = 0x27a3d70\n econtext = 0x27a3798\n hashvalue = 423104953\n i = <value optimized out>\n pstate = 0x7f84a9578540\n hashtable = 0x27affb0\n build_barrier = 0x7f84a9578590\n#10 MultiExecHash (node=0x27a1ed8) at nodeHash.c:112\nNo locals.\n#11 0x000000000064e1f8 in ExecHashJoinImpl (pstate=0x2793038) at nodeHashjoin.c:290\n outerNode = 0x2792f20\n hashNode = 0x27a1ed8\n econtext = 0x2792c68\n outerTupleSlot = 0x1\n node = 0x2793038\n joinqual = 0x27ac270\n otherqual = 0x0\n hashtable = 0x27affb0\n hashvalue = 0\n batchno = 41493896\n parallel_state = 0x7f84a9578540\n#12 ExecParallelHashJoin (pstate=0x2793038) at nodeHashjoin.c:581\nNo locals.\n\nJustin\n\n",
"msg_date": "Tue, 5 Feb 2019 10:35:09 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "And here's the \"dsa_allocate could not find %zu free pages\" error with core.\n\n@@ -726,5 +728,5 @@ dsa_allocate_extended(dsa_area *area, size_t size, int flags)\n */\n- if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n- elog(FATAL,\n- \"dsa_allocate could not find %zu free pages\", npages);\n+ assert (FreePageManagerGet(segment_map->fpm, npages, &first_page));\n+\n+ // if (!FreePageManagerGet(segment_map->fpm, npages, &first_page)) elog(PANIC, \"dsa_allocate could not find %zu free pages\", npages);\n LWLockRelease(DSA_AREA_LOCK(area));\n\n< 2019-02-05 13:23:29.137 EST >LOG: background worker \"parallel worker\" (PID 7140) was terminated by signal 6: Aborted\n< 2019-02-05 13:23:29.137 EST >DETAIL: Failed process was running: explain analyze SELECT * FROM eric_enodeb_metrics WHERE start_time>='2017-10-01' AND (site_id<1900 OR site_id>2700)\n\n#0 0x00000037b9c32495 in raise () from /lib64/libc.so.6\n#1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\n#2 0x00000037b9c2b60e in __assert_fail_base () from /lib64/libc.so.6\n#3 0x00000037b9c2b6d0 in __assert_fail () from /lib64/libc.so.6\n#4 0x00000000008c6f74 in dsa_allocate_extended (area=0x27c05e0, size=393220, flags=5) at dsa.c:729\n#5 0x000000000068521f in pagetable_allocate (pagetable=<value optimized out>, size=<value optimized out>) at tidbitmap.c:1511\n#6 0x00000000006876d2 in pagetable_grow (tbm=0x7f84a8d86a58, pageno=1635) at ../../../src/include/lib/simplehash.h:383\n#7 pagetable_insert (tbm=0x7f84a8d86a58, pageno=1635) at ../../../src/include/lib/simplehash.h:508\n#8 tbm_get_pageentry (tbm=0x7f84a8d86a58, pageno=1635) at tidbitmap.c:1225\n#9 0x0000000000687c50 in tbm_add_tuples (tbm=0x7f84a8d86a58, tids=<value optimized out>, ntids=1, recheck=false) at tidbitmap.c:405\n#10 0x00000000004e43df in btgetbitmap (scan=0x2829fa8, tbm=0x7f84a8d86a58) at nbtree.c:332\n#11 0x00000000004d8a91 in index_getbitmap (scan=0x2829fa8, bitmap=<value optimized out>) at indexam.c:726\n#12 
0x0000000000647c98 in MultiExecBitmapIndexScan (node=0x2829720) at nodeBitmapIndexscan.c:104\n#13 0x0000000000646078 in MultiExecBitmapOr (node=0x28046e8) at nodeBitmapOr.c:153\n#14 0x0000000000646afd in BitmapHeapNext (node=0x2828db8) at nodeBitmapHeapscan.c:145\n#15 0x000000000063a550 in ExecScanFetch (node=0x2828db8, accessMtd=0x6469e0 <BitmapHeapNext>, recheckMtd=0x646740 <BitmapHeapRecheck>) at execScan.c:95\n#16 ExecScan (node=0x2828db8, accessMtd=0x6469e0 <BitmapHeapNext>, recheckMtd=0x646740 <BitmapHeapRecheck>) at execScan.c:162\n#17 0x0000000000638ce0 in ExecProcNodeInstr (node=0x2828db8) at execProcnode.c:461\n#18 0x00000000006414fc in ExecProcNode (pstate=<value optimized out>) at ../../../src/include/executor/executor.h:237\n#19 ExecAppend (pstate=<value optimized out>) at nodeAppend.c:294\n#20 0x0000000000638ce0 in ExecProcNodeInstr (node=0x27cb0a0) at execProcnode.c:461\n#21 0x00000000006349c7 in ExecProcNode (queryDesc=0x7f84a8de7520, direction=<value optimized out>, count=0, execute_once=160) at ../../../src/include/executor/executor.h:237\n#22 ExecutePlan (queryDesc=0x7f84a8de7520, direction=<value optimized out>, count=0, execute_once=160) at execMain.c:1723\n#23 standard_ExecutorRun (queryDesc=0x7f84a8de7520, direction=<value optimized out>, count=0, execute_once=160) at execMain.c:364\n#24 0x00007f84a97c8618 in pgss_ExecutorRun (queryDesc=0x7f84a8de7520, direction=ForwardScanDirection, count=0, execute_once=true) at pg_stat_statements.c:892\n#25 0x00007f84a91aa7dd in explain_ExecutorRun (queryDesc=0x7f84a8de7520, direction=ForwardScanDirection, count=0, execute_once=true) at auto_explain.c:268\n#26 0x0000000000635071 in ParallelQueryMain (seg=0x268fba8, toc=0x7f84a93ed000) at execParallel.c:1402\n#27 0x0000000000508f34 in ParallelWorkerMain (main_arg=<value optimized out>) at parallel.c:1409\n#28 0x0000000000704760 in StartBackgroundWorker () at bgworker.c:834\n#29 0x000000000070e11c in do_start_bgworker () at postmaster.c:5698\n#30 
maybe_start_bgworkers () at postmaster.c:5911\n#31 0x0000000000710786 in sigusr1_handler (postgres_signal_arg=<value optimized out>) at postmaster.c:5091\n#32 <signal handler called>\n#33 0x00000037b9ce1603 in __select_nocancel () from /lib64/libc.so.6\n#34 0x000000000071300e in ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1670\n#35 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1379\n#36 0x000000000067e8c0 in main (argc=3, argv=0x265f960) at main.c:228\n\n#0 0x00000037b9c32495 in raise () from /lib64/libc.so.6\nNo symbol table info available.\n#1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\nNo symbol table info available.\n#2 0x00000037b9c2b60e in __assert_fail_base () from /lib64/libc.so.6\nNo symbol table info available.\n#3 0x00000037b9c2b6d0 in __assert_fail () from /lib64/libc.so.6\nNo symbol table info available.\n#4 0x00000000008c6f74 in dsa_allocate_extended (area=0x27c05e0, size=393220, flags=5) at dsa.c:729\n npages = 97\n first_page = <value optimized out>\n span_pointer = 1099511632488\n pool = 0x7f84a93eecf0\n size_class = <value optimized out>\n start_pointer = <value optimized out>\n segment_map = <value optimized out>\n result = 140207751879680\n __func__ = \"dsa_allocate_extended\"\n __PRETTY_FUNCTION__ = \"dsa_allocate_extended\"\n#5 0x000000000068521f in pagetable_allocate (pagetable=<value optimized out>, size=<value optimized out>) at tidbitmap.c:1511\n tbm = 0x7f84a8d86a58\n ptbase = <value optimized out>\n#6 0x00000000006876d2 in pagetable_grow (tbm=0x7f84a8d86a58, pageno=1635) at ../../../src/include/lib/simplehash.h:383\n olddata = 0x7f84a8d23004\n i = <value optimized out>\n copyelem = <value optimized out>\n startelem = 0\n oldsize = <value optimized out>\n newdata = <value optimized out>\n#7 pagetable_insert (tbm=0x7f84a8d86a58, pageno=1635) at ../../../src/include/lib/simplehash.h:508\n hash = 218584604\n startelem = <value optimized out>\n curelem 
= <value optimized out>\n data = <value optimized out>\n insertdist = 0\n#8 tbm_get_pageentry (tbm=0x7f84a8d86a58, pageno=1635) at tidbitmap.c:1225\n page = <value optimized out>\n found = <value optimized out>\n#9 0x0000000000687c50 in tbm_add_tuples (tbm=0x7f84a8d86a58, tids=<value optimized out>, ntids=1, recheck=false) at tidbitmap.c:405\n blk = <value optimized out>\n off = 14\n wordnum = <value optimized out>\n bitnum = <value optimized out>\n currblk = <value optimized out>\n page = <value optimized out>\n i = <value optimized out>\n __func__ = \"tbm_add_tuples\"\n#10 0x00000000004e43df in btgetbitmap (scan=0x2829fa8, tbm=0x7f84a8d86a58) at nbtree.c:332\n so = 0x2843c90\n ntids = 5842\n heapTid = <value optimized out>\n#11 0x00000000004d8a91 in index_getbitmap (scan=0x2829fa8, bitmap=<value optimized out>) at indexam.c:726\n ntids = <value optimized out>\n __func__ = \"index_getbitmap\"\n#12 0x0000000000647c98 in MultiExecBitmapIndexScan (node=0x2829720) at nodeBitmapIndexscan.c:104\n tbm = 0x7f84a8d86a58\n scandesc = 0x2829fa8\n nTuples = <value optimized out>\n doscan = <value optimized out>\n#13 0x0000000000646078 in MultiExecBitmapOr (node=0x28046e8) at nodeBitmapOr.c:153\n subnode = 0x2829720\n subresult = <value optimized out>\n bitmapplans = <value optimized out>\n nplans = 2\n i = <value optimized out>\n result = 0x7f84a8d86a58\n __func__ = \"MultiExecBitmapOr\"\n#14 0x0000000000646afd in BitmapHeapNext (node=0x2828db8) at nodeBitmapHeapscan.c:145\n econtext = 0x2828b08\n scan = 0x282d808\n tbm = <value optimized out>\n tbmiterator = 0x0\n shared_tbmiterator = 0x0\n tbmres = <value optimized out>\n targoffset = <value optimized out>\n slot = 0x282a888\n pstate = 0x7f84a93eda40\n dsa = 0x27c05e0\n __func__ = \"BitmapHeapNext\"\n\nJustin\n\n",
"msg_date": "Tue, 5 Feb 2019 12:34:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
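The numbers in frame #4 of the backtrace above hang together: dsa_allocate_extended was asked for 393220 bytes and computed `npages = 97`. A back-of-the-envelope check, assuming dsa.c's free-page-manager page size of 4096 bytes (FPM_PAGE_SIZE in stock PostgreSQL 11 — verify against your build if it matters):

```python
# Toy reproduction of the page-count arithmetic visible in frame #4 above.
# FPM_PAGE_SIZE = 4096 is an assumption about dsa.c's free-page manager.
FPM_PAGE_SIZE = 4096

def fpm_pages_needed(size_bytes: int) -> int:
    """Round a dsa_allocate request up to whole free-page-manager pages."""
    return -(-size_bytes // FPM_PAGE_SIZE)  # ceiling division

# The failing call in the backtrace: dsa_allocate_extended(area, 393220, ...)
print(fpm_pages_needed(393220))  # prints 97, matching "npages = 97" in frame #4
```

So the allocation itself is unremarkable; the question the thread pursues is why 97 contiguous free pages could not be found (or, in the sibling error, why attaching to the segment failed).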
{
"msg_contents": "I should have included query plan for the query which caused the \"could not\nfind free pages\" error.\n\nThis is a contrived query which I made up to try to exercise/stress bitmap\nscans based on Thomas's working hypothesis for this error/bug. This seems to\nbe easier to hit than the other error (\"could not attach to segment\") - a loop\naround this query has run into \"free pages\" several times today.\n\nexplain (analyze,costs off,timing off) SELECT * FROM eric_enodeb_metrics WHERE start_time>='2017-10-01' AND (site_id<1900 OR site_id>2700)\n\n Gather (actual rows=82257 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n -> Parallel Append (actual rows=20564 loops=4)\n -> Parallel Bitmap Heap Scan on eric_enodeb_201901 (actual rows=6366 loops=4)\n Recheck Cond: ((site_id < 1900) OR (site_id > 2700))\n Filter: (start_time >= '2017-10-01 00:00:00-04'::timestamp with time zone)\n Heap Blocks: exact=2549\n -> BitmapOr (actual rows=0 loops=1)\n -> Bitmap Index Scan on eric_enodeb_201901_site_idx (actual rows=0 loops=1)\n Index Cond: (site_id < 1900)\n -> Bitmap Index Scan on eric_enodeb_201901_site_idx (actual rows=25463 loops=1)\n Index Cond: (site_id > 2700)\n -> Parallel Bitmap Heap Scan on eric_enodeb_201810 (actual rows=15402 loops=1)\n Recheck Cond: ((site_id < 1900) OR (site_id > 2700))\n Filter: (start_time >= '2017-10-01 00:00:00-04'::timestamp with time zone)\n -> BitmapOr (actual rows=0 loops=1)\n -> Bitmap Index Scan on eric_enodeb_201810_site_idx (actual rows=0 loops=1)\n Index Cond: (site_id < 1900)\n -> Bitmap Index Scan on eric_enodeb_201810_site_idx (actual rows=15402 loops=1)\n Index Cond: (site_id > 2700)\n -> Parallel Bitmap Heap Scan on eric_enodeb_201812 (actual rows=14866 loops=1)\n Recheck Cond: ((site_id < 1900) OR (site_id > 2700))\n Filter: (start_time >= '2017-10-01 00:00:00-04'::timestamp with time zone)\n -> BitmapOr (actual rows=0 loops=1)\n -> Bitmap Index Scan on eric_enodeb_201812_site_idx (actual rows=0 
loops=1)\n Index Cond: (site_id < 1900)\n -> Bitmap Index Scan on eric_enodeb_201812_site_idx (actual rows=14866 loops=1)\n Index Cond: (site_id > 2700)\n -> Parallel Bitmap Heap Scan on eric_enodeb_201811 (actual rows=7204 loops=2)\n Recheck Cond: ((site_id < 1900) OR (site_id > 2700))\n Filter: (start_time >= '2017-10-01 00:00:00-04'::timestamp with time zone)\n Heap Blocks: exact=7372\n -> BitmapOr (actual rows=0 loops=1)\n -> Bitmap Index Scan on eric_enodeb_201811_site_idx (actual rows=0 loops=1)\n Index Cond: (site_id < 1900)\n -> Bitmap Index Scan on eric_enodeb_201811_site_idx (actual rows=14408 loops=1)\n Index Cond: (site_id > 2700)\n -> Parallel Bitmap Heap Scan on eric_enodeb_201902 (actual rows=5128 loops=1)\n Recheck Cond: ((site_id < 1900) OR (site_id > 2700))\n Filter: (start_time >= '2017-10-01 00:00:00-04'::timestamp with time zone)\n Heap Blocks: exact=3374\n -> BitmapOr (actual rows=0 loops=1)\n -> Bitmap Index Scan on eric_enodeb_201902_site_idx (actual rows=0 loops=1)\n Index Cond: (site_id < 1900)\n -> Bitmap Index Scan on eric_enodeb_201902_site_idx (actual rows=5128 loops=1)\n Index Cond: (site_id > 2700)\n [...]\n\nJustin\n\n",
"msg_date": "Tue, 5 Feb 2019 20:10:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Wed, Feb 6, 2019 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This is a contrived query which I made up to try to exercise/stress bitmap\n> scans based on Thomas's working hypothesis for this error/bug. This seems to\n> be easier to hit than the other error (\"could not attach to segment\") - a loop\n> around this query has run into \"free pages\" several times today.\n\nThanks. I'll go and try to repro this with queries that look like that.\n\nIt's possibly interesting that you're running on VMWare (as mentioned\nin an off-list email), though I haven't got a specific theory about\nwhy that'd be relevant. I suppose it could be some kind of cache\ncoherency bug that is more likely there for whatever reason. I've\nbeen trying to repro on a laptop and a couple of bare metal servers.\nCan anyone else who has hit this comment on any virtualisation they\nmight be using?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Wed, 6 Feb 2019 16:22:12 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Wed, Feb 6, 2019 at 4:22 PM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> On Wed, Feb 6, 2019 at 1:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This is a contrived query which I made up to try to exercise/stress bitmap\n> > scans based on Thomas's working hypothesis for this error/bug. This seems to\n> > be easier to hit than the other error (\"could not attach to segment\") - a loop\n> > around this query has run into \"free pages\" several times today.\n>\n> Thanks. I'll go and try to repro this with queries that look like that.\n\nNo luck so far. My colleague Robert pointed out that the\nfpm->contiguous_pages_dirty mechanism (that lazily maintains\nfpm->contiguous_pages) is suspicious here, but we haven't yet found a\ntheory to explain how fpm->contiguous_pages could have a value that is\ntoo large. Clearly such a bug could result in a segment that claims\ntoo high a number, and that'd result in this error.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Wed, 6 Feb 2019 20:40:25 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
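The suspicion raised here — a lazily maintained `fpm->contiguous_pages` that somehow ends up too large — can be illustrated with a toy model. This is not PostgreSQL's actual code; the class, field names, and failure injection below are all illustrative, chosen to mirror the described mechanism (a cached largest-contiguous-run counter with a dirty flag, trusted by callers before allocating):

```python
# Toy model (NOT PostgreSQL's code) of the suspected failure mode:
# a free-page map caches its largest contiguous run and only recomputes
# it lazily.  If the cached value ever overestimates reality, a caller
# that trusted it hits "could not find N free pages".
class ToyFreePageManager:
    def __init__(self, npages):
        self.free = [True] * npages
        self.contiguous_pages = npages      # cached; may go stale
        self.dirty = False                  # like fpm->contiguous_pages_dirty

    def _recompute(self):
        run = best = 0
        for f in self.free:
            run = run + 1 if f else 0
            best = max(best, run)
        self.contiguous_pages = best
        self.dirty = False

    def largest_run(self):
        if self.dirty:
            self._recompute()
        return self.contiguous_pages

    def allocate(self, n):
        # Callers check largest_run() >= n first, much as the DSA
        # segment-choosing code consults the per-segment counter.
        for i in range(len(self.free) - n + 1):
            if all(self.free[i:i + n]):
                for j in range(i, i + n):
                    self.free[j] = False
                self.dirty = True
                return i
        raise RuntimeError(f"could not find {n} free pages")

fpm = ToyFreePageManager(8)
fpm.allocate(6)               # leaves only 2 contiguous free pages
fpm.contiguous_pages = 8      # inject the suspected bug: stale overestimate
fpm.dirty = False
try:
    fpm.allocate(7)           # a caller that trusted the stale counter...
except RuntimeError as e:
    print(e)                  # prints: could not find 7 free pages
```

In the real system the open question was how such an overestimate could arise at all, since the dirty flag is supposed to force a recomputation before the counter is trusted.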
{
"msg_contents": "On Wed, Feb 06, 2019 at 04:22:12PM +1100, Thomas Munro wrote:\n> Can anyone else who has hit this comment on any virtualisation they\n> might be using?\n\nI don't think most of these people are on -hackers (one of the original reports\nwas on -performance) so I'm copying them now.\n\nCould you let us know which dsa_* error you were seeing, whether or not you\nwere running postgres under virtual environment, and (if so) which VM\nhypervisor?\n\nThanks,\nJustin\n\nhttps://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D0aPq2yEy39gEqVK2m_Qi6jJdy96ysHGJ6VSHOZFz%2Bxbg%40mail.gmail.com#e02bee0220b422fe91a3383916107504\nhttps://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\n\n",
"msg_date": "Wed, 6 Feb 2019 11:19:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi Justin\nI'm seeing dsa_allocate on two different servers.\nOne is virtualized with VMWare the other is bare metal.\n\nubuntu@db1:~$ grep dsa_allocate /var/log/postgresql/postgresql-11-main.log\n2019-02-03 17:03:03 CET:192.168.10.83(48336):foo@bar:[27979]: FATAL:\ndsa_allocate could not find 7 free pages\n2019-02-05 17:05:12 CET:192.168.10.83(38138):foo@bar:[2725]: FATAL:\ndsa_allocate could not find 49 free pages\n2019-02-06 09:04:18 CET::@:[22120]: FATAL: dsa_allocate could not find 13\nfree pages\n2019-02-06 09:04:18 CET:192.168.10.83(55740):foo@bar:[21725]: ERROR:\ndsa_allocate could not find 13 free pages\nubuntu@db1:~$ sudo dmidecode -s system-product-name\nVMware Virtual Platform\n\n----------------------------------\nubuntu@db2:~$ grep dsa_allocate /var/log/postgresql/postgresql-11-main2.log\n2019-02-03 07:45:45 CET::@:[28592]: FATAL: dsa_allocate could not find 25\nfree pages\n2019-02-03 07:45:45 CET:127.0.0.1(41920):foo1@bar:[27320]: ERROR:\ndsa_allocate could not find 25 free pages\n2019-02-03 07:46:03 CET:127.0.0.1(41920):foo1@bar:[27320]: FATAL:\ndsa_allocate could not find 25 free pages\n2019-02-04 11:56:28 CET::@:[31713]: FATAL: dsa_allocate could not find 7\nfree pages\n2019-02-04 11:56:28 CET:127.0.0.1(41950):foo1@bar:[30465]: ERROR:\ndsa_allocate could not find 7 free pages\n2019-02-04 11:57:59 CET::@:[31899]: FATAL: dsa_allocate could not find 7\nfree pages\n2019-02-04 11:57:59 CET:127.0.0.1(44096):foo1@bar:[31839]: ERROR:\ndsa_allocate could not find 7 free pages\nubuntu@db2:~$ sudo dmidecode -s system-product-name\nProLiant DL380 Gen9\n\n\n\n\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Wed, Feb 6, 2019 at 6:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Feb 06, 2019 at 04:22:12PM +1100, Thomas Munro wrote:\n> > Can anyone else who has hit this comment on any virtualisation they\n> > might be using?\n>\n> I don't think most of these people are on -hackers (one of the original\n> reports\n> was on 
-performance) so I'm copying them now.\n>\n> Could you let us know which dsa_* error you were seeing, whether or not you\n> were running postgres under virtual environment, and (if so) which VM\n> hypervisor?\n>\n> Thanks,\n> Justin\n>\n>\n> https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\n>\n> https://www.postgresql.org/message-id/flat/CAEepm%3D0aPq2yEy39gEqVK2m_Qi6jJdy96ysHGJ6VSHOZFz%2Bxbg%40mail.gmail.com#e02bee0220b422fe91a3383916107504\n>\n> https://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\n>\n",
"msg_date": "Wed, 6 Feb 2019 18:37:16 +0100",
"msg_from": "Jakub Glapa <jakub.glapa@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi\n\n> Could you let us know which dsa_* error you were seeing, whether or not you\n> were running postgres under virtual environment, and (if so) which VM\n> hypervisor?\n\nThe system from my report is an Amazon virtual server. lscpu says:\nHypervisor vendor: Xen\nVirtualization type: full\n\nregards, Sergei\n\n",
"msg_date": "Wed, 06 Feb 2019 20:52:48 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Wed, Feb 06, 2019 at 06:37:16PM +0100, Jakub Glapa wrote:\n> I'm seeing dsa_allocate on two different servers.\n> One is virtualized with VMWare the other is bare metal.\n\nThanks. So it's not limited to vmware or VM at all.\n\nFYI here we've seen DSA errors on (and only on) two vmware VMs.\n\nIt might be interesting to have CPU info, too.\n\nFor us the affected servers are:\n\nIntel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz stepping 02\nmicrocode: CPU0 sig=0x206d2, pf=0x1, revision=0xb00002e\n\nIntel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz stepping 02\nmicrocode: CPU0 sig=0x206d2, pf=0x1, revision=0x710\n\n",
"msg_date": "Wed, 6 Feb 2019 12:52:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": ">\n> It might be interesting to have CPU info, too.\n\n\nmodel name : Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz (virtualized\nvmware)\nand\nmodel name : Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz (bare metal)\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Wed, Feb 6, 2019 at 7:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Feb 06, 2019 at 06:37:16PM +0100, Jakub Glapa wrote:\n> > I'm seeing dsa_allocate on two different servers.\n> > One is virtualized with VMWare the other is bare metal.\n>\n> Thanks. So it's not limited to vmware or VM at all.\n>\n> FYI here we've seen DSA errors on (and only on) two vmware VMs.\n>\n> It might be interesting to have CPU info, too.\n>\n> For us the affected servers are:\n>\n> Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz stepping 02\n> microcode: CPU0 sig=0x206d2, pf=0x1, revision=0xb00002e\n>\n> Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz stepping 02\n> microcode: CPU0 sig=0x206d2, pf=0x1, revision=0x710\n>\n",
"msg_date": "Wed, 6 Feb 2019 20:32:38 +0100",
"msg_from": "Jakub Glapa <jakub.glapa@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
}
] |
[
{
"msg_contents": "Hi,\n\nIn src/test/example, the implicit make rules produce errors:\n\nmake -C ../../../src/backend generated-headers\nmake[1]: Entering directory '/home/ddong/postgresql/bld/src/backend'\nmake -C catalog distprep generated-header-symlinks\nmake[2]: Entering directory '/home/ddong/postgresql/bld/src/backend/catalog'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 'generated-header-symlinks'.\nmake[2]: Leaving directory '/home/ddong/postgresql/bld/src/backend/catalog'\nmake -C utils distprep generated-header-symlinks\nmake[2]: Entering directory '/home/ddong/postgresql/bld/src/backend/utils'\nmake[2]: Nothing to be done for 'distprep'.\nmake[2]: Nothing to be done for 'generated-header-symlinks'.\nmake[2]: Leaving directory '/home/ddong/postgresql/bld/src/backend/utils'\nmake[1]: Leaving directory '/home/ddong/postgresql/bld/src/backend'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -fexcess-precision=standard -Wno-format-truncation -O2\n-I/home/ddong/postgresql/bld/../src/interfaces/libpq\n-I../../../src/include -I/home/ddong/postgresql/bld/../src/include\n-D_GNU_SOURCE -c -o testlibpq.o\n/home/ddong/postgresql/bld/../src/test/examples/testlibpq.c\ngcc -L../../../src/port -L../../../src/common -L../../../src/common\n-lpgcommon -L../../../src/port -lpgport\n-L../../../src/interfaces/libpq -lpq -Wl,--as-needed\n-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags testlibpq.o -o\ntestlibpq\ntestlibpq.o: In function `exit_nicely':\ntestlibpq.c:(.text.unlikely+0x5): undefined reference to `PQfinish'\ntestlibpq.o: In function `main':\ntestlibpq.c:(.text.startup+0x22): undefined reference to `PQconnectdb'\ntestlibpq.c:(.text.startup+0x2d): undefined reference to `PQstatus'\ntestlibpq.c:(.text.startup+0x44): undefined reference to `PQexec’\n…\n\nI think the -lpq flag does not have any effects 
in the middle of the\narguments. It works if move the flag to the end:\n\ngcc -L../../../src/port -L../../../src/common -L../../../src/common\n-lpgcommon -L../../../src/port -lpgport\n-L../../../src/interfaces/libpq -Wl,--as-needed\n-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags testlibpq.o -o\ntestlibpq -lpq\n\nSo I added an explicit rule to rearrange the flags:\n\ngcc testlibpq.o -o testlibpq -L../../../src/port -L../../../src/common\n-L../../../src/common -lpgcommon -L../../../src/port -lpgport\n-L../../../src/interfaces/libpq -lpq -Wl,--as-needed\n-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\n\nThen the make command works as expected. This is my first time writing\na patch. Please let me know what you think!\n\nThank you,\nHappy new year!\nDonald Dong\n\n",
"msg_date": "Mon, 31 Dec 2018 23:24:45 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Implicit make rules break test examples"
},
{
"msg_contents": "On Mon, Dec 31, 2018 at 11:24 PM Donald Dong <xdong@csumb.edu> wrote:\n>\n> Hi,\n>\n> In src/test/example, the implicit make rules produce errors:\n>\n> make -C ../../../src/backend generated-headers\n> make[1]: Entering directory '/home/ddong/postgresql/bld/src/backend'\n> make -C catalog distprep generated-header-symlinks\n> make[2]: Entering directory '/home/ddong/postgresql/bld/src/backend/catalog'\n> make[2]: Nothing to be done for 'distprep'.\n> make[2]: Nothing to be done for 'generated-header-symlinks'.\n> make[2]: Leaving directory '/home/ddong/postgresql/bld/src/backend/catalog'\n> make -C utils distprep generated-header-symlinks\n> make[2]: Entering directory '/home/ddong/postgresql/bld/src/backend/utils'\n> make[2]: Nothing to be done for 'distprep'.\n> make[2]: Nothing to be done for 'generated-header-symlinks'.\n> make[2]: Leaving directory '/home/ddong/postgresql/bld/src/backend/utils'\n> make[1]: Leaving directory '/home/ddong/postgresql/bld/src/backend'\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n> -fwrapv -fexcess-precision=standard -Wno-format-truncation -O2\n> -I/home/ddong/postgresql/bld/../src/interfaces/libpq\n> -I../../../src/include -I/home/ddong/postgresql/bld/../src/include\n> -D_GNU_SOURCE -c -o testlibpq.o\n> /home/ddong/postgresql/bld/../src/test/examples/testlibpq.c\n> gcc -L../../../src/port -L../../../src/common -L../../../src/common\n> -lpgcommon -L../../../src/port -lpgport\n> -L../../../src/interfaces/libpq -lpq -Wl,--as-needed\n> -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags testlibpq.o -o\n> testlibpq\n> testlibpq.o: In function `exit_nicely':\n> testlibpq.c:(.text.unlikely+0x5): undefined reference to `PQfinish'\n> testlibpq.o: In function `main':\n> testlibpq.c:(.text.startup+0x22): undefined reference to `PQconnectdb'\n> testlibpq.c:(.text.startup+0x2d): undefined 
reference to `PQstatus'\n> testlibpq.c:(.text.startup+0x44): undefined reference to `PQexec’\n> …\n>\n> I think the -lpq flag does not have any effects in the middle of the\n> arguments. It works if move the flag to the end:\n>\n> gcc -L../../../src/port -L../../../src/common -L../../../src/common\n> -lpgcommon -L../../../src/port -lpgport\n> -L../../../src/interfaces/libpq -Wl,--as-needed\n> -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags testlibpq.o -o\n> testlibpq -lpq\n>\n> So I added an explicit rule to rearrange the flags:\n>\n> gcc testlibpq.o -o testlibpq -L../../../src/port -L../../../src/common\n> -L../../../src/common -lpgcommon -L../../../src/port -lpgport\n> -L../../../src/interfaces/libpq -lpq -Wl,--as-needed\n> -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\n>\n> Then the make command works as expected. This is my first time writing\n> a patch. Please let me know what you think!\n>\n> Thank you,\n> Happy new year!\n> Donald Dong\n\n\n\n-- \nDonald Dong",
"msg_date": "Mon, 31 Dec 2018 23:26:37 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Implicit make rules break test examples"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> In src/test/example, the implicit make rules produce errors:\n\nHm. \"make\" in src/test/examples works fine for me.\n\nThe only way I can account for the results you're showing is if your\nlinker is preferring libpq.a to libpq.so, so that reading the library\nbefore the *.o files causes none of it to get pulled in. But that\nisn't the default behavior on any modern platform AFAIK, and certainly\nisn't considered good practice these days. Moreover, if that's what's\nhappening, I don't see how you would have managed to build PG at all,\nbecause there are a lot of other places where our Makefiles write\n$(LDFLAGS) before the *.o files they're trying to link. Maybe we\nshouldn't have done it like that, but it's been working for everybody\nelse.\n\nWhat platform are you on exactly, and what toolchain (gcc and ld\nversions) are you using?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 01 Jan 2019 12:54:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implicit make rules break test examples"
},
{
"msg_contents": "Thank you for the explanation! That makes sense. It is strange that it does\nnot work for me.\n\n\n> What platform are you on exactly, and what toolchain (gcc and ld\n> versions) are you using?\n\n\nI'm using Ubuntu 18.04.1 LTS.\n\ngcc version:\ngcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0\n\nld version:\nGNU ld (GNU Binutils for Ubuntu) 2.30\n\nRegards,\nDonald Dong\n\n\nOn Tue, Jan 1, 2019 at 9:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Donald Dong <xdong@csumb.edu> writes:\n> > In src/test/example, the implicit make rules produce errors:\n>\n> Hm. \"make\" in src/test/examples works fine for me.\n>\n> The only way I can account for the results you're showing is if your\n> linker is preferring libpq.a to libpq.so, so that reading the library\n> before the *.o files causes none of it to get pulled in. But that\n> isn't the default behavior on any modern platform AFAIK, and certainly\n> isn't considered good practice these days. Moreover, if that's what's\n> happening, I don't see how you would have managed to build PG at all,\n> because there are a lot of other places where our Makefiles write\n> $(LDFLAGS) before the *.o files they're trying to link. Maybe we\n> shouldn't have done it like that, but it's been working for everybody\n> else.\n>\n> What platform are you on exactly, and what toolchain (gcc and ld\n> versions) are you using?\n>\n> regards, tom lane\n>\n\n\n-- \nDonald Dong\n\n",
"msg_date": "Tue, 1 Jan 2019 10:24:55 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Implicit make rules break test examples"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> Thank you for the explanation! That makes sense. It is strange that it does\n> not work for me.\n\nYeah, I still can't account for the difference in behavior between your\nplatform and mine (I tried several different ones here, and they all\nmanage to build src/test/examples). However, I'm now convinced that\nwe do have an issue, because I found another place that does fail on my\nplatforms: src/interfaces/libpq/test gives failures like\n\nuri-regress.o: In function `main':\nuri-regress.c:58: undefined reference to `pg_printf'\n\nthe reason being that the link command lists -lpgport before\nuri-regress.o, and since we only make the .a flavor of libpgport, it's\ndefinitely going to be order-sensitive. (This has probably been busted\nsince we changed over to using snprintf.c everywhere, but nobody noticed\nbecause this test isn't normally run.)\n\nThe reason we haven't noticed this already seems to be that in all the\nplaces where it matters, we have explicit link rules that order things\nlike this:\n\n\t$(CC) $(CFLAGS) $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n\nHowever, the places that are having problems are trying to rely on\ngmake's implicit rule, which according to their manual is\n\n Linking a single object file\n `N' is made automatically from `N.o' by running the linker\n (usually called `ld') via the C compiler. The precise command\n used is `$(CC) $(LDFLAGS) N.o $(LOADLIBES) $(LDLIBS)'.\n\nSo really the problem here is that we're doing the wrong thing by\ninjecting -l switches into LDFLAGS; it would be more standard to\nput them into LDLIBS (or maybe LOADLIBES, though I think that's\nnot commonly used).\n\nI hesitate to try to change that though. 
The places that are messing with\nthat are injecting both -L and -l switches, and we want to keep putting\nthe -L switches into LDFLAGS because of the strong likelihood that the\ninitial (autoconf-derived) value of LDFLAGS contains -L switches; our\nswitches pointing at within-tree directories need to come first.\nSo the options seem to be:\n\n1. Redefine our makefile conventions as being that internal -L switches\ngo into LDFLAGS_INTERNAL but internal -l switches go into LDLIBS_INTERNAL,\nand we use the same recursive-expansion dance for LDLIBS[_INTERNAL] as for\nLDFLAGS[_INTERNAL], and we have to start mentioning LDLIBS in our explicit\nlink rules. This would be a pretty invasive patch, I'm afraid, and would\nalmost certainly break some third-party extensions' Makefiles. It'd be\nthe cleanest solution in a green field, I think, but our Makefiles are\nhardly a green field.\n\n2. Be sure to override the gmake implicit link rule with an explicit\nlink rule everywhere --- basically like your patch, but touching more\nplaces. This seems like the least risky alternative in the short\nrun, but we'd be highly likely to reintroduce such problems in future.\n\n3. Replace the default implicit link rule with one of our own.\nConceivably this also breaks some extensions' Makefiles, though\nI think that's less likely.\n\nI observe that ecpg's Makefile.regress is already doing #3:\n\n%: %.o\n\t$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@\n\nso what we'd be talking about is moving that to some more global spot,\nprobably Makefile.global. 
(I wonder why the target is not specified\nas $@$(X) here?)\n\nI notice that the platform-specific makefiles in src/makefiles\nare generally also getting it wrong in their implicit rules for\nbuilding shlibs, eg in Makefile.linux:\n\n# Rule for building a shared library from a single .o file\n%.so: %.o\n\t$(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@ $<\n\nPer this discussion, that needs to be more like\n\n\t$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@\n\nin case a reference to libpgport or libpgcommon has been inserted\ninto LDFLAGS. I'm a bit surprised that that hasn't bitten us\nalready.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 01 Jan 2019 14:54:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implicit make rules break test examples"
},
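If option 3 were adopted, the global rule would amount to generalizing ecpg's Makefile.regress rule. A sketch of what the Makefile.global addition might look like (this is illustrative, not the committed change; recipe lines must be tab-indented):

```make
# Sketch of option 3: override gmake's built-in single-object link rule
# globally, so the object file always precedes any -l switches carried
# in $(LDFLAGS).  Modeled on ecpg's Makefile.regress, with the $@$(X)
# target the original rule omitted.
%: %.o
	$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)

# Corresponding fix for a per-platform shlib rule (e.g. Makefile.linux):
# move $< ahead of $(LDFLAGS) for the same reason.
%.so: %.o
	$(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@
```

The point of both rules is the same: with static archives such as libpgport.a, the linker resolves symbols left to right, so a library named before the object file contributes nothing.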
{
"msg_contents": "> I observe that ecpg's Makefile.regress is already doing #3:\n> %: %.o\n> $(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@\n> so what we'd be talking about is moving that to some more global spot,\n> probably Makefile.global. (I wonder why the target is not specified\n> as $@$(X) here?)\n\n\nThank you for pointing that out!\nI think #3 is a better choice since it's less invasive and would\npotentially avoid similar problems in the future. I think may worth\nthe risks of breaking some extensions. I moved the rule to the\nMakefile.global and added $(X) in case it's set.\n\nI also updated the order in Makefile.linux in the same patch since\nthey have the same cause. I'm not sure if changes are necessary for\nother platforms, and I am not able to test it, unfortunately.\n\nI've built it again on Ubuntu and tested src/test/examples and\nsrc/interfaces/libpq/test. There are no errors.\n\nThank you again for the awesome explanation,\nDonald Dong\n\nOn Tue, Jan 1, 2019 at 11:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Donald Dong <xdong@csumb.edu> writes:\n> > Thank you for the explanation! That makes sense. It is strange that it does\n> > not work for me.\n>\n> Yeah, I still can't account for the difference in behavior between your\n> platform and mine (I tried several different ones here, and they all\n> manage to build src/test/examples). However, I'm now convinced that\n> we do have an issue, because I found another place that does fail on my\n> platforms: src/interfaces/libpq/test gives failures like\n>\n> uri-regress.o: In function `main':\n> uri-regress.c:58: undefined reference to `pg_printf'\n>\n> the reason being that the link command lists -lpgport before\n> uri-regress.o, and since we only make the .a flavor of libpgport, it's\n> definitely going to be order-sensitive. 
(This has probably been busted\n> since we changed over to using snprintf.c everywhere, but nobody noticed\n> because this test isn't normally run.)\n>\n> The reason we haven't noticed this already seems to be that in all the\n> places where it matters, we have explicit link rules that order things\n> like this:\n>\n> $(CC) $(CFLAGS) $(OBJS) $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)\n>\n> However, the places that are having problems are trying to rely on\n> gmake's implicit rule, which according to their manual is\n>\n> Linking a single object file\n> `N' is made automatically from `N.o' by running the linker\n> (usually called `ld') via the C compiler. The precise command\n> used is `$(CC) $(LDFLAGS) N.o $(LOADLIBES) $(LDLIBS)'.\n>\n> So really the problem here is that we're doing the wrong thing by\n> injecting -l switches into LDFLAGS; it would be more standard to\n> put them into LDLIBS (or maybe LOADLIBES, though I think that's\n> not commonly used).\n>\n> I hesitate to try to change that though. The places that are messing with\n> that are injecting both -L and -l switches, and we want to keep putting\n> the -L switches into LDFLAGS because of the strong likelihood that the\n> initial (autoconf-derived) value of LDFLAGS contains -L switches; our\n> switches pointing at within-tree directories need to come first.\n> So the options seem to be:\n>\n> 1. Redefine our makefile conventions as being that internal -L switches\n> go into LDFLAGS_INTERNAL but internal -l switches go into LDLIBS_INTERNAL,\n> and we use the same recursive-expansion dance for LDLIBS[_INTERNAL] as for\n> LDFLAGS[_INTERNAL], and we have to start mentioning LDLIBS in our explicit\n> link rules. This would be a pretty invasive patch, I'm afraid, and would\n> almost certainly break some third-party extensions' Makefiles. It'd be\n> the cleanest solution in a green field, I think, but our Makefiles are\n> hardly a green field.\n>\n> 2. 
Be sure to override the gmake implicit link rule with an explicit\n> link rule everywhere --- basically like your patch, but touching more\n> places. This seems like the least risky alternative in the short\n> run, but we'd be highly likely to reintroduce such problems in future.\n>\n> 3. Replace the default implicit link rule with one of our own.\n> Conceivably this also breaks some extensions' Makefiles, though\n> I think that's less likely.\n>\n> I observe that ecpg's Makefile.regress is already doing #3:\n>\n> %: %.o\n> $(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@\n>\n> so what we'd be talking about is moving that to some more global spot,\n> probably Makefile.global. (I wonder why the target is not specified\n> as $@$(X) here?)\n>\n> I notice that the platform-specific makefiles in src/makefiles\n> are generally also getting it wrong in their implicit rules for\n> building shlibs, eg in Makefile.linux:\n>\n> # Rule for building a shared library from a single .o file\n> %.so: %.o\n> $(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@ $<\n>\n> Per this discussion, that needs to be more like\n>\n> $(CC) $(CFLAGS) $< $(LDFLAGS) $(LDFLAGS_SL) -shared -o $@\n>\n> in case a reference to libpgport or libpgcommon has been inserted\n> into LDFLAGS. I'm a bit surprised that that hasn't bitten us\n> already.\n>\n> regards, tom lane\n\n\n\n-- \nDonald Dong",
"msg_date": "Tue, 1 Jan 2019 14:10:55 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Implicit make rules break test examples"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> I think #3 is a better choice since it's less invasive and would\n> potentially avoid similar problems in the future. I think may worth\n> the risks of breaking some extensions. I moved the rule to the\n> Makefile.global and added $(X) in case it's set.\n\nYeah, I think #3 is the best choice too.\n\nI'm not quite sure about the $(X) addition --- it makes the output\nfile not agree with what gmake thinks the target is. However, I\nobserve other places doing the same thing, so let's try that and\nsee what the buildfarm thinks.\n\n> I also updated the order in Makefile.linux in the same patch since\n> they have the same cause. I'm not sure if changes are necessary for\n> other platforms, and I am not able to test it, unfortunately.\n\nThat's what we have a buildfarm for. Pushed, we'll soon find out.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 02 Jan 2019 14:07:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implicit make rules break test examples"
}
] |
[
{
"msg_contents": "I've been trying to use vim for postgres development some yeas ago, but I did \nnot manage to do it for log time, as I quit the job etc.\n\nNow I am trying again, but I've lost my .vimrc and notes and had to start from \nthe very beginning. I vaguely remember what tools I've been using, but I have \nto google for them as they are not listed anywhere, and I have to meet all the \nproblems I've met before, again. \n\nSo I decided to write it down to a wiki article, while I am restoring the \nconfiguration so I do not have to remember them for the third time, if I loose \n.vimrc again. :-)\n\nThe article is here:\n\nhttps://wiki.postgresql.org/wiki/Configuring_vim_for_postgres_development\n\nIf you are using vim, and use some tools that is not listed there, but find \nthem very useful, please add them... Or tell me about them, I will hopefully \nadd them there myself\n\nThe main purpose of article is to allow a user new to vim, start using vim for \npostgres development with maximum efficiency. It should not be a vim tutorial, \nbut cover all postgres specific things that can be useful.\n\n",
"msg_date": "Tue, 01 Jan 2019 17:23:08 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Using vim for developing porstres wiki article"
},
{
"msg_contents": "> I've been trying to use vim for postgres development some yeas ago, but I did\n> not manage to do it for log time, as I quit the job etc.\n>\n> Now I am trying again, but I've lost my .vimrc and notes and had to start from\n> the very beginning. I vaguely remember what tools I've been using, but I have\n> to google for them as they are not listed anywhere, and I have to meet all the\n> problems I've met before, again.\n>\n> So I decided to write it down to a wiki article, while I am restoring the\n> configuration so I do not have to remember them for the third time, if I loose\n> .vimrc again. :-)\n\nI like expanding the small section in the Developer FAQ into a more\ndetailed article.\n\nBut the new article is missing several things relative to the old\ninstructions, and I don't think the changes should have been made to\nthat page until this was more fully baked.\n\nA few specific thoughts:\n\n1. The new article makes it more difficult to get started since every\nsetting would need to be copied separately. I think there should be a\ncohesive block of options that we recommend for copy/paste along with\ninline comments explaining what each one does.\n\n2. There's a bit of conflation between general Vim setup and Postgres\nspecific development. The old section I think was mostly geared toward\nsomeone who uses Vim but wants the Postgres-specific parts, and that's\na valuable use case. Perhaps we could split the article into a section\non general Vim setup (for example, turning on syntax) and a section on\n\"if you also already use Vim, there's a way to do project-specific\nsettings and the ones you should use\".\n\n3. Several of the old specified options didn't make it into the new\narticle's details and are a loss. 
I noticed this particularly since\njust 2 or 3 days ago I myself had edited this section to add the\nsofttabstop=0 option (the Vim default) so that if soft tabs are\nenabled in someone's general Vim config then hitting the tab key won't\nresult in inserting 2 spaces while working in the Postgres source.\n\nThanks,\nJames Coleman\n\n",
"msg_date": "Wed, 2 Jan 2019 08:59:13 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using vim for developing porstres wiki article"
},
{
"msg_contents": "В письме от среда, 2 января 2019 г. 8:59:13 MSK пользователь James Coleman \nнаписал:\n\n> > So I decided to write it down to a wiki article, while I am restoring the\n> > configuration so I do not have to remember them for the third time, if I\n> > loose .vimrc again. :-)\n> \n> I like expanding the small section in the Developer FAQ into a more\n> detailed article.\n> \n> But the new article is missing several things relative to the old\n> instructions, and I don't think the changes should have been made to\n> that page until this was more fully baked.\n\nActually I've kept most of the old instructions in \"Old staff\" section. So all \ndata is available as it id (I've just removed filestyle paragraph, form the \nFAQ, as I added it there myself, and it is properly described in a new \narticle). So nothing is lost, only new info is added.\n\n\n> \n> A few specific thoughts:\n> \n> 1. The new article makes it more difficult to get started since every\n> setting would need to be copied separately. I think there should be a\n> cohesive block of options that we recommend for copy/paste along with\n> inline comments explaining what each one does.\nAs far as I can understand wiki is first of all for explaining, not for ready \nrecipes. So I explained.\nWe have a better place for ready recipe, but nobody uses it. \nIt is src/tools/editors/vim.samples so it you want to make ready recipe, I \nwould suggest to but it there, and just tell about it in the wiki.\n\n> 2. There's a bit of conflation between general Vim setup and Postgres\n> specific development. The old section I think was mostly geared toward\n> someone who uses Vim but wants the Postgres-specific parts, and that's\n> a valuable use case. 
Perhaps we could split the article into a section\n> on general Vim setup (for example, turning on syntax) and a section on\n> \"if you also already use Vim, there's a way to do project-specific\n> settings and the ones you should use\".\nI've been thinking about it. My idea was: an experienced vim user knows better \nwhat he really wants; he does not need any advice. And even less does he need \nready recipes. \n\nA vim beginner needs understanding first of all. He can copy a ready recipe, but \nit gives him nothing. So every option needs an explanation. \n\nSo I'd keep the article style as it is, just allow an experienced user to scan it and \nchoose the tips he really wants. And let the beginner get what understanding he \ncan get.\n\n> 3. Several of the old specified options didn't make it into the new\n> article's details and are a loss. I noticed this particularly since\n> just 2 or 3 days ago I myself had edited this section to add the\n> softtabstop=0 option (the Vim default) so that if soft tabs are\n> enabled in someone's general Vim config then hitting the tab key won't\n> result in inserting 2 spaces while working in the Postgres source.\n\nAs far as I said above, I've kept the old example. So if you have some expertise \nin the options I've omitted, you can join in explaining them.\n\nAs for softtabstop: from what I've heard, I'd just ignore this option. If a \nuser did something in a global config, then he is an experienced user, and \nneeds no advice from us; he knows what he is doing. And for a beginner this \ninformation is useless; overriding a default value with the same value is a \nstrange thing. It is not something he should think about when he is starting.\n\nBut if you think it is important, you can add a paragraph about softtabstop, I \ndo not mind. Different people see importance in different things.\n\nPS. I am a beginner in vim, and for me the config example in the FAQ was totally useless. 
\nSo I tried to create an article that would be useful for me, and tried not to \ndestroy information that exists.\n\n\n",
"msg_date": "Thu, 03 Jan 2019 14:53:04 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: Using vim for developing porstres wiki article"
}
] |
[
{
"msg_contents": "It's a new year and I'm getting reflective, so resuming a portion of\nconversation we had here:\nhttps://www.postgresql.org/message-id/CAMkU%3D1yVbwEAugaCmKWxjaX15ZduWee45%2B_DqCw--d_3N_O_%3DQ%40mail.gmail.com\n\nFind attached patch which implements use of correlation statistic in costing\nfor bitmap scans.\n\nAn opened question in my mind is how to combine the correlation statistic with\nexisting cost_per_page:\n\n . sqrt(a^2+b^2)\n . MIN()\n . multiply existing computation by new correlation component\n\nOn Wed, Dec 20, 2017 at 09:55:40PM -0800, Jeff Janes wrote:\n> On Tue, Dec 19, 2017 at 7:25 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > I started playing with this weeks ago (probably during Vitaliy's problem\n> > report). Is there any reason cost_bitmap_heap_scan shouldn't interpolate\n> > based on correlation from seq_page_cost to rand_page_cost, same as\n> > cost_index ?\n> \n> I think that doing something like that is a good idea in general, but someone\n> has to implement the code, and so far no one seems enthused to do so. You\n> seem pretty interested in the topic, so....\n\nI tested patch using CDR data which was causing issues for us years ago:\nhttps://www.postgresql.org/message-id/20160524173914.GA11880%40telsasoft.com\n\nNote: since that original problem report:\n . the SAN is backed by SSD rather than rotational storage;\n . we're using relkind=p partitioned tables;\n . 
PG12 uses pread() rather than lseek()+read(), so the overhead of seek()+read()\n is avoided (but probably wasn't a substantial component of the problem);\n\nUnpatched, I'm unable to get a bitmap scan without disabling indexscan and seqscan.\n| Bitmap Heap Scan on cdrs_huawei_pgwrecord_2018_12_25 (cost=55764.07..1974230.46 rows=2295379 width=1375)\n| Recheck Cond: ((recordopeningtime >= '2018-12-25 05:00:00-06'::timestamp with time zone) AND (recordopeningtime <= '2018-12-25 10:00:00-06'::timestamp with time zone))\n| -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2018_12_25_recordopeningtime_idx (cost=0.00..55190.22 rows=2295379 width=0)\n| Index Cond: ((recordopeningtime >= '2018-12-25 05:00:00-06'::timestamp with time zone) AND (recordopeningtime <= '2018-12-25 10:00:00-06'::timestamp with time zone))\n\nPatched, I get a bitmap scan when random_page_cost is reduced enough that the startup\ncost (index scan component) is not prohibitive. But for simplicity, this just\nforces bitmap by setting enable_indexscan=off;\n| Bitmap Heap Scan on cdrs_huawei_pgwrecord_2018_12_25 (cost=55764.07..527057.45 rows=2295379 width=1375)\n| Recheck Cond: ((recordopeningtime >= '2018-12-25 05:00:00-06'::timestamp with time zone) AND (recordopeningtime <= '2018-12-25 10:00:00-06'::timestamp with time zone))\n| -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2018_12_25_recordopeningtime_idx (cost=0.00..55190.22 rows=2295379 width=0)\n| Index Cond: ((recordopeningtime >= '2018-12-25 05:00:00-06'::timestamp with time zone) AND (recordopeningtime <= '2018-12-25 10:00:00-06'::timestamp with time zone))\n\nThat addresses the issue we had in part. A remaining problem is that\ncorrelation fails to distinguish between a \"fresh\" index and a fragmented index,\nand so heap access of a correlated index may look cheaper than it is. Which\nis why I still have to set random_page_cost to get a bitmap scan.\n\nJustin\n\n",
"msg_date": "Tue, 1 Jan 2019 16:56:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "bitmaps and correlation"
},
{
"msg_contents": "Attached for real.",
"msg_date": "Tue, 1 Jan 2019 17:02:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "Attached is a fixed and rebasified patch for cfbot.\nIncluded inline for conceptual review.\n\n\n From f3055a5696924427dda280da702c41d2d2796a24 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Tue, 1 Jan 2019 16:17:28 -0600\nSubject: [PATCH v2] Use correlation statistic in costing bitmap scans..\n\nSame as for an index scan, an uncorrelated bitmap (like modulus) which access a\ncertain number of pages across the entire length of a table should have cost\nestimate heavily weighted by random access, compared to an bitmap scan which\naccesses same number of pages across a small portion of the table.\n\nNote, Tom points out that there are cases where a column could be\ntightly-clumped without being hightly-ordered. Since we have correlation\nalready, we use that for now, and if someone creates a statistic for\nclumpiness, we'll re-evaluate at some later date.\n---\n src/backend/optimizer/path/costsize.c | 84 ++++++++++++++++++++++++++++-------\n src/backend/optimizer/path/indxpath.c | 8 ++--\n src/include/nodes/pathnodes.h | 3 ++\n src/include/optimizer/cost.h | 2 +-\n 4 files changed, 77 insertions(+), 20 deletions(-)\n\ndiff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c\nindex c5f6593..aaac29a 100644\n--- a/src/backend/optimizer/path/costsize.c\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -549,11 +549,12 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count,\n \n \t/*\n \t * Save amcostestimate's results for possible use in bitmap scan planning.\n-\t * We don't bother to save indexStartupCost or indexCorrelation, because a\n-\t * bitmap scan doesn't care about either.\n+\t * We don't bother to save indexStartupCost, because a\n+\t * bitmap scan doesn't care.\n \t */\n \tpath->indextotalcost = indexTotalCost;\n \tpath->indexselectivity = indexSelectivity;\n+\tpath->indexCorrelation = indexCorrelation;\n \n \t/* all costs for touching index itself included here */\n 
\tstartup_cost += indexStartupCost;\n@@ -986,12 +987,33 @@ cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,\n \t * appropriate to charge spc_seq_page_cost apiece. The effect is\n \t * nonlinear, too. For lack of a better idea, interpolate like this to\n \t * determine the cost per page.\n+\t * Note this works at PAGE granularity, so even if we read 1% of a\n+\t * table's tuples, if we have to read nearly every page, it should be\n+\t * considered sequential.\n \t */\n-\tif (pages_fetched >= 2.0)\n+\tif (pages_fetched >= 2.0) {\n+\t\tdouble correlation, cost_per_page2;\n \t\tcost_per_page = spc_random_page_cost -\n \t\t\t(spc_random_page_cost - spc_seq_page_cost)\n \t\t\t* sqrt(pages_fetched / T);\n-\telse\n+\n+\t\t// XXX: interpolate based on correlation from seq_page_cost to rand_page_cost?\n+\t\t// a highly correlated bitmap scan 1) likely reads fewer pages; and,\n+\t\t// 2) at higher \"density\" (more sequential).\n+\t\t// double correlation = get_indexpath_correlation(root, bitmapqual);\n+\t\tcorrelation = ((IndexPath *)bitmapqual)->indexCorrelation;\n+\t\tcost_per_page2 = spc_seq_page_cost +\n+\t\t\t(1-correlation*correlation)*(spc_random_page_cost - spc_seq_page_cost); // XXX: *sqrt(pages_fetched/T) ?\n+\t\t// There are two variables: fraction of pages(T) and correlation.\n+\t\t// If T is high, giving sequential reads, we want low cost_per_page regardless of correlation.\n+\t\t// If correlation is high, we (probably) want low cost per page.\n+\t\t// ...the exception is if someone does an =ANY() query on a list of non-consecutive values.\n+\t\t// Something like start_time=ANY('2017-01-01', '2017-02-01',...)\n+\t\t// which reads small number of rows from pages all across the length of the table.\n+\t\t// But index scan doesn't seem to do address that at all, so leave it alone for now.\n+\t\tcost_per_page=Min(cost_per_page, cost_per_page2);\n+\t\t// cost_per_page=sqrt(cost_per_page*cost_per_page + cost_per_page2*cost_per_page2);\n+\t} 
else\n \t\tcost_per_page = spc_random_page_cost;\n \n \trun_cost += pages_fetched * cost_per_page;\n@@ -1035,15 +1057,16 @@ cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *baserel,\n \n /*\n * cost_bitmap_tree_node\n- *\t\tExtract cost and selectivity from a bitmap tree node (index/and/or)\n+ *\t\tExtract cost, selectivity, and correlation from a bitmap tree node (index/and/or)\n */\n void\n-cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)\n+cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec, double *correlation)\n {\n \tif (IsA(path, IndexPath))\n \t{\n \t\t*cost = ((IndexPath *) path)->indextotalcost;\n \t\t*selec = ((IndexPath *) path)->indexselectivity;\n+\t\tif (correlation) *correlation = ((IndexPath *) path)->indexCorrelation;\n \n \t\t/*\n \t\t * Charge a small amount per retrieved tuple to reflect the costs of\n@@ -1057,11 +1080,13 @@ cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec)\n \t{\n \t\t*cost = path->total_cost;\n \t\t*selec = ((BitmapAndPath *) path)->bitmapselectivity;\n+\t\tif (correlation) *correlation = ((BitmapAndPath *) path)->bitmapcorrelation;\n \t}\n \telse if (IsA(path, BitmapOrPath))\n \t{\n \t\t*cost = path->total_cost;\n \t\t*selec = ((BitmapOrPath *) path)->bitmapselectivity;\n+\t\tif (correlation) *correlation = ((BitmapOrPath *) path)->bitmapcorrelation;\n \t}\n \telse\n \t{\n@@ -1084,8 +1109,9 @@ void\n cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root)\n {\n \tCost\t\ttotalCost;\n-\tSelectivity selec;\n+\tSelectivity selec, minsubselec;\n \tListCell *l;\n+\tdouble\t\tcorrelation;\n \n \t/*\n \t * We estimate AND selectivity on the assumption that the inputs are\n@@ -1097,22 +1123,32 @@ cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root)\n \t * definitely too simplistic?\n \t */\n \ttotalCost = 0.0;\n-\tselec = 1.0;\n+\tminsubselec = selec = 1.0;\n+\tcorrelation = 0;\n \tforeach(l, path->bitmapquals)\n \t{\n \t\tPath\t *subpath = (Path *) 
lfirst(l);\n \t\tCost\t\tsubCost;\n \t\tSelectivity subselec;\n+\t\tdouble\t\tsubcorrelation;\n \n-\t\tcost_bitmap_tree_node(subpath, &subCost, &subselec);\n+\t\tcost_bitmap_tree_node(subpath, &subCost, &subselec, &subcorrelation);\n \n \t\tselec *= subselec;\n \n+\t\t/* For an AND node, use the correlation of its most-selective subpath */\n+\t\tif (subselec<=minsubselec) {\n+\t\t\t\tcorrelation = subcorrelation;\n+\t\t\t\tminsubselec = subselec;\n+\t\t}\n+\n \t\ttotalCost += subCost;\n \t\tif (l != list_head(path->bitmapquals))\n+\t\t\t// ??? XXX && !IsA(subpath, IndexPath))\n \t\t\ttotalCost += 100.0 * cpu_operator_cost;\n \t}\n \tpath->bitmapselectivity = selec;\n+\tpath->bitmapcorrelation = correlation;\n \tpath->path.rows = 0;\t\t/* per above, not used */\n \tpath->path.startup_cost = totalCost;\n \tpath->path.total_cost = totalCost;\n@@ -1128,8 +1164,9 @@ void\n cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)\n {\n \tCost\t\ttotalCost;\n-\tSelectivity selec;\n+\tSelectivity selec, maxsubselec;\n \tListCell *l;\n+\tdouble\t\tcorrelation;\n \n \t/*\n \t * We estimate OR selectivity on the assumption that the inputs are\n@@ -1142,23 +1179,32 @@ cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root)\n \t * optimized out when the inputs are BitmapIndexScans.\n \t */\n \ttotalCost = 0.0;\n-\tselec = 0.0;\n+\tmaxsubselec = selec = 0.0;\n+\tcorrelation = 0;\n \tforeach(l, path->bitmapquals)\n \t{\n \t\tPath\t *subpath = (Path *) lfirst(l);\n \t\tCost\t\tsubCost;\n \t\tSelectivity subselec;\n+\t\tdouble\t\tsubcorrelation;\n \n-\t\tcost_bitmap_tree_node(subpath, &subCost, &subselec);\n+\t\tcost_bitmap_tree_node(subpath, &subCost, &subselec, &subcorrelation);\n \n \t\tselec += subselec;\n \n+\t\t/* For an OR node, use the correlation of its least-selective subpath */\n+\t\tif (subselec>=maxsubselec) {\n+\t\t\t\tcorrelation = subcorrelation;\n+\t\t\t\tmaxsubselec = subselec;\n+\t\t}\n+\n \t\ttotalCost += subCost;\n \t\tif (l != list_head(path->bitmapquals) 
&&\n \t\t\t!IsA(subpath, IndexPath))\n \t\t\ttotalCost += 100.0 * cpu_operator_cost;\n \t}\n \tpath->bitmapselectivity = Min(selec, 1.0);\n+\tpath->bitmapcorrelation = correlation;\n \tpath->path.rows = 0;\t\t/* per above, not used */\n \tpath->path.startup_cost = totalCost;\n \tpath->path.total_cost = totalCost;\n@@ -5510,8 +5556,11 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual,\n {\n \tCost\t\tindexTotalCost;\n \tSelectivity indexSelectivity;\n+\tdouble\t\tindexCorrelation;\n \tdouble\t\tT;\n-\tdouble\t\tpages_fetched;\n+\tdouble\t\tpages_fetched,\n+\t\t\t\tpages_fetchedMIN,\n+\t\t\t\tpages_fetchedMAX;\n \tdouble\t\ttuples_fetched;\n \tdouble\t\theap_pages;\n \tlong\t\tmaxentries;\n@@ -5520,7 +5569,7 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual,\n \t * Fetch total cost of obtaining the bitmap, as well as its total\n \t * selectivity.\n \t */\n-\tcost_bitmap_tree_node(bitmapqual, &indexTotalCost, &indexSelectivity);\n+\tcost_bitmap_tree_node(bitmapqual, &indexTotalCost, &indexSelectivity, &indexCorrelation);\n \n \t/*\n \t * Estimate number of main-table pages fetched.\n@@ -5534,7 +5583,12 @@ compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel, Path *bitmapqual,\n \t * the same as the Mackert and Lohman formula for the case T <= b (ie, no\n \t * re-reads needed).\n \t */\n-\tpages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n+\tpages_fetchedMAX = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n+\n+\t/* pages_fetchedMIN is for the perfectly correlated case (csquared=1) */\n+\tpages_fetchedMIN = ceil(indexSelectivity * (double) baserel->pages);\n+\n+\tpages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX);\n \n \t/*\n \t * Calculate the number of pages fetched from the heap. 
Then based on\ndiff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c\nindex 37b257c..2a3db34 100644\n--- a/src/backend/optimizer/path/indxpath.c\n+++ b/src/backend/optimizer/path/indxpath.c\n@@ -1467,8 +1467,8 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel, List *paths)\n \t\t\tSelectivity nselec;\n \t\t\tSelectivity oselec;\n \n-\t\t\tcost_bitmap_tree_node(pathinfo->path, &ncost, &nselec);\n-\t\t\tcost_bitmap_tree_node(pathinfoarray[i]->path, &ocost, &oselec);\n+\t\t\tcost_bitmap_tree_node(pathinfo->path, &ncost, &nselec, NULL);\n+\t\t\tcost_bitmap_tree_node(pathinfoarray[i]->path, &ocost, &oselec, NULL);\n \t\t\tif (ncost < ocost)\n \t\t\t\tpathinfoarray[i] = pathinfo;\n \t\t}\n@@ -1580,8 +1580,8 @@ path_usage_comparator(const void *a, const void *b)\n \tSelectivity aselec;\n \tSelectivity bselec;\n \n-\tcost_bitmap_tree_node(pa->path, &acost, &aselec);\n-\tcost_bitmap_tree_node(pb->path, &bcost, &bselec);\n+\tcost_bitmap_tree_node(pa->path, &acost, &aselec, NULL);\n+\tcost_bitmap_tree_node(pb->path, &bcost, &bselec, NULL);\n \n \t/*\n \t * If costs are the same, sort by selectivity.\ndiff --git a/src/include/nodes/pathnodes.h b/src/include/nodes/pathnodes.h\nindex 23a06d7..beaac03 100644\n--- a/src/include/nodes/pathnodes.h\n+++ b/src/include/nodes/pathnodes.h\n@@ -1181,6 +1181,7 @@ typedef struct IndexPath\n \tScanDirection indexscandir;\n \tCost\t\tindextotalcost;\n \tSelectivity indexselectivity;\n+\tdouble\t\tindexCorrelation;\n } IndexPath;\n \n /*\n@@ -1261,6 +1262,7 @@ typedef struct BitmapAndPath\n \tPath\t\tpath;\n \tList\t *bitmapquals;\t/* IndexPaths and BitmapOrPaths */\n \tSelectivity bitmapselectivity;\n+\tdouble\t\tbitmapcorrelation;\n } BitmapAndPath;\n \n /*\n@@ -1274,6 +1276,7 @@ typedef struct BitmapOrPath\n \tPath\t\tpath;\n \tList\t *bitmapquals;\t/* IndexPaths and BitmapAndPaths */\n \tSelectivity bitmapselectivity;\n+\tdouble\t\tbitmapcorrelation;\n } BitmapOrPath;\n \n /*\ndiff --git 
a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h\nindex b3d0b4f..9a28665 100644\n--- a/src/include/optimizer/cost.h\n+++ b/src/include/optimizer/cost.h\n@@ -79,7 +79,7 @@ extern void cost_bitmap_heap_scan(Path *path, PlannerInfo *root, RelOptInfo *bas\n \t\t\t\t\t\t\t\t Path *bitmapqual, double loop_count);\n extern void cost_bitmap_and_node(BitmapAndPath *path, PlannerInfo *root);\n extern void cost_bitmap_or_node(BitmapOrPath *path, PlannerInfo *root);\n-extern void cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec);\n+extern void cost_bitmap_tree_node(Path *path, Cost *cost, Selectivity *selec, double *correlation);\n extern void cost_tidscan(Path *path, PlannerInfo *root,\n \t\t\t\t\t\t RelOptInfo *baserel, List *tidquals, ParamPathInfo *param_info);\n extern void cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root,\n-- \n2.7.4",
"msg_date": "Sat, 2 Nov 2019 15:26:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Sat, Nov 02, 2019 at 03:26:17PM -0500, Justin Pryzby wrote:\n> Attached is a fixed and rebasified patch for cfbot.\n> Included inline for conceptual review.\n\nYour patch still causes two regression tests to fail per Mr Robot's\nreport: join and select. Could you look at those problems? I have\nmoved the patch to next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 12:34:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Sun, Dec 01, 2019 at 12:34:37PM +0900, Michael Paquier wrote:\n> On Sat, Nov 02, 2019 at 03:26:17PM -0500, Justin Pryzby wrote:\n> > Attached is a fixed and rebasified patch for cfbot.\n> > Included inline for conceptual review.\n> \n> Your patch still causes two regression tests to fail per Mr Robot's\n> report: join and select. Could you look at those problems? I have\n> moved the patch to next CF, waiting on author.\n\nThe regression failures seem to be due to deliberate, expected plan changes.\n\nI think the patch still needs some discussion, but find attached rebasified\npatch including regression diffs.\n\nJustin",
"msg_date": "Sun, 1 Dec 2019 10:00:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "Find attached cleaned up patch.\nFor now, I updated the regress/expected/, but I think the test maybe has to be\nupdated to do what it was written to do.",
"msg_date": "Mon, 6 Jan 2020 13:58:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 1:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Find attached cleaned up patch.\n> For now, I updated the regress/expected/, but I think the test maybe has to be\n> updated to do what it was written to do.\n\nI have noticed that in \"cost_index\" we have used the indexCorrelation\nfor computing the run_cost, not the number of pages whereas in your\npatch you have used it for computing the number of pages. Any reason\nfor the same?\n\ncost_index\n{\n..\n/*\n* Now interpolate based on estimated index order correlation to get total\n* disk I/O cost for main table accesses.\n*/\ncsquared = indexCorrelation * indexCorrelation;\nrun_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n}\n\nPatch\n- pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n+ pages_fetchedMAX = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n+\n+ /* pages_fetchedMIN is for the perfectly correlated case (csquared=1) */\n+ pages_fetchedMIN = ceil(indexSelectivity * (double) baserel->pages);\n+\n+ pages_fetched = pages_fetchedMAX +\nindexCorrelation*indexCorrelation*(pages_fetchedMIN -\npages_fetchedMAX);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Jan 2020 09:21:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 09:21:03AM +0530, Dilip Kumar wrote:\n> On Tue, Jan 7, 2020 at 1:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Find attached cleaned up patch.\n> > For now, I updated the regress/expected/, but I think the test maybe has to be\n> > updated to do what it was written to do.\n> \n> I have noticed that in \"cost_index\" we have used the indexCorrelation\n> for computing the run_cost, not the number of pages whereas in your\n> patch you have used it for computing the number of pages. Any reason\n> for the same?\n\nAs Jeff has pointed out, high correlation has two effects in cost_index():\n1) the number of pages read will be less;\n2) the pages will be read more sequentially;\n\ncost_index reuses the pages_fetched variable, so (1) isn't particularly clear,\nbut does essentially:\n\n /* max_IO_cost is for the perfectly uncorrelated case (csquared=0) */\n pages_fetched(MIN) = index_pages_fetched(tuples_fetched,\n baserel->pages,\n (double) index->pages,\n root);\n max_IO_cost = pages_fetchedMIN * spc_random_page_cost;\n\n /* min_IO_cost is for the perfectly correlated case (csquared=1) */\n pages_fetched(MAX) = ceil(indexSelectivity * (double) baserel->pages);\n min_IO_cost = (pages_fetchedMAX - 1) * spc_seq_page_cost;\n\n\nMy patch 1) changes compute_bitmap_pages() to interpolate pages_fetched using\nthe correlation; pages_fetchedMIN is new:\n\n> Patch\n> - pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n> + pages_fetchedMAX = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n> +\n> + /* pages_fetchedMIN is for the perfectly correlated case (csquared=1) */\n> + pages_fetchedMIN = ceil(indexSelectivity * (double) baserel->pages);\n> +\n> + pages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX);\n\nAnd, 2) also computes cost_per_page by interpolation between seq_page and\nrandom_page cost:\n\n+ cost_per_page_corr = spc_random_page_cost -\n+ (spc_random_page_cost - spc_seq_page_cost)\n+ * (1-correlation*correlation);\n\nThanks for looking. I'll update the name of pages_fetchedMIN/MAX in my patch\nfor consistency with cost_index.\n\nJustin\n\n\n",
"msg_date": "Mon, 6 Jan 2020 23:26:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
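A runnable sketch of the two interpolations described in the message above may help. This is a simplified Python model of the patched compute_bitmap_pages() logic, not the actual C code; the function name and default cost constants are invented for illustration, and the cost-per-page term uses the corrected csquared factor from the follow-up fix in this thread:

```python
import math

def bitmap_heap_estimates(tuples_fetched, rel_pages, selectivity, correlation,
                          random_page_cost=4.0, seq_page_cost=1.0):
    """Interpolate pages fetched and cost per page by squared correlation
    (a sketch of the proposed bitmap costing, not PostgreSQL's C code)."""
    T = max(rel_pages, 1)
    csquared = correlation * correlation

    # Uncorrelated bound: Mackert-Lohman style estimate (pages_fetchedMAX).
    pages_max = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched)
    # Perfectly correlated bound: contiguous fraction of the heap (pages_fetchedMIN).
    pages_min = math.ceil(selectivity * rel_pages)

    pages_fetched = pages_max + csquared * (pages_min - pages_max)
    # Cost per page slides from random (c == 0) toward sequential (c == 1).
    cost_per_page = random_page_cost - (random_page_cost - seq_page_cost) * csquared
    return pages_fetched, cost_per_page
```

At correlation 0 this reproduces the existing uncorrelated estimate at random-page cost; at correlation 1 it reaches the fully sequential case, mirroring what cost_index() already does with max_IO_cost and min_IO_cost.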
{
"msg_contents": "On Mon, Jan 06, 2020 at 11:26:06PM -0600, Justin Pryzby wrote:\n> As Jeff has pointed out, high correlation has two effects in cost_index():\n> 1) the number of pages read will be less;\n> 2) the pages will be read more sequentially;\n> \n> cost_index reuses the pages_fetched variable, so (1) isn't particularly clear,\n\nI tried to make this more clear in 0001\n\n> + cost_per_page_corr = spc_random_page_cost -\n> + (spc_random_page_cost - spc_seq_page_cost)\n> + * (1-correlation*correlation);\n\nAnd fixed bug: this should be c*c not 1-c*c.",
"msg_date": "Sun, 12 Jan 2020 19:47:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
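The effect of that one-character-class fix can be seen with a small sketch (hypothetical Python, not the patch itself): with the 1-c*c factor, a perfectly correlated index (c=1) would keep being charged the random-page cost, which is backwards.

```python
def cost_per_page(random_cost, seq_cost, correlation, buggy=False):
    # Corrected form multiplies by c*c; the original patch used 1 - c*c.
    factor = (1 - correlation ** 2) if buggy else correlation ** 2
    return random_cost - (random_cost - seq_cost) * factor
```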
{
"msg_contents": "There were no comments last month, so rebased, fixed tests, and kicked to next\nCF.\n\n-- \nJustin",
"msg_date": "Fri, 13 Mar 2020 09:09:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "Status update for a commitfest entry\r\n\r\nAccording to cfbot, the patch fails to apply. Could you please send a rebased version?\r\n\r\nI wonder why this patch hangs so long without a review. Maybe it will help to move discussion forward, if you provide more examples of queries that can benefit from this improvement?\r\n\r\nThe first patch is simply a refactoring and I don't see any possible objections against it.\r\nThe second patch also looks fine to me. The logic is understandable and the code is neat.\r\n\r\nIt wouldn't hurt to add a comment for this computation, though.\r\n+\tpages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX);\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 06 Nov 2020 13:51:26 +0000",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Fri, Nov 06, 2020 at 01:51:26PM +0000, Anastasia Lubennikova wrote:\n> I wonder why this patch hangs so long without a review. Maybe it will help to move discussion forward, if you provide more examples of queries that can benefit from this imporovement?\n\nThanks for looking.\n\nThe explanation is that the planner currently gives index scans a cost\n\"discount\" for correlation. Jeff Janes has pointed out that there are two\ndiscounts: 1) fewer pages are read; and, 2) lower cost-per-page. This patch\naims to give bitmap scans the same benefits. A \"dense\" bitmap will read fewer\npages, more sequentially.\n\nTom pointed out that the \"correlation\" isn't a perfect metric for this, since\nthe index might be \"clumpy\" without being well-ordered, which doesn't matter\nfor bitmap scans, which access in physical order anyway. In those cases, the\ncorrelation logic would fail to reduce the estimated cost of bitmap scan, even\nthough the actual cost is less (same as now). This is true, but my goal is to\ngive bitmap scans at least the same benefit as index scans, even if there's\nadditional \"discounts\" which aren't yet being considered.\n\nThis was an issue for me in the past when the planner chose to scan an index,\nbut it was slower than projected (for reasons unrelated to this patch). Making\nbitmap cost account for high correlation was one step towards addressing that.\nSince then, we switched to brin indexes, which force bitmap scans.\nhttps://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com\n\nHere's an example.\n\nCREATE TABLE t AS SELECT a,b FROM generate_series(1,999) a, generate_series(1,999) b ORDER BY a+b/9.9;\nCREATE INDEX ON t(a);\n\npostgres=# SELECT attname, correlation FROM pg_stats WHERE tablename ='t';\n a | 0.9951819\n b | 0.10415093\n\npostgres=# explain analyze SELECT * FROM t WHERE a BETWEEN 55 AND 77;\n Index Scan using t_a_idx on t (cost=0.42..810.89 rows=22683 width=8) (actual time=0.292..66.657 rows=22977 loops=1)\n\nvs (without my patch, with SET enable_indexscan=off);\n Bitmap Heap Scan on t (cost=316.93..5073.17 rows=22683 width=8) (actual time=10.810..26.633 rows=22977 loops=1)\n\nvs (with my patch, with SET enable_indexscan=off):\npostgres=# explain analyze SELECT * FROM t WHERE a BETWEEN 55 AND 77;\n Bitmap Heap Scan on t (cost=316.93..823.84 rows=22683 width=8) (actual time=10.742..33.279 rows=22977 loops=1)\n\nSo bitmap scan is cheaper, but the cost estimate is a lot higher. My patch\nimproves but doesn't completely \"fix\" that - bitmap scan is still costed as\nmore expensive, but happens to be. This is probably not even a particularly\ngood example, as it's a small table cached in RAM. There's always going to be\ncases like this, certainly near the costs where the plan changes \"shape\". I\nthink a cost difference of 10 here is very reasonable (cpu_oper_cost,\nprobably), but a cost difference of 5x is not.\n\nThere's not many regression tests changed. Probably partially because bitmap\nscans have an overhead (the heap scan cannot start until after the index scan\nfinishes), and we avoid large tests.\n\nIf there's no interest in the patch, I guess we should just close it rather\nthan letting it rot.\n\n> The first patch is simply a refactoring and don't see any possible objections against it.\n> The second patch also looks fine to me. The logic is understandable and the code is neat.\n> \n> It wouldn't hurt to add a comment for this computation, though.\n> +\tpages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX);\n\nYou're right. It's like this:\n// interpolate between c==0: pages_fetched=max and c==1: pages_fetched=min\npages_fetched = min + (max-min)*(1-c**2) \npages_fetched = min + max*(1-c**2) - min*(1-c**2)\npages_fetched = max*(1-c**2) + min - min*(1-c**2)\npages_fetched = max*(1-c**2) + min*(c**2)\npages_fetched = max - max*c**2 + min*(c**2)\npages_fetched = max + min*(c**2) - max*c**2\npages_fetched = max + c**2 * (min-max)\n\nI'm not sure if there's a reason why it's written like that, but (min-max)\nlooks odd, so I wrote it like:\npages_fetched = max - c**2 * (max-min)\n\n> The new status of this patch is: Waiting on Author",
"msg_date": "Fri, 6 Nov 2020 11:57:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: bitmaps and correlation"
},
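The chain of rearrangements in the message above is easy to spot-check numerically; a quick sketch (illustrative Python, not from the patch) comparing the starting form with the rewritten one:

```python
def interpolate_original(mn, mx, c):
    # Starting point of the derivation: max at c == 0, min at c == 1.
    return mn + (mx - mn) * (1 - c ** 2)

def interpolate_rewritten(mn, mx, c):
    # Justin's preferred spelling, with (max - min) instead of (min - max).
    return mx - c ** 2 * (mx - mn)
```

The two forms agree for any (min, max, correlation), confirming the algebra.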
{
"msg_contents": "On 06/11/2020 19:57, Justin Pryzby wrote:\n> On Fri, Nov 06, 2020 at 01:51:26PM +0000, Anastasia Lubennikova wrote:\n>> The first patch is simply a refactoring and don't see any possible objections against it.\n>> The second patch also looks fine to me. The logic is understandable and the code is neat.\n\n+1\n\n>> It wouldn't hurt to add a comment for this computation, though.\n>> +\tpages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX);\n> \n> You're right. It's like this:\n> // interpolate between c==0: pages_fetched=max and c==1: pages_fetched=min\n> pages_fetched = min + (max-min)*(1-c**2)\n> pages_fetched = min + max*(1-c**2) - min*(1-c**2)\n> pages_fetched = max*(1-c**2) + min - min*(1-c**2)\n> pages_fetched = max*(1-c**2) + min*(c**2)\n> pages_fetched = max - max*c**2 + min*(c**2)\n> pages_fetched = max + min*(c**2) - max*c**2\n> pages_fetched = max + c**2 * (min-max)\n> \n> I'm not sure if there's a reason why it's written like that, but (min-max)\n> looks odd, so I wrote it like:\n> pages_fetched = max - c**2 * (max-min)\n\nI agree min-max looks odd. max - c**2 * (max-min) looks a bit odd too, \nthough. Whatever we do here, though, I'd suggest that we keep it \nconsistent with cost_index().\n\nOther than that, and a quick pgindent run, this seems ready to me. I'll \nmark it as Ready for Committer.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 27 Nov 2020 19:27:19 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Other than that, and a quick pgdindent run, this seems ready to me. I'll \n> mark it as Ready for Committer.\n\nI dunno, this seems largely misguided to me.\n\nIt's already the case that index correlation is just not the right\nstat for this purpose, since it doesn't give you much of a toehold\non whether a particular scan is going to be accessing tightly-clumped\ndata. For specific kinds of index conditions, such as a range query\non a btree index, maybe you could draw that conclusion ... but this\npatch isn't paying any attention to the index condition in use.\n\nAnd then the rules for bitmap AND and OR correlations, if not just\nplucked out of the air, still seem *far* too optimistic. As an\nexample, even if my individual indexes are perfectly correlated and\nso a probe would touch only one page, OR'ing ten such probes together\nis likely going to touch ten different pages. But unless I'm\nmisreading the patch, it's going to report back an OR correlation\nthat corresponds to touching one page.\n\nEven if we assume that the correlation is nonetheless predictive of\nhow big a part of the table we'll be examining, I don't see a lot\nof basis for deciding that the equations the patch adds to\ncost_bitmap_heap_scan are the right ones.\n\nI'd have expected this thread to focus a whole lot more on actual\nexamples than it has done, so that we could have some confidence\nthat these equations have something to do with reality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Nov 2020 15:48:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Sat, Nov 28, 2020 at 5:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > Other than that, and a quick pgdindent run, this seems ready to me. I'll\n> > mark it as Ready for Committer.\n>\n> I dunno, this seems largely misguided to me.\n>\n> It's already the case that index correlation is just not the right\n> stat for this purpose, since it doesn't give you much of a toehold\n> on whether a particular scan is going to be accessing tightly-clumped\n> data. For specific kinds of index conditions, such as a range query\n> on a btree index, maybe you could draw that conclusion ... but this\n> patch isn't paying any attention to the index condition in use.\n>\n> And then the rules for bitmap AND and OR correlations, if not just\n> plucked out of the air, still seem *far* too optimistic. As an\n> example, even if my individual indexes are perfectly correlated and\n> so a probe would touch only one page, OR'ing ten such probes together\n> is likely going to touch ten different pages. But unless I'm\n> misreading the patch, it's going to report back an OR correlation\n> that corresponds to touching one page.\n>\n> Even if we assume that the correlation is nonetheless predictive of\n> how big a part of the table we'll be examining, I don't see a lot\n> of basis for deciding that the equations the patch adds to\n> cost_bitmap_heap_scan are the right ones.\n>\n> I'd have expected this thread to focus a whole lot more on actual\n> examples than it has done, so that we could have some confidence\n> that these equations have something to do with reality.\n>\n\nStatus update for a commitfest entry.\n\nThe discussion has been inactive since the end of the last CF. It\nseems to me that we need some discussion on the point Tom mentioned.\nIt looks either \"Needs Review\" or \"Ready for Committer\" status but\nJustin set it to \"Waiting on Author\" on 2020-12-03 by himself. Are you\nworking on this, Justin?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 21:51:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 9:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Nov 28, 2020 at 5:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > Other than that, and a quick pgdindent run, this seems ready to me. I'll\n> > > mark it as Ready for Committer.\n> >\n> > I dunno, this seems largely misguided to me.\n> >\n> > It's already the case that index correlation is just not the right\n> > stat for this purpose, since it doesn't give you much of a toehold\n> > on whether a particular scan is going to be accessing tightly-clumped\n> > data. For specific kinds of index conditions, such as a range query\n> > on a btree index, maybe you could draw that conclusion ... but this\n> > patch isn't paying any attention to the index condition in use.\n> >\n> > And then the rules for bitmap AND and OR correlations, if not just\n> > plucked out of the air, still seem *far* too optimistic. As an\n> > example, even if my individual indexes are perfectly correlated and\n> > so a probe would touch only one page, OR'ing ten such probes together\n> > is likely going to touch ten different pages. But unless I'm\n> > misreading the patch, it's going to report back an OR correlation\n> > that corresponds to touching one page.\n> >\n> > Even if we assume that the correlation is nonetheless predictive of\n> > how big a part of the table we'll be examining, I don't see a lot\n> > of basis for deciding that the equations the patch adds to\n> > cost_bitmap_heap_scan are the right ones.\n> >\n> > I'd have expected this thread to focus a whole lot more on actual\n> > examples than it has done, so that we could have some confidence\n> > that these equations have something to do with reality.\n> >\n>\n> Status update for a commitfest entry.\n>\n> The discussion has been inactive since the end of the last CF. It\n> seems to me that we need some discussion on the point Tom mentioned.\n> It looks either \"Needs Review\" or \"Ready for Committer\" status but\n> Justin set it to \"Waiting on Author\" on 2020-12-03 by himself. Are you\n> working on this, Justin?\n>\n\nStatus update for a commitfest entry.\n\nThis patch, which you submitted to this CommitFest, has been awaiting\nyour attention for more than one month. As such, we have moved it to\n\"Returned with Feedback\" and removed it from the reviewing queue.\nDepending on timing, this may be reversible, so let us know if there\nare extenuating circumstances. In any case, you are welcome to address\nthe feedback you have received, and resubmit the patch to the next\nCommitFest.\n\nThank you for contributing to PostgreSQL.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 22:32:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bitmaps and correlation"
}
]
[
{
"msg_contents": " Hi,\n\nThe doc on COPY CSV says about the backslash-dot sequence:\n\n To avoid any misinterpretation, a \\. data value appearing as a\n lone entry on a line is automatically quoted on output, and on\n input, if quoted, is not interpreted as the end-of-data marker\n\nHowever this quoting does not happen when \\. is already part\nof a quoted field. Example:\n\nCOPY (select 'somevalue', E'foo\\n\\\\.\\nbar') TO STDOUT CSV;\n\noutputs:\n\nsomevalue,\"foo\n\\.\nbar\"\n\nwhich conforms to the CSV rules, by which we are not allowed\nto replace \\. by anything AFAICS.\nThe trouble is, when trying to import this back with COPY FROM,\nit will error out at the backslash-dot and not import anything.\nFurthermore, if these data are meant to be embedded into a\nscript, it creates a potential risk of SQL injection.\n\nIs it a known issue? I haven't found previous discussions on this.\nIt looks to me like the ability of backslash-dot to be an end-of-data\nmarker should be neutralizable for CSV. When the data is not embedded,\nit's not needed anyway, and when it's embedded, we could surely think\nof alternatives.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n",
"msg_date": "Wed, 02 Jan 2019 16:58:35 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Wed, Jan 2, 2019 at 04:58:35PM +0100, Daniel Verite wrote:\n> Hi,\n> \n> The doc on COPY CSV says about the backslash-dot sequence:\n> \n> To avoid any misinterpretation, a \\. data value appearing as a\n> lone entry on a line is automatically quoted on output, and on\n> input, if quoted, is not interpreted as the end-of-data marker\n> \n> However this quoting does not happen when \\. is already part\n> of a quoted field. Example:\n> \n> COPY (select 'somevalue', E'foo\\n\\\\.\\nbar') TO STDOUT CSV;\n> \n> outputs:\n> \n> somevalue,\"foo\n> \\.\n> bar\"\n> \n> which conforms to the CSV rules, by which we are not allowed\n> to replace \\. by anything AFAICS.\n> The trouble is, when trying to import this back with COPY FROM,\n> it will error out at the backslash-dot and not import anything.\n> Furthermore, if these data are meant to be embedded into a\n> script, it creates a potential risk of SQL injection.\n> \n> It is a known issue? I haven't found previous discussions on this.\n> It looks to me like the ability of backslash-dot to be an end-of-data\n> marker should be neutralizable for CSV. When the data is not embedded,\n> it's not needed anyway, and when it's embedded, we could surely think\n> of alternatives.\n\nI was unable to reproduce the failure here using files:\n\n\tCREATE TABLE test (x TEXT);\n\tINSERT INTO test VALUES (E'foo\\n\\\\.\\nbar');\n\n\tCOPY test TO STDOUT CSV;\n\t\"foo\n\t\\.\n\tbar\"\n\n\tCOPY test TO '/u/postgres/tmp/x' CSV;\n\t\n\tCOPY test FROM '/u/postgres/tmp/x' CSV;\n\n\tSELECT * FROM test;\n\t x\n\t-----\n\t foo+\n\t \\. +\n\t bar\n\t foo+\n\t \\. +\n\t bar\n\nbut I am able to see the failure using STDIN:\n\n\tCOPY test FROM STDIN CSV;\n\tEnter data to be copied followed by a newline.\n\tEnd with a backslash and a period on a line by itself, or an EOF signal.\n\t\"foo\n\t\\.\n\tERROR: unterminated CSV quoted field\n\tCONTEXT: COPY test, line 1: \"\"foo\n\nThis seems like a bug to me. Looking at the code, psql issues the\nprompts for STDIN, but when it sees \\. alone on a line, it has no idea\nyou are in a quoted CSV string, so it thinks the copy is done and sends\nthe result to the server. I can't see an easy way to fix this. I guess\nwe could document it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Thu, 24 Jan 2019 22:09:30 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "\tBruce Momjian wrote:\n\n> but I am able to see the failure using STDIN:\n> \n> COPY test FROM STDIN CSV;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF\n> signal.\n> \"foo\n> \\.\n> ERROR: unterminated CSV quoted field\n> CONTEXT: COPY test, line 1: \"\"foo\n> \n> This seems like a bug to me. Looking at the code, psql issues the\n> prompts for STDIN, but when it sees \\. alone on a line, it has no idea\n> you are in a quoted CSV string, so it thinks the copy is done and sends\n> the result to the server. I can't see an easy way to fix this. I guess\n> we could document it.\n\nThanks for looking into this. \n\n\\copy from file with csv is also affected since it uses COPY FROM\nSTDIN behind the scene. The case of embedded data looks more worrying\nbecause psql will execute the data following \\. as if they were\nSQL statements.\n\nISTM that only ON_ERROR_STOP=on prevents the risk of SQL injection\nin that scenario, but it's off by default.\n\nWhat about this idea: when psql is feeding COPY data from its command\nstream and an error occurs, it should act as if ON_ERROR_STOP was \"on\"\neven if it's not.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n",
"msg_date": "Fri, 25 Jan 2019 13:01:22 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Thu, Jan 24, 2019 at 10:09:30PM -0500, Bruce Momjian wrote:\n> This seems like a bug to me. Looking at the code, psql issues the\n> prompts for STDIN, but when it sees \\. alone on a line, it has no idea\n> you are in a quoted CSV string, so it thinks the copy is done and sends\n> the result to the server. I can't see an easy way to fix this. I guess\n> we could document it.\n\nIn src/bin/psql/copy.c, handleCopyIn():\n\n/*\n * This code erroneously assumes '\\.' on a line alone\n * inside a quoted CSV string terminates the \\copy.\n * http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org\n */\nif (strcmp(buf, \"\\\\.\\n\") == 0 ||\n strcmp(buf, \"\\\\.\\r\\n\") == 0)\n{\n copydone = true;\n break;\n}\n\nThis story pops up from time to time..\n--\nMichael",
"msg_date": "Sun, 27 Jan 2019 22:10:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
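The failure mode is easy to reproduce outside psql. The sketch below (Python, purely illustrative — psql's actual loop is the C code quoted above) shows a line-based check truncating a quoted field at the bare \. line, while a CSV-aware parser reads the same bytes as one field:

```python
import csv
import io

# The CSV payload from the earlier example: one quoted field spanning three lines.
lines = ['"foo\n', '\\.\n', 'bar"\n']

def line_based_copy_end(lines):
    """Mimic handleCopyIn(): treat a lone backslash-dot line as end-of-data."""
    collected = []
    for line in lines:
        if line in ('\\.\n', '\\.\r\n'):
            break  # misfires here, inside the quoted field
        collected.append(line)
    return ''.join(collected)

truncated = line_based_copy_end(lines)                 # stops after the first line
rows = list(csv.reader(io.StringIO(''.join(lines))))   # parses the whole field
```

The line-based scan hands over an unterminated quoted field — the "unterminated CSV quoted field" error shown earlier — while the CSV parser yields a single three-line value.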
{
"msg_contents": "\tMichael Paquier wrote:\n\n> In src/bin/psql/copy.c, handleCopyIn():\n> \n> /*\n> * This code erroneously assumes '\\.' on a line alone\n> * inside a quoted CSV string terminates the \\copy.\n> *\n> http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org\n> */\n> if (strcmp(buf, \"\\\\.\\n\") == 0 ||\n> strcmp(buf, \"\\\\.\\r\\n\") == 0)\n> {\n> copydone = true;\n> break;\n> }\n\nIndeed, it's exactly that problem.\nAnd there's the related problem that it derails the input stream\nin a way that lines of data become commands, but that one is\nnot specific to that particular error.\n\nFor the backslash-dot in a quoted string, the root cause is\nthat psql is not aware that the contents are CSV so it can't\nparse them properly.\nI can think of several ways of working around that, more or less\ninelegant:\n\n- the end of data could be expressed as a length (in number of lines\nfor instance) instead of an in-data marker.\n\n- the end of data could be configurable, as in the MIME structure of\nmultipart mail messages, where a part is ended by a \"boundary\",\nline, generally a long randomly generated string. This boundary\nwould have to be known to psql through setting a dedicated\nvariable or command.\n\n- COPY as the SQL command could have the boundary option\nfor data fed through its STDIN. This could neutralize the\nspecial role of backslash-dot in general, not just in quoted fields,\nsince the necessity to quote backslash-dot is a wart anyway.\n\n- psql could be told somehow that the next piece of inline data is in\nthe CSV format, and then pass it through a CSV parser.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n",
"msg_date": "Mon, 28 Jan 2019 16:06:17 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Sun, Jan 27, 2019 at 10:10:36PM +0900, Michael Paquier wrote:\n> On Thu, Jan 24, 2019 at 10:09:30PM -0500, Bruce Momjian wrote:\n> > This seems like a bug to me. Looking at the code, psql issues the\n> > prompts for STDIN, but when it sees \\. alone on a line, it has no idea\n> > you are in a quoted CSV string, so it thinks the copy is done and sends\n> > the result to the server. I can't see an easy way to fix this. I guess\n> > we could document it.\n> \n> In src/bin/psql/copy.c, handleCopyIn():\n> \n> /*\n> * This code erroneously assumes '\\.' on a line alone\n> * inside a quoted CSV string terminates the \\copy.\n> * http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org\n> */\n> if (strcmp(buf, \"\\\\.\\n\") == 0 ||\n> strcmp(buf, \"\\\\.\\r\\n\") == 0)\n> {\n> copydone = true;\n> break;\n> }\n> \n> This story pops up from time to time..\n\nThe killer is I committed this C comment six years ago, and didn't\nremember it. :-O\n\n\tcommit 361b94c4b98b85b19b850cff37be76d1f6d4f8f7\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\tDate: Thu Jul 4 13:09:52 2013 -0400\n\t\n\t Add C comment about \\copy bug in CSV mode\n\t Comment: This code erroneously assumes '\\.' on a line alone inside a\n\t quoted CSV string terminates the \\copy.\n\t http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org\n\nGlad I mentioned the URL, at least.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 28 Jan 2019 16:40:09 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Fri, Jan 25, 2019 at 01:01:22PM +0100, Daniel Verite wrote:\n> \tBruce Momjian wrote:\n> \n> > but I am able to see the failure using STDIN:\n> > \n> > COPY test FROM STDIN CSV;\n> > Enter data to be copied followed by a newline.\n> > End with a backslash and a period on a line by itself, or an EOF\n> > signal.\n> > \"foo\n> > \\.\n> > ERROR: unterminated CSV quoted field\n> > CONTEXT: COPY test, line 1: \"\"foo\n> > \n> > This seems like a bug to me. Looking at the code, psql issues the\n> > prompts for STDIN, but when it sees \\. alone on a line, it has no idea\n> > you are in a quoted CSV string, so it thinks the copy is done and sends\n> > the result to the server. I can't see an easy way to fix this. I guess\n> > we could document it.\n> \n> Thanks for looking into this. \n> \n> \\copy from file with csv is also affected since it uses COPY FROM\n> STDIN behind the scene. The case of embedded data looks more worrying\n> because psql will execute the data following \\. as if they were\n> SQL statements.\n> \n> ISTM that only ON_ERROR_STOP=on prevents the risk of SQL injection\n> in that scenario, but it's off by default.\n\nYou are correct that someone having data that is SQL commands would be\nable to perhaps execute those commands on restore. pg_dump doesn't use\nCSV, and this only affects STDIN, not files or PROGRAM input. I think\nthe question is how many people are using CSV/STDIN for insecure data\nloads?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 28 Jan 2019 16:44:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Mon, Jan 28, 2019 at 04:06:17PM +0100, Daniel Verite wrote:\n> \tMichael Paquier wrote:\n> \n> > In src/bin/psql/copy.c, handleCopyIn():\n> > \n> > /*\n> > * This code erroneously assumes '\\.' on a line alone\n> > * inside a quoted CSV string terminates the \\copy.\n> > *\n> > http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org\n> > */\n> > if (strcmp(buf, \"\\\\.\\n\") == 0 ||\n> > strcmp(buf, \"\\\\.\\r\\n\") == 0)\n> > {\n> > copydone = true;\n> > break;\n> > }\n> \n> Indeed, it's exactly that problem.\n> And there's the related problem that it derails the input stream\n> in a way that lines of data become commands, but that one is\n> not specific to that particular error.\n> \n> For the backslash-dot in a quoted string, the root cause is\n> that psql is not aware that the contents are CSV so it can't\n> parse them properly.\n> I can think of several ways of working around that, more or less\n> inelegant:\n> \n> - the end of data could be expressed as a length (in number of lines\n> for instance) instead of an in-data marker.\n> \n> - the end of data could be configurable, as in the MIME structure of\n> multipart mail messages, where a part is ended by a \"boundary\",\n> line, generally a long randomly generated string. This boundary\n> would have to be known to psql through setting a dedicated\n> variable or command.\n> \n> - COPY as the SQL command could have the boundary option\n> for data fed through its STDIN. This could neutralize the\n> special role of backslash-dot in general, not just in quoted fields,\n> since the necessity to quote backslash-dot is a wart anyway.\n\nWell, these all kind of require a change to the COPY format, which\nhasn't changed in many years.\n\n> - psql could be told somehow that the next piece of inline data is in\n> the CSV format, and then pass it through a CSV parser.\n\nThat might be the cleanest solution, but how would we actually input\nmulti-line data in CSV mode with \\. alone on a line?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 28 Jan 2019 16:47:25 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "\tBruce Momjian wrote:\n\n> > - the end of data could be expressed as a length (in number of lines\n> > for instance) instead of an in-data marker.\n> > \n> > - the end of data could be configurable, as in the MIME structure of\n> > multipart mail messages, where a part is ended by a \"boundary\",\n> > line, generally a long randomly generated string. This boundary\n> > would have to be known to psql through setting a dedicated\n> > variable or command.\n> > \n> > - COPY as the SQL command could have the boundary option\n> > for data fed through its STDIN. This could neutralize the\n> > special role of backslash-dot in general, not just in quoted fields,\n> > since the necessity to quote backslash-dot is a wart anyway.\n> \n> Well, these all kind of require a change to the COPY format, which\n> hasn't changed in many years.\n\nNot for the first two. As an example of solution #2, it could look like this:\n\n=# \\set INLINE_COPY_BOUNDARY ==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n=# COPY table FROM STDIN CSV;\nsomevalue,\"foo\n\\.\nbar\"\n==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n\nInstead of looking for \\. on a line by itself, psql would look for the\nboundary to know where the data ends.\nThe boundary is not transmitted to the server, it has no need to know\nabout it.\n\n> > - psql could be told somehow that the next piece of inline data is in\n> > the CSV format, and then pass it through a CSV parser.\n> \n> That might be the cleanest solution, but how would we actually input\n> multi-line data in CSV mode with \\. alone on a line?\n\nWith this solution, the content doesn't change at all.\nThe weird part would be the user interface, because the information\npsql needs is not only \"CSV\", it's also the options DELIMITER, QUOTE,\nESCAPE and possibly ENCODING. Currently it doesn't know any of these,\nthey're passed to the server in an opaque, unparsed form within\nthe COPY command.\n\nPersonally, the solution I find cleaner is the server side not having\nany end-of-data marker for CSV. So backslash-dot would never be\nspecial. psql could allow for a custom ending boundary for in-script\ndata, and users could set that to backslash-dot if they want, but that\nwould be their choice.\nThat would be clearly not backward compatible, and I believe it wouldn't\nwork with the v2 protocol, so I'm not sure it would have much chance of\napproval.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n",
"msg_date": "Wed, 30 Jan 2019 18:32:11 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Wed, Jan 30, 2019 at 06:32:11PM +0100, Daniel Verite wrote:\n> \tBruce Momjian wrote:\n> > Well, these all kind of require a change to the COPY format, which\n> > hasn't changed in many years.\n> \n> Not for the first two. As an example of solution #2, it could look like this:\n> \n> =# \\set INLINE_COPY_BOUNDARY ==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n> =# COPY table FROM STDIN CSV;\n> somevalue,\"foo\n> \\.\n> bar\"\n> ==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n> \n> Instead of looking for \\. on a line by itself, psql would look for the\n> boundary to know where the data ends.\n> The boundary is not transmitted to the server, it has no need to know\n> about it.\n\nWow, that is an odd API, as you stated below.\n\n> > > - psql could be told somehow that the next piece of inline data is in\n> > > the CSV format, and then pass it through a CSV parser.\n> > \n> > That might be the cleanest solution, but how would we actually input\n> > multi-line data in CSV mode with \\. alone on a line?\n> \n> With this solution, the content doesn't change at all.\n> The weird part would be the user interface, because the information\n> psql needs is not only \"CSV\", it's also the options DELIMITER, QUOTE,\n> ESCAPE and possibly ENCODING. Currently it doesn't know any of these,\n> they're passed to the server in an opaque, unparsed form within\n> the COPY command.\n> \n> Personally, the solution I find cleaner is the server side not having\n> any end-of-data marker for CSV. So backslash-dot would never be\n> special. 
psql could allow for a custom ending boundary for in-script\n> data, and users could set that to backslash-dot if they want, but that\n> would be their choice.\n> That would be clearly not backward compatible, and I believe it wouldn't\n> work with the v2 protocol, so I'm not sure it would have much chance of\n> approval.\n\nI had forgotten that the DELIMITER and QUOTE can be changed --- that\nkills the idea of adding a simple CSV parser into psql because we would\nhave to parse the COPY SQL command as well.\n\nI am wondering if we should just disallow CSV from STDIN, on security\ngrounds.  How big a problem would that be for people?  Would we have to\ndisable to STDOUT as well since it could not be restored?  Should we\nissue some kind of security warning in such cases?  Should we document\nthis?\n\nIn hindsight, I am not sure how we could have designed this more\nsecurely.  I guess we could have required some special text to start all\nCSV continuation lines that were not end-of-file, but that would have\nbeen very unportable, which is the goal of CSV.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com\n\n+ As you are, so once was I.  As I am, so you will be. +\n+                      Ancient Roman grave inscription +\n\n",
"msg_date": "Wed, 30 Jan 2019 12:50:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "st 30. 1. 2019 18:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Wed, Jan 30, 2019 at 06:32:11PM +0100, Daniel Verite wrote:\n> > Bruce Momjian wrote:\n> > > Well, these all kind of require a change to the COPY format, which\n> > > hasn't changed in many years.\n> >\n> > Not for the first two. As an example of solution #2, it could look like\n> this:\n> >\n> > =# \\set INLINE_COPY_BOUNDARY ==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n> > =# COPY table FROM STDIN CSV;\n> > somevalue,\"foo\n> > \\.\n> > bar\"\n> > ==JuQW3gc2mQjXuvmJ32TlOLhJ3F2Eh2LcsBup0oH7==\n> >\n> > Instead of looking for \\. on a line by itself, psql would look for the\n> > boundary to know where the data ends.\n> > The boundary is not transmitted to the server, it has no need to know\n> > about it.\n>\n> Wow, that is an odd API, as you stated below.\n>\n> > > > - psql could be told somehow that the next piece of inline data is in\n> > > > the CSV format, and then pass it through a CSV parser.\n> > >\n> > > That might be the cleanest solution, but how would we actually input\n> > > multi-line data in CSV mode with \\. alone on a line?\n> >\n> > With this solution, the content doesn't change at all.\n> > The weird part would be the user interface, because the information\n> > psql needs is not only \"CSV\", it's also the options DELIMITER, QUOTE,\n> > ESCAPE and possibly ENCODING. Currently it doesn't know any of these,\n> > they're passed to the server in an opaque, unparsed form within\n> > the COPY command.\n> >\n> > Personally, the solution I find cleaner is the server side not having\n> > any end-of-data marker for CSV. So backslash-dot would never be\n> > special. 
psql could allow for a custom ending boundary for in-script\n> > data, and users could set that to backslash-dot if they want, but that\n> > would be their choice.\n> > That would be clearly not backward compatible, and I believe it wouldn't\n> > work with the v2 protocol, so I'm not sure it would have much chance of\n> > approval.\n>\n> I had forgotten that the DELIMITER and QUOTE can be changed --- that\n> kills the idea of adding a simple CSV parser into psql because we would\n> have to parse the COPY SQL command as well.\n>\n> I am wondering if we should just disallow CSV from STDIN, on security\n> grounds.  How big a problem would that be for people?  Would we have to\n> disable to STDOUT as well since it could not be restored?  Should we\n> issue some kind of security warning in such cases?  Should we document\n> this?\n>\n\nit is pretty common pattern for etl, copy from stdin. I am thinking it can\nbe big problem\n\n\n\n> In hindsight, I am not sure how we could have designed this more\n> securly.  I guess we could have required some special text to start all\n> CSV continuation lines that were not end-of-file, but that would have\n> been very unportable, which is the goal of CSV.\n>\n> --\n>   Bruce Momjian  <bruce@momjian.us>        http://momjian.us\n>   EnterpriseDB                             http://enterprisedb.com\n>\n> + As you are, so once was I.  As I am, so you will be. +\n> +                      Ancient Roman grave inscription +\n>\n>\n",
"msg_date": "Wed, 30 Jan 2019 19:03:18 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 30. 1. 2019 18:51 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n>> I am wondering if we should just disallow CSV from STDIN, on security\n>> grounds. How big a problem would that be for people? Would we have to\n>> disable to STDOUT as well since it could not be restored? Should we\n>> issue some kind of security warning in such cases? Should we document\n>> this?\n\n> it is pretty common pattern for etl, copy from stdin. I am thinking it can\n> be big problem\n\nGiven how long we've had COPY CSV support, and the tiny number of\ncomplaints to date, I do not think it's something to panic over.\nDisallowing the functionality altogether is surely an overreaction.\n\nI don't really see an argument for calling it a security problem,\ngiven that pg_dump doesn't use CSV and it isn't the default for\nanything else either. Sure, you can imagine some bad actor hoping\nto cause problems by putting crafted data into a table, but how\ndoes that data end up in a script that's using COPY CSV FROM STDIN\n(as opposed to copying out-of-line data)? It's a bit far-fetched.\n\nA documentation warning might be the appropriate response. I don't\nsee any plausible way for psql to actually fix the problem, short\nof a protocol change to allow the backend to tell it how the data\nstream is going to be parsed.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 30 Jan 2019 13:20:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
},
{
"msg_contents": "On Wed, Jan 30, 2019 at 01:20:59PM -0500, Tom Lane wrote:\n> Given how long we've had COPY CSV support, and the tiny number of\n> complaints to date, I do not think it's something to panic over.\n> Disallowing the functionality altogether is surely an overreaction.\n>\n> A documentation warning might be the appropriate response. I don't\n> see any plausible way for psql to actually fix the problem, short\n> of a protocol change to allow the backend to tell it how the data\n> stream is going to be parsed.\n\nYes, agreed. I looked at this problem a couple of months (year(s)?)\nago and gave up on designing a clear portable solution after a couple\nof hours over it, and that's quite a corner case.\n--\nMichael",
"msg_date": "Thu, 31 Jan 2019 09:39:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: backslash-dot quoting in COPY CSV"
}
] |
[
{
"msg_contents": "I was surprised yesterday in a difference between querying domains as scalars versus domains as arrays. As we're all generally aware, when a domain is queried and projected as a scalar in a result set, it is described over-the-wire as that column having the oid of the domain's base type, NOT the oid of the domain itself. This helps out many clients and their applications, but confuses a few who want to use domains as 'tagged types' to register new client-side type mappings against. Changing that behavior seems to be asked every now and then and rejected due to breaking more than it would help. And it can be worked around through making a whole new type sharing much of the config as the base type.\n\nBut when arrays of the domain are returned to the client, the column is described on the wire with the oid of the domain's array type, instead of the oid of the base type's array type. This seems inconsistent to me, even though it can be worked around in SQL by a cast of either the element type when building the array, or casting the resulting array type.\n\nExample SQL:\n\ncreate database test;\n\n\\c test\n\ncreate domain required_text text\n\tcheck (trim(value) = value and length(value) > 0) not null;\n\ncreate table people\n(\n\tname required_text\n);\n\ninsert into people values ('Joe'), ('Mary'), ('Jane');\n\nAnd then client-side interaction using python/psycopg2 (sorry, am ignorant of how to get psql itself to show the protocol-level oids):\n\nimport psycopg2\ncon = psycopg2.connect('dbname=test')\ncur = con.cursor()\n\n# Scalar behaviours first: a query of the domain or the base type returns the base type's oid:\n>>> cur.execute('select name from people')\n>>> cur.description\n(Column(name='name', type_code=25, display_size=None, internal_size=-1, precision=None, scale=None, null_ok=None),)\n>>> cur.execute('select name::text from people')\n>>> cur.description\n(Column(name='name', type_code=25, display_size=None, internal_size=-1, precision=None, 
scale=None, null_ok=None),)\n\nArrays of the base type (forced through explicit cast of either the element or the array):\n>>> cur.execute('select array_agg(name::text) from people')\n>>> cur.description\n(Column(name='array_agg', type_code=1009, display_size=None, internal_size=-1, precision=None, scale=None, null_ok=None),)\n>>> cur.execute('select array_agg(name)::text[] from people')\n>>> cur.description\n(Column(name='array_agg', type_code=1009, display_size=None, internal_size=-1, precision=None, scale=None, null_ok=None),)\n\nArrays of the domain, showing the new array type:\ncur.execute('select array_agg(name) from people')\n>>> cur.description\n(Column(name='array_agg', type_code=2392140, display_size=None, internal_size=-1, precision=None, scale=None, null_ok=None),)\n\nInteresting bits from my pg_type -- 2392140 is indeed the oid of the array type for the domain.\n\ntest=# select oid, typname, typcategory, typelem from pg_type where typname in ( '_text', '_required_text');\n   oid   |    typname     | typcategory | typelem \n---------+----------------+-------------+---------\n    1009 | _text          | A           |      25\n 2392140 | _required_text | A           | 2392141\n\nSo -- do others find this inconsistent, or is it just me and I should work on having psycopg2 be able to learn the type mapping itself if I don't want to do SQL-side casts? I'll argue that if scalar projections erase the domain's oid, then array projections ought to as well.\n\nThanks!\nJames\n\n-----\nJames Robinson\njames@jlr-photo.com\nhttp://jlr-photo.com/",
"msg_date": "Wed, 2 Jan 2019 15:57:33 -0500",
"msg_from": "James Robinson <james@jlr-photo.com>",
"msg_from_op": true,
"msg_subject": "Arrays of domain returned to client as non-builtin oid describing the\n array, not the base array type's oid"
},
{
"msg_contents": "On 2019-Jan-02, James Robinson wrote:\n\n> So -- do others find this inconsistent, or is it just me and I should\n> work on having psycopg2 be able to learn the type mapping itself if I\n> don't want to do SQL-side casts? I'll argue that if scalar projections\n> erase the domain's oid, then array projections ought to as well.\n\nSounds reasonable.\n\nDo you have a link to a previous discussion that rejected changing the\nreturned OID to that of the domain array? I want to know what the argument\nis, other than backwards compatibility.\n\nDisregarding the size/shape of a patch to change this, I wonder what's\nthe cost of the change.  I mean, how many clients are going to be broken\nif we change it?  And by contrast, how many apps are going to work\nbetter with array-on-domain if we change it?\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 16:24:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Arrays of domain returned to client as non-builtin oid\n describing the array, not the base array type's oid"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jan-02, James Robinson wrote:\n>> So -- do others find this inconsistent, or is it just me and I should\n>> work on having psycopg2 be able to learn the type mapping itself if I\n>> don't want to do SQL-side casts? I'll argue that if scalar projections\n>> erase the domain's oid, then array projections ought to as well.\n\n> Sounds reasonable.\n\n> Do you have a link to a previous discussion that rejected changing the\n> returned OID to that of the domain array? I want to know what the argument\n> is, other than backwards compatibility.\n\nTBH I doubt it was ever discussed; I don't recall having thought about\ndoing that while working on c12d570fa.\n\n> Disregarding the size/shape of a patch to change this, I wonder what's\n> the cost of the change.\n\nIt could be kind of expensive. The only way to find out whether an array\nis over a domain type is to drill down to the element type and see. Then\nif it is, we'd have to drill down to the domain base type, after which we\ncould use its typarray field. So that means at least one additional\nsyscache lookup each time we determine which type OID to report.\n\nI think there are also corner cases to worry about, in particular what\nif the base type lacks a typarray entry? This would happen at least\nfor domains over arrays. We don't have arrays of arrays according to\nthe type system, but arrays of domains over arrays allow you to kind\nof fake that. I don't see a way to report a valid description of the\ndata type while still abstracting out the domain in that case.\n\n> I mean, how many clients are going to be broken\n> if we change it?\n\nThis possibility only came in with v11, so probably there are few if any\nuse-cases of arrays-of-domains in the wild yet, and few or no clients\nwith intelligence about it. 
I don't think that backwards compatibility\nwould be a show-stopper argument against changing it, if we could satisfy\nourselves about the above points.\n\nHaving said that: in the end, the business of flattening scalar domains\nwas mainly meant to help simple clients handle simple cases simply.\nI'm not sure that array cases fall into that category at all, so I'm\nnot that excited about adding complexity/cycles for this.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 04 Jan 2019 15:14:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Arrays of domain returned to client as non-builtin oid describing\n the array, not the base array type's oid"
}
] |
[
{
"msg_contents": "Greetings,\n\nHappy new year!\n\nWe would like to follow up again for this issue and fix proposal. Could someone give some suggestions to the fix proposal? Or other ideas to fix this issue?\n\nLooking forward to your feedbacks!\n\n\nBest regards,\n\n--\n\nChengchao Yu\n\nSoftware Engineer | Microsoft | Azure Database for PostgreSQL\n\nhttps://azure.microsoft.com/en-us/services/postgresql/\n\n\nFrom: Chengchao Yu <chengyu@microsoft.com>\nSent: Wednesday, December 19, 2018 2:51 PM\nTo: pgsql-hackers@postgresql.org\nCc: Prabhat Tripathi <ptrip@microsoft.com>; Sunil Kamath <Sunil.Kamath@microsoft.com>; Michal Primke <mprimke@microsoft.com>; Bhavin Gandhi <bhaving@microsoft.com>\nSubject: RE: [PATCH] Fix Proposal - Deadlock Issue in Single User Mode When IO Failure Occurs\n\nGreetings,\n\nJust would like to follow up this issue and fix proposal. We really would like to have this issue fixed in PG. Could someone give some suggestions to the fix proposal? 
Or other ideas to fix this issue?\n\nLooking forward for your feedbacks!\n\n\nBest regards,\n\n--\n\nChengchao Yu\n\nSoftware Engineer | Microsoft | Azure Database for PostgreSQL\n\nhttps://azure.microsoft.com/en-us/services/postgresql/\n\nFrom: Chengchao Yu\nSent: Friday, November 30, 2018 1:00 PM\nTo: 'Pg Hackers' <pgsql-hackers@postgresql.org<mailto:pgsql-hackers@postgresql.org>>\nCc: Prabhat Tripathi <ptrip@microsoft.com<mailto:ptrip@microsoft.com>>; Sunil Kamath <Sunil.Kamath@microsoft.com<mailto:Sunil.Kamath@microsoft.com>>; Michal Primke <mprimke@microsoft.com<mailto:mprimke@microsoft.com>>\nSubject: [PATCH] Fix Proposal - Deadlock Issue in Single User Mode When IO Failure Occurs\n\n\nGreetings,\n\n\n\nRecently, we hit a few occurrences of deadlock when IO failure (including disk full, random remote disk IO failures) happens in single user mode. We found the issue exists on both Linux and Windows in multiple postgres versions.\n\n\n\nHere are the steps to repro on Linux (as Windows repro is similar):\n\n\n1. Get latest PostgreSQL code, build and install the executables.\n\n\n\n$ git clone https://git.postgresql.org/git/postgresql.git\n\n$ cd postgresql\n\n$ PGROOT=$(pwd)\n\n$ git checkout REL_11_STABLE\n\n$ mkdir build\n\n$ cd build\n\n$ ../configure --prefix=/path/to/postgres\n\n$ make && make install\n\n\n2. 
Run initdb to initialize a PG database folder.\n\n\n\n$ /path/to/postgres/bin/initdb -D /path/to/data\n\n\n3. Because the unable to write relation data scenario is difficult to hit naturally even reserved space is turned off, I have prepared a small patch (see attachment \"emulate-error.patch\") to force an error when PG tries to write data to relation files. We can just apply the patch and there is no need to put efforts flooding data to disk any more.\n\n\n\n$ cd $PGROOT\n\n$ git apply /path/to/emulate-error.patch\n\n$ cd build\n\n$ make && make install\n\n\n4. Connect to the newly initialized database cluster with single user mode, create a table, and insert some data to the table, do a checkpoint or directly give EOF. Then we hit the deadlock issue and the process will not exit until we kill it.\n\n\n\nDo a checkpoint explicitly:\n\n\n\n$ /path/to/postgres/bin/postgres --single -D /path/to/data/ postgres -c exit_on_error=true <<EOF\n\n> create table t1(a int);\n\n> insert into t1 values (1), (2), (3);\n\n> checkpoint;\n\n> EOF\n\n\n\nPostgreSQL stand-alone backend 11.1\n\nbackend> backend> backend> 2018-11-29 02:45:27.891 UTC [18806] FATAL: Emulate exception in mdwrite() when writing to disk\n\n2018-11-29 02:55:27.891 UTC [18806] CONTEXT: writing block 8 of relation base/12368/1247\n\n2018-11-29 02:55:27.891 UTC [18806] STATEMENT: checkpoint;\n\n\n\n2018-11-29 02:55:27.900 UTC [18806] FATAL: Emulate exception in mdwrite() when writing to disk\n\n2018-11-29 02:55:27.900 UTC [18806] CONTEXT: writing block 8 of relation base/12368/1247\n\n\n\nOr directly give an EOF:\n\n\n\n$ /path/to/postgres/bin/postgres --single -D /path/to/data/ postgres -c exit_on_error=true <<EOF\n\n> create table t1(a int);\n\n> insert into t1 values (1), (2), (3);\n\n> EOF\n\n\n\nPostgreSQL stand-alone backend 11.1\n\nbackend> backend> backend> 2018-11-29 02:55:24.438 UTC [18149] FATAL: Emulate exception in mdwrite() when writing to disk\n\n2018-11-29 02:45:24.438 UTC [18149] CONTEXT: 
writing block 8 of relation base/12368/1247\n\n\n5. Moreover, when we try to recover the database with single user mode, we hit the issue again, and the process does not bring up the database nor exit.\n\n\n\n$ /path/to/postgres/bin/postgres --single -D /path/to/data/ postgres -c exit_on_error=true\n\n2018-11-29 02:59:33.257 UTC [19058] LOG: database system shutdown was interrupted; last known up at 2018-11-29 02:58:49 UTC\n\n2018-11-29 02:59:33.485 UTC [19058] LOG: database system was not properly shut down; automatic recovery in progress\n\n2018-11-29 02:59:33.500 UTC [19058] LOG: redo starts at 0/1672E40\n\n2018-11-29 02:59:33.500 UTC [19058] LOG: invalid record length at 0/1684B90: wanted 24, got 0\n\n2018-11-29 02:59:33.500 UTC [19058] LOG: redo done at 0/1684B68\n\n2018-11-29 02:59:33.500 UTC [19058] LOG: last completed transaction was at log time 2018-11-29 02:58:49.856663+00\n\n2018-11-29 02:59:33.547 UTC [19058] FATAL: Emulate exception in mdwrite() when writing to disk\n\n2018-11-29 02:59:33.547 UTC [19058] CONTEXT: writing block 8 of relation base/12368/1247\n\n\n\nAnalyses:\n\n\n\nSo, what happened? Actually, there are 2 types of the deadlock due to the same root cause. Let's first take a look at the scenario in step #5. In this scenario, the deadlock happens when disk IO failure occurs inside StartupXLOG(). If we attach debugger to PG process, we will see the process is stuck acquiring the buffer's lw-lock in AbortBufferIO().\n\n\n\nvoid\n\nAbortBufferIO(void)\n\n{\n\n BufferDesc *buf = InProgressBuf;\n\n\n\n if (buf)\n\n {\n\n uint32 buf_state;\n\n\n\n /*\n\n * Since LWLockReleaseAll has already been called, we're not holding\n\n * the buffer's io_in_progress_lock. We have to re-acquire it so that\n\n * we can use TerminateBufferIO. 
Anyone who's executing WaitIO on the\n\n * buffer will be in a busy spin until we succeed in doing this.\n\n */\n\n LWLockAcquire(BufferDescriptorGetIOLock(buf), LW_EXCLUSIVE);\n\n\n\nThis is because the same lock has been acquired before buffer manager attempts to flush the buffer page, which happens in StartBufferIO().\n\n\n\nstatic bool\n\nStartBufferIO(BufferDesc *buf, bool forInput)\n\n{\n\n uint32 buf_state;\n\n\n\n Assert(!InProgressBuf);\n\n\n\n for (;;)\n\n {\n\n /*\n\n * Grab the io_in_progress lock so that other processes can wait for\n\n * me to finish the I/O.\n\n */\n\n LWLockAcquire(BufferDescriptorGetIOLock(buf), LW_EXCLUSIVE);\n\n\n\n buf_state = LockBufHdr(buf);\n\n\n\n if (!(buf_state & BM_IO_IN_PROGRESS))\n\n break;\n\n\n\nAfter reading the code, AtProcExit_Buffers() assumes all the lw-locks are released. However, in single user mode, at the time StartupXLOG() is being executed, there is no before_shmem_exit/on_shmem_exit callback registered to release the lw-locks.\n\nAnd, given lw-lock is non-reentrant, so the process gets stuck re-acquiring the same lock.\n\n\n\nHere is the call stack:\n\n\n\n(gdb) bt\n\n#0 0x00007f0fdb7cb6d6 in futex_abstimed_wait_cancelable (private=128, abstime=0x0, expected=0, futex_word=0x7f0fd14c81b8) at ../sysdeps/unix/sysv/linux/futex-internal.h:205\n\n#1 do_futex_wait (sem=sem@entry=0x7f0fd14c81b8, abstime=0x0) at sem_waitcommon.c:111\n\n#2 0x00007f0fdb7cb7c8 in __new_sem_wait_slow (sem=0x7f0fd14c81b8, abstime=0x0) at sem_waitcommon.c:181\n\n#3 0x00005630d475658a in PGSemaphoreLock (sema=0x7f0fd14c81b8) at pg_sema.c:316\n\n#4 0x00005630d47f689e in LWLockAcquire (lock=0x7f0fd9ae9c00, mode=LW_EXCLUSIVE) at /path/to/postgres/source/build/../src/backend/storage/lmgr/lwlock.c:1243\n\n#5 0x00005630d47cd087 in AbortBufferIO () at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:3988\n\n#6 0x00005630d47cb3f9 in AtProcExit_Buffers (code=1, arg=0) at 
/path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2473\n\n#7 0x00005630d47dbc32 in shmem_exit (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:272\n\n#8 0x00005630d47dba5e in proc_exit_prepare (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:194\n\n#9 0x00005630d47db9c6 in proc_exit (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:107\n\n#10 0x00005630d49811bc in errfinish (dummy=0) at /path/to/postgres/source/build/../src/backend/utils/error/elog.c:541\n\n#11 0x00005630d4801f1f in mdwrite (reln=0x5630d6588c68, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7f0fd1ae9c00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/md.c:843\n\n#12 0x00005630d4804716 in smgrwrite (reln=0x5630d6588c68, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7f0fd1ae9c00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/smgr.c:650\n\n#13 0x00005630d47cb824 in FlushBuffer (buf=0x7f0fd19e9c00, reln=0x5630d6588c68) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2751\n\n#14 0x00005630d47cb219 in SyncOneBuffer (buf_id=0, skip_recently_used=false, wb_context=0x7ffccc371a70) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2394\n\n#15 0x00005630d47cab00 in BufferSync (flags=6) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:1984\n\n#16 0x00005630d47cb57f in CheckPointBuffers (flags=6) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2578\n\n#17 0x00005630d44a685b in CheckPointGuts (checkPointRedo=23612304, flags=6) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:9149\n\n#18 0x00005630d44a62cf in CreateCheckPoint (flags=6) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:8937\n\n#19 0x00005630d44a45e3 in StartupXLOG () at 
/path/to/postgres/source/build/../src/backend/access/transam/xlog.c:7723\n\n#20 0x00005630d4995f88 in InitPostgres (in_dbname=0x5630d65582b0 \"postgres\", dboid=0, username=0x5630d653d7d0 \"chengyu\", useroid=0, out_dbname=0x0, override_allow_connections=false)\n\n at /path/to/postgres/source/build/../src/backend/utils/init/postinit.c:636\n\n#21 0x00005630d480b68b in PostgresMain (argc=7, argv=0x5630d6534d20, dbname=0x5630d65582b0 \"postgres\", username=0x5630d653d7d0 \"chengyu\") at /path/to/postgres/source/build/../src/backend/tcop/postgres.c:3810\n\n#22 0x00005630d4695e8b in main (argc=7, argv=0x5630d6534d20) at /path/to/postgres/source/build/../src/backend/main/main.c:224\n\n\n\n(gdb) p on_shmem_exit_list\n\n$1 = {{function = 0x55801cc68f5f <AnonymousShmemDetach>, arg = 0},\n\n{function = 0x55801cc68a4f <IpcMemoryDelete>, arg = 2490396},\n\n{function = 0x55801cc689f8 <IpcMemoryDetach>, arg = 140602018975744},\n\n{function = 0x55801cc6842e <ReleaseSemaphores>, arg = 0},\n\n{function = 0x55801ccec48a <dsm_postmaster_shutdown>, arg = 140602018975744},\n\n{function = 0x55801cd04053 <ProcKill>, arg = 0},\n\n{function = 0x55801cd0402d <RemoveProcFromArray>, arg = 0},\n\n{function = 0x55801ccf74e8 <CleanupInvalidationState>, arg = 140601991817088},\n\n{function = 0x55801ccf446f <CleanupProcSignalState>, arg = 1},\n\n{function = 0x55801ccdd3e5 <AtProcExit_Buffers>, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}}\n\n(gdb) p before_shmem_exit_list\n\n$2 = {{function = 0x0, arg = 0} <repeats 20 times>}\n\n\n\nThe second type is in Step #4. 
At the time when \"checkpoint\" SQL command is being executed, PG has already set up the before_shmem_exit callback ShutdownPostgres(), which releases all lw-locks given transaction or sub-transaction is on-going. So after the first IO error, the buffer page's lw-lock gets released successfully. However, later ShutdownXLOG() is invoked, and PG tries to flush buffer pages again, which results in the second IO error. Different from the first time, this time, all the previous executed before/on_shmem_exit callbacks are not invoked again due to the decrease of the indexes. So lw-locks for buffer pages are not released when PG tries to get the same buffer lock in AbortBufferIO(), and then PG process gets stuck.\n\n\n\nHere is the call stack:\n\n\n\n(gdb) bt\n\n#0 0x00007ff0c0c036d6 in futex_abstimed_wait_cancelable (private=128, abstime=0x0, expected=0, futex_word=0x7ff0b69001b8) at ../sysdeps/unix/sysv/linux/futex-internal.h:205\n\n#1 do_futex_wait (sem=sem@entry=0x7ff0b69001b8, abstime=0x0) at sem_waitcommon.c:111\n\n#2 0x00007ff0c0c037c8 in __new_sem_wait_slow (sem=0x7ff0b69001b8, abstime=0x0) at sem_waitcommon.c:181\n\n#3 0x0000562077cc258a in PGSemaphoreLock (sema=0x7ff0b69001b8) at pg_sema.c:316\n\n#4 0x0000562077d6289e in LWLockAcquire (lock=0x7ff0bef225c0, mode=LW_EXCLUSIVE) at /path/to/postgres/source/build/../src/backend/storage/lmgr/lwlock.c:1243\n\n#5 0x0000562077d39087 in AbortBufferIO () at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:3988\n\n#6 0x0000562077d373f9 in AtProcExit_Buffers (code=1, arg=0) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2473\n\n#7 0x0000562077d47c32 in shmem_exit (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:272\n\n#8 0x0000562077d47a5e in proc_exit_prepare (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:194\n\n#9 0x0000562077d479c6 in proc_exit (code=1) at 
/path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:107\n\n#10 0x0000562077eed1bc in errfinish (dummy=0) at /path/to/postgres/source/build/../src/backend/utils/error/elog.c:541\n\n#11 0x0000562077d6df1f in mdwrite (reln=0x562078a12a18, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7ff0b6fbdc00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/md.c:843\n\n#12 0x0000562077d70716 in smgrwrite (reln=0x562078a12a18, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7ff0b6fbdc00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/smgr.c:650\n\n#13 0x0000562077d37824 in FlushBuffer (buf=0x7ff0b6e22f80, reln=0x562078a12a18) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2751\n\n#14 0x0000562077d37219 in SyncOneBuffer (buf_id=78, skip_recently_used=false, wb_context=0x7fffb0e3e230) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2394\n\n#15 0x0000562077d36b00 in BufferSync (flags=5) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:1984\n\n#16 0x0000562077d3757f in CheckPointBuffers (flags=5) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2578\n\n#17 0x0000562077a1285b in CheckPointGuts (checkPointRedo=24049152, flags=5) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:9149\n\n#18 0x0000562077a122cf in CreateCheckPoint (flags=5) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:8937\n\n#19 0x0000562077a1164f in ShutdownXLOG (code=1, arg=0) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:8485\n\n#20 0x0000562077d47c32 in shmem_exit (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:272\n\n#21 0x0000562077d47a5e in proc_exit_prepare (code=1) at /path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:194\n\n#22 0x0000562077d479c6 in proc_exit (code=1) at 
/path/to/postgres/source/build/../src/backend/storage/ipc/ipc.c:107\n\n#23 0x0000562077eed1bc in errfinish (dummy=0) at /path/to/postgres/source/build/../src/backend/utils/error/elog.c:541\n\n#24 0x0000562077d6df1f in mdwrite (reln=0x562078a12a18, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7ff0b6fbdc00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/md.c:843\n\n#25 0x0000562077d70716 in smgrwrite (reln=0x562078a12a18, forknum=MAIN_FORKNUM, blocknum=8, buffer=0x7ff0b6fbdc00 \"\", skipFsync=false) at /path/to/postgres/source/build/../src/backend/storage/smgr/smgr.c:650\n\n#26 0x0000562077d37824 in FlushBuffer (buf=0x7ff0b6e22f80, reln=0x562078a12a18) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2751\n\n#27 0x0000562077d37219 in SyncOneBuffer (buf_id=78, skip_recently_used=false, wb_context=0x7fffb0e3fb10) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2394\n\n#28 0x0000562077d36b00 in BufferSync (flags=44) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:1984\n\n#29 0x0000562077d3757f in CheckPointBuffers (flags=44) at /path/to/postgres/source/build/../src/backend/storage/buffer/bufmgr.c:2578\n\n#30 0x0000562077a1285b in CheckPointGuts (checkPointRedo=24049152, flags=44) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:9149\n\n#31 0x0000562077a122cf in CreateCheckPoint (flags=44) at /path/to/postgres/source/build/../src/backend/access/transam/xlog.c:8937\n\n#32 0x0000562077cca792 in RequestCheckpoint (flags=44) at /path/to/postgres/source/build/../src/backend/postmaster/checkpointer.c:976\n\n#33 0x0000562077d7bce4 in standard_ProcessUtility (pstmt=0x562078a00b50, queryString=0x562078a00100 \"checkpoint;\\n\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n\n dest=0x5620783ac5e0 <debugtupDR>, completionTag=0x7fffb0e41520 \"\") at /path/to/postgres/source/build/../src/backend/tcop/utility.c:769\n\n#34 0x0000562077d7b204 
in ProcessUtility (pstmt=0x562078a00b50, queryString=0x562078a00100 \"checkpoint;\\n\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x5620783ac5e0 <debugtupDR>,\n\n completionTag=0x7fffb0e41520 \"\") at /path/to/postgres/source/build/../src/backend/tcop/utility.c:360\n\n#35 0x0000562077d7a347 in PortalRunUtility (portal=0x5620789f20c0, pstmt=0x562078a00b50, isTopLevel=true, setHoldSnapshot=false, dest=0x5620783ac5e0 <debugtupDR>, completionTag=0x7fffb0e41520 \"\")\n\n at /path/to/postgres/source/build/../src/backend/tcop/pquery.c:1178\n\n#36 0x0000562077d7a534 in PortalRunMulti (portal=0x5620789f20c0, isTopLevel=true, setHoldSnapshot=false, dest=0x5620783ac5e0 <debugtupDR>, altdest=0x5620783ac5e0 <debugtupDR>,\n\n completionTag=0x7fffb0e41520 \"\") at /path/to/postgres/source/build/../src/backend/tcop/pquery.c:1324\n\n#37 0x0000562077d79a61 in PortalRun (portal=0x5620789f20c0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x5620783ac5e0 <debugtupDR>, altdest=0x5620783ac5e0 <debugtupDR>,\n\n completionTag=0x7fffb0e41520 \"\") at /path/to/postgres/source/build/../src/backend/tcop/pquery.c:799\n\n#38 0x0000562077d734c5 in exec_simple_query (query_string=0x562078a00100 \"checkpoint;\\n\") at /path/to/postgres/source/build/../src/backend/tcop/postgres.c:1145\n\n#39 0x0000562077d77bd5 in PostgresMain (argc=7, argv=0x562078980d20, dbname=0x5620789a42b0 \"postgres\", username=0x5620789897d0 \"chengyu\") at /path/to/postgres/source/build/../src/backend/tcop/postgres.c:4182\n\n#40 0x0000562077c01e8b in main (argc=7, argv=0x562078980d20) at /path/to/postgres/source/build/../src/backend/main/main.c:224\n\n\n\n(gdb) p on_shmem_exit_list\n\n$9 = {{function = 0x562077cc2f5f <AnonymousShmemDetach>, arg = 0},\n\n{function = 0x562077cc2a4f <IpcMemoryDelete>, arg = 2457627},\n\n{function = 0x562077cc29f8 <IpcMemoryDetach>, arg = 140672005165056},\n\n{function = 0x562077cc242e <ReleaseSemaphores>, arg = 0},\n\n{function = 0x562077d4648a 
<dsm_postmaster_shutdown>, arg = 140672005165056},\n\n{function = 0x562077d5e053 <ProcKill>, arg = 0},\n\n{function = 0x562077d5e02d <RemoveProcFromArray>, arg = 0},\n\n{function = 0x562077d514e8 <CleanupInvalidationState>, arg = 140671978006400},\n\n{function = 0x562077d4e46f <CleanupProcSignalState>, arg = 1},\n\n{function = 0x562077d373e5 <AtProcExit_Buffers>, arg = 0},\n\n{function = 0x562077a1159d <ShutdownXLOG>, arg = 0},\n\n{function = 0x562077cd0637 <pgstat_beshutdown_hook>, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}, {function = 0x0, arg = 0}}\n\n(gdb) p before_shmem_exit_list\n\n$10 = {{function = 0x562077f02caa <ShutdownPostgres>, arg = 0}, {function = 0x0, arg = 0} <repeats 19 times>}\n\n\n\nOK, now we understand the deadlock issue for single user mode. However, will this issue affect multi-user mode (i.e. under postmaster process)? We can have 3 cases for discussion:\n\n\n\n 1. Startup process: at the time StartupXLOG() is invoked, ShutdownAuxiliaryProcess(), which will release all the lw-locks, has been already registered in before_shmem_exit_list[]. So this case is safe.\n 2. Checkpointer process: ShutdownXLOG() is not registered as a before/on_shmem_exit callback, instead, it's only invoked in the main loop. So there is no chance to hit IO error for second time during shared memory exit callbacks. Also, Same as startup process, ShutdownAuxiliaryProcess() has been registered. So this case is also safe.\n 3. Other backend/background processes: these processes do not handle XLOG startup or shutdown, and are protected by ShutdownAuxiliaryProcess(). 
So they are safe to exit too.\n\n\n\nIn addition, we have done multiple experiments to confirm these cases.\n\n\n\nAffected versions: we found this issue in 9.5, 9.6, 10, 11 and 12devel.\n\n\n\nFix proposal:\n\n\n\nAccording to the affected 2 types of deadlock in single user mode discussed above, there might be multiple ways to fix this issue. In the fix proposal we would like to present, we register a new callback to release all the lw-locks (just like what ShutdownAuxiliaryProcess()does) in an order after AtProcExit_Buffers() and before ShutdownXLOG(). Also, it is registered before PG enters StartupXLOG(), so it can cover the case when ShutdownPostgres() has not been registered. Here is the content of the proposal:\n\n\n\ndiff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n\nindex 62baaf0ab3..d74e8aa1d5 100644\n\n--- a/src/backend/utils/init/postinit.c\n\n+++ b/src/backend/utils/init/postinit.c\n\n@@ -71,6 +71,7 @@ static HeapTuple GetDatabaseTupleByOid(Oid dboid);\n\nstatic void PerformAuthentication(Port *port);\n\nstatic void CheckMyDatabase(const char *name, bool am_superuser, bool override_allow_connections);\n\nstatic void InitCommunication(void);\n\n+static void ReleaseLWLocks(int code, Datum arg);\n\nstatic void ShutdownPostgres(int code, Datum arg);\n\nstatic void StatementTimeoutHandler(void);\n\nstatic void LockTimeoutHandler(void);\n\n@@ -653,6 +654,7 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n\n * way, start up the XLOG machinery, and register to have it closed\n\n * down at exit.\n\n */\n\n+ on_shmem_exit(ReleaseLWLocks, 0);\n\n StartupXLOG();\n\n on_shmem_exit(ShutdownXLOG, 0);\n\n }\n\n@@ -1214,6 +1216,23 @@ process_settings(Oid databaseid, Oid roleid)\n\n heap_close(relsetting, AccessShareLock);\n\n}\n\n\n\n+/*\n\n+ * There are 2 types of buffer locks on-holding when AtProcExit_Buffers() is\n\n+ * invoked in a bootstrap process or a standalone backend:\n\n+ * (1) Exceptions thrown 
during StartupXLOG()\n\n+ * (2) Exceptions thrown during exception-handling in ShutdownXLOG()\n\n+ * So we need this on_shmem_exit callback for single user mode.\n\n+ * For processes under postmaster, ShutdownAuxiliaryProcess() will release\n\n+ * the lw-locks and ShutdownXLOG() is not registered as a callback, so there\n\n+ * is no such issue. Also, please note this callback should be registered in\n\n+ * the order after AtProcExit_buffers() and before ShutdownXLOG().\n\n+ */\n\n+static void\n\n+ReleaseLWLocks(int code, Datum arg)\n\n+{\n\n+ LWLockReleaseAll();\n\n+}\n\n+\n\n/*\n\n * Backend-shutdown callback. Do cleanup that we want to be sure happens\n\n * before all the supporting modules begin to nail their doors shut via\n\n\n\nThe fix proposal is also attached to this email in file \"fix-deadlock.patch\".\n\n\n\nPlease let us know should you have suggestions on this issue and the fix.\n\n\n\nThank you!\n\n\n\nBest regards,\n\n--\n\nChengchao Yu\n\nSoftware Engineer | Microsoft | Azure Database for PostgreSQL\n\nhttps://azure.microsoft.com/en-us/services/postgresql/<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fazure.microsoft.com%2Fen-us%2Fservices%2Fpostgresql%2F&data=02%7C01%7Cchengyu%40microsoft.com%7C519f3f8b8d304d8945ba08d666048905%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636808567020774790&sdata=m9CrQkBuw7hFnA1feLtz%2B%2BeQtIm%2FkCvMnoHJ3ARZNtk%3D&reserved=0>",
"msg_date": "Thu, 3 Jan 2019 00:26:09 +0000",
"msg_from": "Chengchao Yu <chengyu@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Fix Proposal - Deadlock Issue in Single User Mode When IO\n Failure Occurs"
}
] |
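The deadlock mechanics described in the thread above — a LIFO list of shared-memory exit callbacks, a non-reentrant lock still held at exit, and a proposed lock-releasing callback registered so that it runs before AtProcExit_Buffers() — can be sketched in plain Python. This is a toy model, not PostgreSQL source; all function names here are illustrative stand-ins for the real C routines.

```python
# Toy model of PostgreSQL's LIFO on_shmem_exit() callback list, showing why a
# lock-release callback must run before AbortBufferIO() tries to re-take the
# io_in_progress lock. Purely illustrative; not real PostgreSQL APIs.

held_locks = set()
exit_callbacks = []            # appended in registration order, run in reverse

def on_shmem_exit(fn):
    exit_callbacks.append(fn)

def lwlock_acquire(lock):
    # LWLocks are non-reentrant: re-acquiring one we already hold blocks forever.
    if lock in held_locks:
        raise RuntimeError("stuck re-acquiring " + lock)
    held_locks.add(lock)

def release_lwlocks():         # models the proposed ReleaseLWLocks() callback
    held_locks.clear()

def at_procexit_buffers():     # models AbortBufferIO() taking io_in_progress
    lwlock_acquire("io_in_progress")

def shmem_exit():
    while exit_callbacks:
        exit_callbacks.pop()()  # LIFO, like shmem_exit()

# Case 1: no release callback -> the single-user-mode deadlock.
held_locks.clear()
on_shmem_exit(at_procexit_buffers)
lwlock_acquire("io_in_progress")   # taken in StartBufferIO(), never released
try:
    shmem_exit()
    case1 = "clean exit"
except RuntimeError:
    case1 = "deadlock"

# Case 2: ReleaseLWLocks registered *after* AtProcExit_Buffers, so LIFO order
# runs it first and AbortBufferIO() can then take the lock cleanly.
held_locks.clear()
on_shmem_exit(at_procexit_buffers)
on_shmem_exit(release_lwlocks)
lwlock_acquire("io_in_progress")
shmem_exit()
case2 = "clean exit"
print(case1, case2)   # deadlock clean exit
```

The model mirrors the ordering constraint in the patch comment: because callbacks execute last-registered-first, registering the release callback between AtProcExit_Buffers() and ShutdownXLOG() means it fires after any late checkpoint failure but before the buffer-cleanup code re-takes the lock.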
[
{
"msg_contents": "I am unable to `make install` on MacOS in the latest master (68a13f28be).\n\nHere are the steps to reproduce.\n\nOS: MacOSX 10.14.2\nBranch: master:HEAD (68a13f28be)\n\n$ git log --pretty=format:'%h' -n 1\n68a13f28be\n$ ./configure --with-bonjour\n$ make\n$ sudo make install\n...\n/usr/bin/install -c -m 644 utils/errcodes.h\n'/usr/local/pgsql/include/server/utils'\n/usr/bin/install -c -m 644 utils/fmgroids.h\n'/usr/local/pgsql/include/server/utils'\n/usr/bin/install -c -m 644 utils/fmgrprotos.h\n'/usr/local/pgsql/include/server/utils'\ncp ./*.h '/usr/local/pgsql/include/server'/\ncp: ./dynloader.h: No such file or directory\nmake[2]: *** [install] Error 1\nmake[1]: *** [install-include-recurse] Error 2\nmake: *** [install-src-recurse] Error 2\n\nFWIW, I've also tried `./configure` without any flags, but that didn't\naffect the results.\n\nI am able to successfully build/install from branch `REL_11_STABLE`\n(ad425aaf06)",
"msg_date": "Thu, 3 Jan 2019 10:47:43 -0500",
"msg_from": "Andrew Alsup <bluesbreaker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unable to `make install` on MacOS in the latest master (68a13f28be)"
},
{
"msg_contents": "On 1/3/19 10:47 AM, Andrew Alsup wrote:\n> cp ./*.h '/usr/local/pgsql/include/server'/\n> cp: ./dynloader.h: No such file or directory\n\nHas dynloader.h somehow ended up as a symbolic link to a file\nno longer present?\n\nPerhaps influenced by commit 842cb9f ?\n\n-Chap\n\n",
"msg_date": "Thu, 3 Jan 2019 10:54:53 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Unable to `make install` on MacOS in the latest master\n (68a13f28be)"
},
{
"msg_contents": "> On 3 Jan 2019, at 16:54, Chapman Flack <chap@anastigmatix.net> wrote:\n> \n> On 1/3/19 10:47 AM, Andrew Alsup wrote:\n>> cp ./*.h '/usr/local/pgsql/include/server'/\n>> cp: ./dynloader.h: No such file or directory\n> \n> Has dynloader.h somehow ended up as a symbolic link to a file\n> no longer present?\n> \n> Perhaps influenced by commit 842cb9f ?\n\nIt is indeed related to that commit. You will need to do make distclean, or\nremove dynloader.h manually.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 3 Jan 2019 16:57:48 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Unable to `make install` on MacOS in the latest master\n (68a13f28be)"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 3 Jan 2019, at 16:54, Chapman Flack <chap@anastigmatix.net> wrote:\n>> Perhaps influenced by commit 842cb9f ?\n\n> It is indeed related to that commit. You will need to do make distclean, or\n> remove dynloader.h manually.\n\nAs a general rule, it's wise to do \"make distclean\" before \"git pull\"\nwhen you're tracking master. This saves a lot of grief when someone\nrearranges the set of generated files, as happened here. (If things\nare really messed up, you might need \"git clean -dfx\" to get rid of\neverything not in git.)\n\nYou might worry that this will greatly increase the rebuild time,\nwhich it will if you don't take precautions. The way to fix that\nis (1) use ccache and (2) set the configure script to use a cache\nfile.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 03 Jan 2019 11:14:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unable to `make install` on MacOS in the latest master\n (68a13f28be)"
},
{
"msg_contents": "> As a general rule, it's wise to do \"make distclean\" before \"git pull\"\n> when you're tracking master. This saves a lot of grief when someone\n> rearranges the set of generated files, as happened here. (If things\n> are really messed up, you might need \"git clean -dfx\" to get rid of\n> everything not in git.)\n>\n> You might worry that this will greatly increase the rebuild time,\n> which it will if you don't take precautions. The way to fix that\n> is (1) use ccache and (2) set the configure script to use a cache\n> file.\n>\n> regards, tom lane\n\nTom and Daniel,\n\nThanks for the help on \"make distclean\". That did the trick. I will be\nmore careful when pulling master. Somehow, I hadn't been hit with this\nbefore, which was just dumb luck. Thanks for helping me out.\n\n-- Andy\n\n",
"msg_date": "Thu, 3 Jan 2019 11:27:34 -0500",
"msg_from": "Andrew Alsup <bluesbreaker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unable to `make install` on MacOS in the latest master\n (68a13f28be)"
},
{
"msg_contents": "On Thu, Jan 03, 2019 at 11:27:34AM -0500, Andrew Alsup wrote:\n> Thanks for the help on \"make distclean\". That did the trick. I will be\n> more careful when pulling master. Somehow, I hadn't been hit with this\n> before, which was just dumb luck. Thanks for helping me out.\n\nA more violent method is, from the top of the tree:\ngit clean -d -x -f\n\nThat's really efficient when using the git repository directly.\n--\nMichael",
"msg_date": "Sat, 5 Jan 2019 10:29:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unable to `make install` on MacOS in the latest master\n (68a13f28be)"
}
] |
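The failure in this thread is a stale generated file (`dynloader.h`, left as a dangling symlink after commit 842cb9f) tripping up the wildcard header-install step. The same failure mode can be reproduced without make at all, in a few lines of Python; the file names below just echo the thread and are otherwise arbitrary.

```python
# Minimal POSIX reproduction (in Python, not make) of the failure above: a
# stale generated file left behind as a dangling symlink makes a wildcard
# copy step fail, and removing the leftover ("make distclean" / "git clean
# -dfx") fixes it.
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
open(os.path.join(src, "errcodes.h"), "w").close()
# dynloader.h used to be generated as a symlink; after the commit its target
# is gone, so an unclean tree is left with a dangling link.
os.symlink(os.path.join(src, "no_longer_generated.h"),
           os.path.join(src, "dynloader.h"))

def install_headers():
    for name in os.listdir(src):          # models the "cp ./*.h" step
        if name.endswith(".h"):
            shutil.copy(os.path.join(src, name), dst)

try:
    install_headers()
    first_try = "ok"
except FileNotFoundError:                 # cp: ./dynloader.h: No such file...
    first_try = "failed"

os.unlink(os.path.join(src, "dynloader.h"))   # the "distclean" step
install_headers()
second_try = "ok"
print(first_try, second_try)
```

This is why `make distclean` (or `git clean -dfx`) before pulling resolves the error: the dangling link is untracked build output, so only cleaning generated files can remove it.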
[
{
"msg_contents": " > Attached 21st version of the patches.\n >\n > I decided to include here patch 0000 with complete jsonpath \nimplementation (it\n > is a squash of all 6 jsonpath-v21 patches). I hope this will simplify \nreviewing\n > and testing in cfbot.cputube.org.\n\nI'd like to help in reviewing this patch. Please let me know if there's \nsomething in particular I should focus on such as documentation, \nfunctionality, or source. If not, I'll probably just proceed in that order.\n\nRegards, Andy Alsup\n\n\n",
"msg_date": "Thu, 3 Jan 2019 12:39:02 -0500",
"msg_from": "Andrew Alsup <bluesbreaker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON: functions"
},
{
"msg_contents": " > Attached patches implementing all SQL/JSON functions excluding \nJSON_TABLE:\n >\n > JSON_OBJECT()\n > JSON_OBJECTAGG()\n > JSON_ARRAY()\n > JSON_ARRAYAGG()\n >\n > JSON_EXISTS()\n > JSON_VALUE()\n > JSON_QUERY()\n\nSorry if this is a stupid question, but is this patch intended to \nimplement any SQL/JSON functions? I'm basing this question on the \noriginal patch post (quoted above). The patch appears to be focused \nexclusively on \"jsonpath expressions\". I only ask because I think I \nmisinterpreted and spent some time wondering if I had missed a patch file.\n\nSo far, the jsonpath docs are very readable; however, I think a few \ncomplete examples (full SELECT statements) of using jsonpath expressions \nwould be helpful to someone new to this technology. Does postgresql \nprovide a sample schema, similar to the Or*cle scott/tiger (emp/dept) \nschema, we could use for examples? Alternatively, we could reference \nsomething mentioned at \nhttps://wiki.postgresql.org/wiki/Sample_Databases. I think it would be \nnice if all the docs referenced the same schema, when possible.\n\nFor tests, would it be helpful to have some tests that \ndemonstrate/assert equality between \"jsonb operators\" and \"jsonpath \nexpressions\"? For example, using the existing regression test data the \nfollowing should assert equality in operator vs. expression:\n\nSELECT\n CASE WHEN jop_count = expr_count THEN 'pass' ELSE 'fail' END\nFROM\n (\n -- jsonb operator\n SELECT count(*)\n FROM testjsonb\n WHERE j->>'abstract' LIKE 'A%'\n ) as jop_count,\n (\n -- jsonpath expression\n SELECT count(*)\n FROM testjsonb\n WHERE j @? '$.abstract ? (@ starts with \"A\")'\n ) as expr_count;\n\n\n",
"msg_date": "Fri, 4 Jan 2019 00:13:37 -0500",
"msg_from": "Andrew Alsup <bluesbreaker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON: functions"
}
] |
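The equivalence the suggested regression test asserts — that `j->>'abstract' LIKE 'A%'` and the jsonpath filter `$.abstract ? (@ starts with "A")` select the same rows — can be checked PostgreSQL-free with a toy evaluator. The evaluator below handles only this one expression shape and the sample rows are invented; it is a sketch of the comparison, not of the patch's jsonpath engine.

```python
# PostgreSQL-free sketch of the proposed operator-vs-jsonpath equality test.
# The tiny evaluator supports exactly one jsonpath shape:
#   $.KEY ? (@ starts with "PREFIX")
import re

def jsonpath_exists(doc, path):
    m = re.fullmatch(r'\$\.(\w+) \? \(@ starts with "([^"]*)"\)', path)
    if m is None:
        raise ValueError("unsupported jsonpath: " + path)
    val = doc.get(m.group(1))
    return isinstance(val, str) and val.startswith(m.group(2))

testjsonb = [                              # stand-in for the regression table
    {"abstract": "A comparison of jsonpath and jsonb operators"},
    {"abstract": "Another abstract starting with A"},
    {"abstract": "B-tree internals"},
    {"title": "row with no abstract key"},
]

# jsonb operator form: j->>'abstract' LIKE 'A%'
jop_count = sum(1 for j in testjsonb
                if isinstance(j.get("abstract"), str)
                and j["abstract"].startswith("A"))

# jsonpath form: j @? '$.abstract ? (@ starts with "A")'
expr_count = sum(1 for j in testjsonb
                 if jsonpath_exists(j, '$.abstract ? (@ starts with "A")'))

verdict = "pass" if jop_count == expr_count else "fail"
print(jop_count, expr_count, verdict)
```

Note the detail the row with no `abstract` key exercises: both forms must agree that a missing key simply fails the filter rather than erroring, which is exactly the kind of corner an equality-style regression test catches.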
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15572\nLogged by: Ash Marath\nEmail address: makmarath@hotmail.com\nPostgreSQL version: 10.5\nOperating system: RDS (on Amazon)\nDescription: \n\nScenario:\r\nDB has 2 functions with same name.\r\nDB: testDB\r\nSchema: test\r\nFunction 1: test.func1(param1 text, param2 text)\r\nFunction 2: test.func1(param1 text)\r\n---------------------------------\r\nIssue: Misleading message reported by \"DROP FUNCTION\" command with the above\nscenario \r\n\r\nStep 1: \r\nRun the command : DROP FUNCTION test.func1;\r\n\r\nNOTE: This operation failed to execute the drop and reported the following\nmessage\r\n\r\nMessage reported by PgAdmin4 & OmniDB:\r\n ---- start of message ------\r\n function name \"test.func1\" is not unique\r\n HINT: Specify the argument list to select the function\nunambiguously.\r\n ---- end of message ------\r\n--------------------------------------------------------------------------------------------------------\r\nStep 2: \r\nRun the command : DROP FUNCTION IF EXISTS test.func1;\r\n\r\nNOTE: This operation completed successfully without error and reported the\nfollowing message\r\n\r\nMessage reported by PgAdmin4 & OmniDB:\r\n ---- start of message ------\r\n function admq.test1() does not exist, skipping\r\n ---- end of message ------\r\n-----------------------------------------------------------------------------------------------------------\r\nProposed solution:\r\nThe operation in Step 2 should have failed with the same error as reported\nin Step 1;\r\n\r\nThanks\r\nAsh Marath\r\nmakmarath@hotmail.com",
"msg_date": "Thu, 03 Jan 2019 20:27:33 +0000",
"msg_from": "=?utf-8?q?PG_Bug_reporting_form?= <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15572: Misleading message reported by \"Drop function operation\"\n on DB with functions having same name"
},
{
"msg_contents": "On Fri, 4 Jan 2019 at 09:44, PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> Operating system: RDS (on Amazon)\n\nYou may want to talk to Amazon about this. However, since the same\nbehaviour exists in PostgreSQL too...\n\n> Run the command : DROP FUNCTION test.func1;\n>\n> NOTE: This operation failed to execute the drop and reported the following\n> message\n>\n> Message reported by PgAdmin4 & OmniDB:\n> ---- start of message ------\n> function name \"test.func1\" is not unique\n> HINT: Specify the argument list to select the function\n> unambiguously.\n> ---- end of message ------\n\n\n> Run the command : DROP FUNCTION IF EXISTS test.func1;\n>\n> NOTE: This operation completed successfully without error and reported the\n> following message\n>\n> Message reported by PgAdmin4 & OmniDB:\n> ---- start of message ------\n> function admq.test1() does not exist, skipping\n> ---- end of message ------\n> -----------------------------------------------------------------------------------------------------------\n> Proposed solution:\n> The operation in Step 2 should have failed with the same error as reported\n> in Step 1;\n\nIt's not really that clear to me that doing that would be any more\ncorrect than the alternative. If we changed the behaviour of this then\nsomeone might equally come along later and complain that they\nspecified \"IF EXISTS\" and got an error. Maintaining the status quo at\nleast has the benefit of not randomly changing the behaviour because\nit didn't suit one particular use case. 
The patch to change the\nbehaviour is pretty trivial and amounts to removing a single line of\ncode:\n\ndiff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c\nindex 4661fc4f62..a9912b0986 100644\n--- a/src/backend/parser/parse_func.c\n+++ b/src/backend/parser/parse_func.c\n@@ -2053,12 +2053,11 @@ LookupFuncName(List *funcname, int nargs, const Oid *argtypes, bool noError)\n {\n    if (clist->next)\n    {\n-       if (!noError)\n-           ereport(ERROR,\n-                   (errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n-                    errmsg(\"function name \\\"%s\\\" is not unique\",\n-                           NameListToString(funcname)),\n-                    errhint(\"Specify the argument list to select the function unambiguously.\")));\n+       ereport(ERROR,\n+               (errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n+                errmsg(\"function name \\\"%s\\\" is not unique\",\n+                       NameListToString(funcname)),\n+                errhint(\"Specify the argument list to select the function unambiguously.\")));\n    }\n    else\n        return clist->oid;\n\nI just don't know if we'll have a better database by removing it.\n\n-- \n David Rowley                   http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 17:45:16 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Thursday, January 3, 2019, David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> If we changed the behaviour of this then\n> someone might equally come along later and complain that they\n> specified \"IF EXISTS\" and got an error.\n>\n\nI’m inclined to argue that the docs say you can only use the omitted-args\nname if it is unique within the schema. Since the second case is using\nthat form in violation of that requirement, reporting an error would match\nthe documentation.\n\nIF EXISTS only applies when no functions exist; an error for ambiguity\ndoesn’t violate its promise; and likely even if we didn’t make it an error\nsomething else will fail later on.\n\nIt is wrong for the drop function if exists command to translate/print the\nomitted-args form of the name into a function with zero arguments; it\nshould not be looking explicitly for a zero-arg function as it is not the\nsame thing (as emphasized in the docs).\n\nSo, I vote for changing this in 12 but leaving prior versions as-is for\ncompatibility, as the harm doesn’t seem to be enough to risk breakage.\nMight be worth a doc patch showing the second case for the back branches\n(Head seems like it would be good as we are fixing the code to match the\ndocumentation, IMO).\n\nDavid J.",
"msg_date": "Thu, 3 Jan 2019 23:10:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "BUG #15572: Misleading message reported by \"Drop function operation\"\n on DB with functions having same name"
},
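The asymmetry the two commands exhibit — a plain DROP erroring with "not unique" while DROP ... IF EXISTS silently reports "skipping" — comes straight from the `noError` branch in LookupFuncName() shown in the patch. A toy model of that lookup (plain Python, not the parse_func.c source; the catalog tuples are invented) makes the divergence easy to see.

```python
# Toy model of LookupFuncName(): with two same-named candidates, the error
# path says "not unique", while the noError path returns "not found" and the
# IF EXISTS caller then reports "does not exist, skipping".
class AmbiguousFunction(Exception):
    pass

def lookup_func_name(catalog, name, no_error=False):
    matches = [oid for cname, args, oid in catalog if cname == name]
    if len(matches) > 1:
        if no_error:
            return None        # current behavior: caller just sees "not found"
        raise AmbiguousFunction('function name "%s" is not unique' % name)
    return matches[0] if matches else None

catalog = [("test.func1", ("text", "text"), 101),
           ("test.func1", ("text",), 102)]

# DROP FUNCTION test.func1;
try:
    lookup_func_name(catalog, "test.func1")
    plain = "dropped"
except AmbiguousFunction:
    plain = "not unique"

# DROP FUNCTION IF EXISTS test.func1;
oid = lookup_func_name(catalog, "test.func1", no_error=True)
if_exists = "skipping" if oid is None else "dropped"
print(plain, if_exists)
```

The one-line change discussed above amounts to raising the ambiguity error even when `no_error` is set, so that both commands report "not unique" rather than the IF EXISTS form misreporting a nonexistent zero-argument function.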
{
"msg_contents": "Your concern about an \"IF EXISTS\" complaint from users is valid as well (it's a possibility):\nThen I would propose:\n1. Either word the return message identically to the drop command message (without the \"IF EXISTS\") & successfully pass the command.\nOR\n2. Fail the execution, since just using the function name without parameters returns ambiguous results for the drop to continue.\nOR\n3. Drop all functions with that function name & successfully pass the command.\n\nWith your comment, the 1st option looks like the better option.\n\n\n\nRegards\nAsh\n\nA Marath.\n\n________________________________\nFrom: David Rowley <david.rowley@2ndquadrant.com>\nSent: Thursday, January 3, 2019 11:45:16 PM\nTo: makmarath@hotmail.com; pgsql-bugs@lists.postgresql.org\nSubject: Re: BUG #15572: Misleading message reported by \"Drop function operation\" on DB with functions having same name\n\nOn Fri, 4 Jan 2019 at 09:44, PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> Operating system: RDS (on Amazon)\n\nYou may want to talk to Amazon about this. 
However, since the same\nbehaviour exists in PostgreSQL too...\n\n> Run the command : DROP FUNCTION test.func1;\n>\n> NOTE: This operation failed to execute the drop and reported the following\n> message\n>\n> Message reported by PgAdmin4 & OmniDB:\n> ---- start of message ------\n> function name \"test.func1\" is not unique\n> HINT: Specify the argument list to select the function\n> unambiguously.\n> ---- end of message ------\n\n\n> Run the command : DROP FUNCTION IF EXISTS test.func1;\n>\n> NOTE: This operation completed successfully without error and reported the\n> following message\n>\n> Message reported by PgAdmin4 & OmniDB:\n> ---- start of message ------\n> function admq.test1() does not exist, skipping\n> ---- end of message ------\n> -----------------------------------------------------------------------------------------------------------\n> Proposed solution:\n> The operation in Step 2 should have failed with the same error as reported\n> in Step 1;\n\nIt's not really that clear to me that doing that would be any more\ncorrect than the alternative. If we changed the behaviour of this then\nsomeone might equally come along later and complain that they\nspecified \"IF EXISTS\" and got an error. Maintaining the status quo at\nleast has the benefit of not randomly changing the behaviour because\nit didn't suit one particular use case. 
The patch to change the\nbehaviour is pretty trivial and amounts to removing a single line of\ncode:\n\ndiff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c\nindex 4661fc4f62..a9912b0986 100644\n--- a/src/backend/parser/parse_func.c\n+++ b/src/backend/parser/parse_func.c\n@@ -2053,12 +2053,11 @@ LookupFuncName(List *funcname, int nargs,\nconst Oid *argtypes, bool noError)\n {\n if (clist->next)\n {\n- if (!noError)\n- ereport(ERROR,\n-\n(errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n-\nerrmsg(\"function name \\\"%s\\\" is not unique\",\n-\n NameListToString(funcname)),\n-\nerrhint(\"Specify the argument list to select the function\nunambiguously.\")));\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n+\nerrmsg(\"function name \\\"%s\\\" is not unique\",\n+\nNameListToString(funcname)),\n+\nerrhint(\"Specify the argument list to select the function\nunambiguously.\")));\n }\n else\n return clist->oid;\n\nI just don't know if we'll have a better database by removing it.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 4 Jan 2019 23:01:51 +0000",
"msg_from": "a Marath <makmarath@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "I second David's suggestion\n\nA Marath.\n\n________________________________\nFrom: David G. Johnston <david.g.johnston@gmail.com>\nSent: Friday, January 4, 2019 1:10:05 AM\nTo: David Rowley\nCc: makmarath@hotmail.com; pgsql-bugs@lists.postgresql.org\nSubject: BUG #15572: Misleading message reported by \"Drop function operation\" on DB with functions having same name\n\nOn Thursday, January 3, 2019, David Rowley <david.rowley@2ndquadrant.com<mailto:david.rowley@2ndquadrant.com>> wrote:\n If we changed the behaviour of this then\nsomeone might equally come along later and complain that they\nspecified \"IF EXISTS\" and got an error.\n\nI’m inclined to argue that the docs say you can only use the omitted-args name if it is unique within the schema. Since the second case is using that form in violation of that requirement reporting an error would match the documentation.\n\nIF EXISTS only applies when no functions exist; an error for ambiguity doesn’t violate its promise; and likely even if we didn’t make it an error something else will fail later on.\n\nIt is wrong for the drop function if exists command to translate/print the omitted-args form of the name into a function with zero arguments; it should not be looking explicitly for a zero-arg function as it is not the same thing (as emphasized in the docs).\n\nSo, I vote for changing this in 12 but leaving prior versions as-is for compatability as the harm doesn’t seem to be enough to risk breakage. Might be worth a doc patch showing the second case for the back branches (Head seems like it would be good as we are fixing the code to match the documentation, IMO).\n\nDavid J.",
"msg_date": "Fri, 4 Jan 2019 23:04:48 +0000",
"msg_from": "a Marath <makmarath@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "I second David J. Suggestion.\n\nTo add to the possible list of solutions I also propose another solution and for better consistency between both the operation\n\nFix the error message reported by the \"drop function without IF Exists\" and make it similar to the \"Drop.. If Exists\".\n\nIf no parameters are passed by user then let the \"DROP FUNCTION\" routine only check for a function of that name which has no parameters => \"func1()\"\n\n\nAsh\n\nA Marath.\n\n________________________________\nFrom: David G. Johnston <david.g.johnston@gmail.com>\nSent: Friday, January 4, 2019 1:10:05 AM\nTo: David Rowley\nCc: makmarath@hotmail.com; pgsql-bugs@lists.postgresql.org\nSubject: BUG #15572: Misleading message reported by \"Drop function operation\" on DB with functions having same name\n\nOn Thursday, January 3, 2019, David Rowley <david.rowley@2ndquadrant.com<mailto:david.rowley@2ndquadrant.com>> wrote:\n If we changed the behaviour of this then\nsomeone might equally come along later and complain that they\nspecified \"IF EXISTS\" and got an error.\n\nI’m inclined to argue that the docs say you can only use the omitted-args name if it is unique within the schema. Since the second case is using that form in violation of that requirement reporting an error would match the documentation.\n\nIF EXISTS only applies when no functions exist; an error for ambiguity doesn’t violate its promise; and likely even if we didn’t make it an error something else will fail later on.\n\nIt is wrong for the drop function if exists command to translate/print the omitted-args form of the name into a function with zero arguments; it should not be looking explicitly for a zero-arg function as it is not the same thing (as emphasized in the docs).\n\nSo, I vote for changing this in 12 but leaving prior versions as-is for compatability as the harm doesn’t seem to be enough to risk breakage. Might be worth a doc patch showing the second case for the back branches (Head seems like it would be good as we are fixing the code to match the documentation, IMO).\n\nDavid J.",
"msg_date": "Mon, 7 Jan 2019 14:18:49 +0000",
"msg_from": "a Marath <makmarath@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On 2019-Jan-04, David Rowley wrote:\n\n> It's not really that clear to me that doing that would be any more\n> correct than the alternative.\n\nI think it would be. Specifying a function without params works only if\nit's unambiguous; if ambiguity is possible, raise an error. On the\nother hand, lack of IF EXISTS is supposed to raise an error if the\nfunction doesn't exist; its presence means not the report that\nparticular error, but it doesn't mean to suppress other errors such as\nthe ambiguity one.\n\nI'm not sure what's a good way to implement this, however. Maybe the\nsolution is to have LookupFuncName return InvalidOid when the function\nname is ambiguous and let LookupFuncWithArgs report the error\nappropriately. I think this behavior is weird:\n\n\t/*\n\t * When looking for a function or routine, we pass noError through to\n\t * LookupFuncName and let it make any error messages. Otherwise, we make\n\t * our own errors for the aggregate and procedure cases.\n\t */\n\toid = LookupFuncName(func->objname, func->args_unspecified ? -1 : argcount, argoids,\n\t\t\t\t\t\t (objtype == OBJECT_FUNCTION || objtype == OBJECT_ROUTINE) ? noError : true);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 7 Jan 2019 11:54:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Tue, 8 Jan 2019 at 03:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I'm not sure what's a good way to implement this, however. Maybe the\n> solution is to have LookupFuncName return InvalidOid when the function\n> name is ambiguous and let LookupFuncWithArgs report the error\n> appropriately. I think this behavior is weird:\n>\n> /*\n> * When looking for a function or routine, we pass noError through to\n> * LookupFuncName and let it make any error messages. Otherwise, we make\n> * our own errors for the aggregate and procedure cases.\n> */\n> oid = LookupFuncName(func->objname, func->args_unspecified ? -1 : argcount, argoids,\n> (objtype == OBJECT_FUNCTION || objtype == OBJECT_ROUTINE) ? noError : true);\n\nWhy can't we just remove the !noError check in the location where the\nerror is raised?\n\nI had a look and I can't see any other callers that pass nargs as -1\nand can pass noError as true. The only place I see is through\nget_object_address() in RemoveObjects(). There's another possible call\nin get_object_address_rv(), but there's only 1 call in the entire\nsource for that function and it passes missing_ok as false.\n\nI ended up with the attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 8 Jan 2019 12:55:54 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Why can't we just remove the !noError check in the location where the\n> error is raised?\n\nI don't like that a bit --- the point of noError is to prevent throwing\nerrors, and it doesn't seem like it should be LookupFuncName's business\nto decide it's smarter than its callers. Maybe we need another flag\nargument?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 08 Jan 2019 19:36:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 9 Jan 2019 at 13:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > Why can't we just remove the !noError check in the location where the\n> > error is raised?\n>\n> I don't like that a bit --- the point of noError is to prevent throwing\n> errors, and it doesn't seem like it should be LookupFuncName's business\n> to decide it's smarter than its callers. Maybe we need another flag\n> argument?\n\nWell, I guess you didn't have backpatching this in mind. The reason I\nthought it was okay to hijack that flag was that the ambiguous error\nwas only raised when the function parameters were not defined. I\nchased around and came to the conclusion this only happened during\nDROP. Maybe that's a big assumption as it certainly might not help\nfuture callers passing nargs as -1.\n\nI've attached another version with a newly added flag.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 16 Jan 2019 12:38:55 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 16 Jan 2019 at 12:38, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've attached another version with a newly added flag.\n\nI've added this to the March commitfest.\nhttps://commitfest.postgresql.org/22/1982/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 5 Feb 2019 12:14:09 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 16 Jan 2019 at 12:38, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've attached another version with a newly added flag.\n\nLooks like I missed updating a call in pltcl.c. Thanks to the\ncommitfest bot for noticing.\n\nUpdated patch attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 7 Feb 2019 01:17:23 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Thu, 7 Feb 2019 at 01:17, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> Updated patch attached.\n\nUpdated patch attached again. This time due to a newly added call to\nLookupFuncName() in 1fb57af92.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 11 Feb 2019 11:05:54 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Updated patch attached again. This time due to a newly added call to\n> LookupFuncName() in 1fb57af92.\n\nHmm ... I'd not looked at this before, but now that I do, the new API\nfor LookupFuncName seems mighty confused, or at least confusingly\ndocumented. It's not clear what the combinations of the flags actually\ndo, or why you'd want to use them.\n\nI wonder whether you'd be better off replacing the two bools with an\nenum, or something like that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 10 Feb 2019 17:39:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Mon, 11 Feb 2019 at 11:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... I'd not looked at this before, but now that I do, the new API\n> for LookupFuncName seems mighty confused, or at least confusingly\n> documented. It's not clear what the combinations of the flags actually\n> do, or why you'd want to use them.\n>\n> I wonder whether you'd be better off replacing the two bools with an\n> enum, or something like that.\n\nOkay. Here's a modified patch with the enum.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 11 Feb 2019 15:36:17 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 03:36:17PM +1300, David Rowley wrote:\n> Okay. Here's a modified patch with the enum.\n\nFWIW, it makes me a bit uneasy to change this function signature in\nback-branches if that's the intention as I suspect that it gets used\nin extensions.. For HEAD that's fine of course.\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 12:09:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "Hello,\n\nOn 11.02.2019 05:36, David Rowley wrote:\n> On Mon, 11 Feb 2019 at 11:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder whether you'd be better off replacing the two bools with an\n>> enum, or something like that.\n> \n> Okay. Here's a modified patch with the enum.\n\nThere is a LookupFuncWithArgs() call within CreateTransform() where \n`bool` is passed still:\n\ntosqlfuncid = LookupFuncWithArgs(OBJECT_FUNCTION, stmt->tosql, *false*);\n\n> I had a look and I can't see any other callers that pass nargs as -1\n> and can pass noError as true. The only place I see is through\n> get_object_address() in RemoveObjects(). There's another possible call\n> in get_object_address_rv(), but there's only 1 call in the entire\n> source for that function and it passes missing_ok as false.\n\nIf nargs as -1 and noError as true can be passed only within \nRemoveObjects() I wonder, could we just end up with a patch which raise \nan error at every ambiguity? That is I mean the following patch:\n\ndiff --git a/src/backend/parser/parse_func.c \nb/src/backend/parser/parse_func.c\nindex 5222231b51..cce8f49f52 100644\n--- a/src/backend/parser/parse_func.c\n+++ b/src/backend/parser/parse_func.c\n@@ -2053,7 +2053,6 @@ LookupFuncName(List *funcname, int nargs, const \nOid *argtypes, bool noError)\n {\n if (clist->next)\n {\n- if (!noError)\n ereport(ERROR,\n (errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n errmsg(\"function name \\\"%s\\\" is not unique\",\n\nBut I may overlook something of course.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Thu, 14 Feb 2019 16:42:35 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Fri, 15 Feb 2019 at 02:42, Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> If nargs as -1 and noError as true can be passed only within\n> RemoveObjects() I wonder, could we just end up with a patch which raise\n> an error at every ambiguity? That is I mean the following patch:\n>\n> diff --git a/src/backend/parser/parse_func.c\n> b/src/backend/parser/parse_func.c\n> index 5222231b51..cce8f49f52 100644\n> --- a/src/backend/parser/parse_func.c\n> +++ b/src/backend/parser/parse_func.c\n> @@ -2053,7 +2053,6 @@ LookupFuncName(List *funcname, int nargs, const\n> Oid *argtypes, bool noError)\n> {\n> if (clist->next)\n> {\n> - if (!noError)\n> ereport(ERROR,\n> (errcode(ERRCODE_AMBIGUOUS_FUNCTION),\n> errmsg(\"function name \\\"%s\\\" is not unique\",\n>\n> But I may overlook something of course.\n\nI had the same thoughts so I did that in the original patch, but see\nTom's comment which starts with \"I don't like that a bit\"\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 11:15:19 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Tue, 12 Feb 2019 at 16:09, Michael Paquier <michael@paquier.xyz> wrote:\n> FWIW, it makes me a bit uneasy to change this function signature in\n> back-branches if that's the intention as I suspect that it gets used\n> in extensions.. For HEAD that's fine of course.\n\nI wondered about this too and questioned Tom about it above. There\nwas no response.\n\nI just assumed Tom didn't think it was worth fiddling with in back-branches.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 11:17:52 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Tue, 12 Feb 2019 at 16:09, Michael Paquier <michael@paquier.xyz> wrote:\n>> FWIW, it makes me a bit uneasy to change this function signature in\n>> back-branches if that's the intention as I suspect that it gets used\n>> in extensions.. For HEAD that's fine of course.\n\n> I wondered about this too and questioned Tom about it above. There\n> was no response.\n\nSorry, I didn't realize you'd asked a question.\n\n> I just assumed Tom didn't think it was worth fiddling with in back-branches.\n\nYeah, exactly. Not only do I not feel a need to change this behavior\nin the back branches, but the original patch is *also* an API change,\nin that it changes the behavior of what appears to be a well-defined\nboolean parameter. The fact that none of the call sites found in\ncore today would care doesn't change that; you'd still be risking\nbreaking extensions, and/or future back-patches.\n\nSo I think targeting this for HEAD only is fine.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 17:31:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 05:31:43PM -0500, Tom Lane wrote:\n> So I think targeting this for HEAD only is fine.\n\nOK, thanks for helping me catching up, Tom and David!\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 08:38:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Tue, 12 Feb 2019 at 16:09, Michael Paquier <michael@paquier.xyz> wrote:\n> >> FWIW, it makes me a bit uneasy to change this function signature in\n> >> back-branches if that's the intention as I suspect that it gets used\n> >> in extensions.. For HEAD that's fine of course.\n>\n> > I wondered about this too and questioned Tom about it above. There\n> > was no response.\n>\n> Sorry, I didn't realize you'd asked a question.\n>\n> > I just assumed Tom didn't think it was worth fiddling with in back-branches.\n>\n> Yeah, exactly. Not only do I not feel a need to change this behavior\n> in the back branches, but the original patch is *also* an API change,\n> in that it changes the behavior of what appears to be a well-defined\n> boolean parameter. The fact that none of the call sites found in\n> core today would care doesn't change that; you'd still be risking\n> breaking extensions, and/or future back-patches.\n\nExtensions calling those functions with old true/false values probably\nwon't get any warning or error during compile. Is is something we\nshould worry about or is it enough to keep the same behavior in this\ncase?\n\n@david: small typo, you removed a space in this chunk\n\n- * LookupFuncName and let it make any error messages. Otherwise, we make\n+ * LookupFuncNameand let it make any error messages. Otherwise, we make\n\n",
"msg_date": "Tue, 19 Feb 2019 17:00:26 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Feb 17, 2019 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, exactly. Not only do I not feel a need to change this behavior\n>> in the back branches, but the original patch is *also* an API change,\n>> in that it changes the behavior of what appears to be a well-defined\n>> boolean parameter. The fact that none of the call sites found in\n>> core today would care doesn't change that; you'd still be risking\n>> breaking extensions, and/or future back-patches.\n\n> Extensions calling those functions with old true/false values probably\n> won't get any warning or error during compile. Is is something we\n> should worry about or is it enough to keep the same behavior in this\n> case?\n\nYeah, I thought about that. We can avoid such problems by assigning\nthe enum values such that 0 and 1 correspond to the old behaviors.\nI didn't look to see if the proposed patch does it like that right\nnow, but it should be an easy fix if not.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 11:45:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> >\n> > Extensions calling those functions with old true/false values probably\n> > won't get any warning or error during compile. Is is something we\n> > should worry about or is it enough to keep the same behavior in this\n> > case?\n>\n> Yeah, I thought about that. We can avoid such problems by assigning\n> the enum values such that 0 and 1 correspond to the old behaviors.\n> I didn't look to see if the proposed patch does it like that right\n> now, but it should be an easy fix if not.\n\nIt does, I was just wondering whether that was a good enough solution.\n\nThinking more about it, I'm not sure if there's a general policy for\nenums, but should we have an AssertArg() in LookupFuncName[WithArgs]\nto check that a correct value was passed?\n\n",
"msg_date": "Tue, 19 Feb 2019 18:48:04 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 05:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> @david: small typo, you removed a space in this chunk\n>\n> - * LookupFuncName and let it make any error messages. Otherwise, we make\n> + * LookupFuncNameand let it make any error messages. Otherwise, we make\n\nThanks. Fixed in the attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 20 Feb 2019 09:01:45 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 06:48, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Feb 19, 2019 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > >\n> > > Extensions calling those functions with old true/false values probably\n> > > won't get any warning or error during compile. Is is something we\n> > > should worry about or is it enough to keep the same behavior in this\n> > > case?\n> >\n> > Yeah, I thought about that. We can avoid such problems by assigning\n> > the enum values such that 0 and 1 correspond to the old behaviors.\n> > I didn't look to see if the proposed patch does it like that right\n> > now, but it should be an easy fix if not.\n>\n> It does, I was just wondering whether that was a good enough solution.\n>\n> Thinking more about it, I'm not sure if there's a general policy for\n> enums, but should we have an AssertArg() in LookupFuncName[WithArgs]\n> to check that a correct value was passed?\n\nI think since the original argument was a bool then it's pretty\nunlikely that such an assert would ever catch anything, given 0 and 1\nare both valid values for this enum type.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 09:03:50 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 9:04 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Wed, 20 Feb 2019 at 06:48, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Tue, Feb 19, 2019 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > >\n> > > > Extensions calling those functions with old true/false values probably\n> > > > won't get any warning or error during compile. Is is something we\n> > > > should worry about or is it enough to keep the same behavior in this\n> > > > case?\n> > >\n> > > Yeah, I thought about that. We can avoid such problems by assigning\n> > > the enum values such that 0 and 1 correspond to the old behaviors.\n> > > I didn't look to see if the proposed patch does it like that right\n> > > now, but it should be an easy fix if not.\n> >\n> > It does, I was just wondering whether that was a good enough solution.\n> >\n> > Thinking more about it, I'm not sure if there's a general policy for\n> > enums, but should we have an AssertArg() in LookupFuncName[WithArgs]\n> > to check that a correct value was passed?\n>\n> I think since the original argument was a bool then it's pretty\n> unlikely that such an assert would ever catch anything, given 0 and 1\n> are both valid values for this enum type.\n\nIndeed. It looks all fine to me in v6, so I'm marking the patch as\nready for committer.\n\nThanks!\n\n",
"msg_date": "Tue, 19 Feb 2019 21:21:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 09:20, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Feb 19, 2019 at 9:04 PM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> > I think since the original argument was a bool then it's pretty\n> > unlikely that such an assert would ever catch anything, given 0 and 1\n> > are both valid values for this enum type.\n>\n> Indeed. It looks all fine to me in v6, so I'm marking the patch as\n> ready for committer.\n\nGreat. Thanks for reviewing it.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 09:57:19 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 09:57:19AM +1300, David Rowley wrote:\n> Great. Thanks for reviewing it.\n\nYou forgot to change a call of LookupFuncWithArgs() in\nCreateTransform().\n\n- address.objectId = LookupFuncWithArgs(objtype, castNode(ObjectWithArgs, object), missing_ok);\n+ address.objectId = LookupFuncWithArgs(objtype, castNode(ObjectWithArgs, object),\n+ missing_ok ? FUNCLOOKUP_ERRIFAMBIGUOUS :\n+ FUNCLOOKUP_NORMAL);\n\nLookupFuncWithArgs() calls itself LookupFuncName(), which may not use\nthe check type provided by the caller.. I think that the existing API\nis already confusing enough, and this patch makes it a bit more\nconfusing by adding an extra error layer handling on top of it.\nWouldn't it be more simple from an error handling point of view to\nmove all the error handling into LookupFuncName() and let the caller\ndecide what kind of function type handling it expects from the start? \nI think that the right call is to add the object type into the\narguments of LookupFuncName().\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 14:56:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 18:56, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 20, 2019 at 09:57:19AM +1300, David Rowley wrote:\n> > Great. Thanks for reviewing it.\n>\n> You forgot to change a call of LookupFuncWithArgs() in\n> CreateTransform().\n\nYikes, Arthur did mention that, but I somehow managed to stumble over\nit when I checked. The attached fixes.\n\n> - address.objectId = LookupFuncWithArgs(objtype, castNode(ObjectWithArgs, object), missing_ok);\n> + address.objectId = LookupFuncWithArgs(objtype, castNode(ObjectWithArgs, object),\n> + missing_ok ? FUNCLOOKUP_ERRIFAMBIGUOUS :\n> + FUNCLOOKUP_NORMAL);\n>\n> LookupFuncWithArgs() calls itself LookupFuncName(), which may not use\n> the check type provided by the caller.. I think that the existing API\n> is already confusing enough, and this patch makes it a bit more\n> confusing by adding an extra error layer handling on top of it.\n> Wouldn't it be more simple from an error handling point of view to\n> move all the error handling into LookupFuncName() and let the caller\n> decide what kind of function type handling it expects from the start?\n> I think that the right call is to add the object type into the\n> arguments of LookupFuncName().\n\nBut there are plenty of callers that use LookupFuncName() directly. Do\nyou happen to know it's okay for all those to error out with the\nadditional error conditions that such a change would move into that\nfunction? I certainly don't know that.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 20 Feb 2019 21:34:15 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Wed, 20 Feb 2019 at 18:56, Michael Paquier <michael@paquier.xyz> wrote:\n>> I think that the right call is to add the object type into the\n>> arguments of LookupFuncName().\n\nI'm not clear how that helps exactly?\n\n> But there are plenty of callers that use LookupFuncName() directly. Do\n> you happen to know it's okay for all those to error out with the\n> additional error conditions that such a change would move into that\n> function? I certainly don't know that.\n\nThe real problem here is that you've unilaterally decided that all callers\nof get_object_address() need a particular behavior --- and not only that,\nbut a behavior that seems fairly surprising and unprincipled, given that\nget_object_address's option is documented as \"missing_ok\" (something the\npatch doesn't even bother to change). It's not very apparent to me why\nfunction-related lookups should start behaving differently from other\nlookups in that function, and it's sure not apparent that all callers of\nget_object_address() are on board with it.\n\nShould we be propagating that 3-way flag further up, to\nget_object_address() callers? I dunno.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 15:36:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Thu, 21 Feb 2019 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Wed, 20 Feb 2019 at 18:56, Michael Paquier <michael@paquier.xyz> wrote:\n> >> I think that the right call is to add the object type into the\n> >> arguments of LookupFuncName().\n>\n> I'm not clear how that helps exactly?\n>\n> > But there are plenty of callers that use LookupFuncName() directly. Do\n> > you happen to know it's okay for all those to error out with the\n> > additional error conditions that such a change would move into that\n> > function? I certainly don't know that.\n>\n> The real problem here is that you've unilaterally decided that all callers\n> of get_object_address() need a particular behavior --- and not only that,\n> but a behavior that seems fairly surprising and unprincipled, given that\n> get_object_address's option is documented as \"missing_ok\" (something the\n> patch doesn't even bother to change). It's not very apparent to me why\n> function-related lookups should start behaving differently from other\n> lookups in that function, and it's sure not apparent that all callers of\n> get_object_address() are on board with it.\n\nI assume you're talking about:\n\n * If the object is not found, an error is thrown, unless missing_ok is\n * true. In this case, no lock is acquired, relp is set to NULL, and the\n * returned address has objectId set to InvalidOid.\n\nWell, I didn't update that comment because the code I've changed does\nnothing different for the missing_ok case. The missing function error\nis still raised or not raised correctly depending on the value of that\nflag.\n\nI understand your original gripe with the patch where I had changed\nthe meaning of noError to mean\n\"noError-Apart-From-If-Its-An-Ambiguous-Function\", without much of any\ndocumentation to mention that fact, but it seems to me that this time\naround you're confusing missing_ok with noError. 
To me noError means\ndon't raise an error, and missing_ok is intended for use with IF [NOT]\nEXISTS... Yes, it might be getting used for something else, but since\nwe still raise an error when the function is missing when the flag is\nset to false and don't when it's set to true, I fail to see why that\nbreaks the contract that's documented in the above comment. If you\nthink it does then please explain why.\n\n> Should we be propagating that 3-way flag further up, to\n> get_object_address() callers? I dunno.\n\nI don't see why that's needed given what's mentioned above.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 21 Feb 2019 14:29:33 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Thu, 21 Feb 2019 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The real problem here is that you've unilaterally decided that all callers\n>> of get_object_address() need a particular behavior --- and not only that,\n>> but a behavior that seems fairly surprising and unprincipled, given that\n>> get_object_address's option is documented as \"missing_ok\" (something the\n>> patch doesn't even bother to change).\n>> ...\n>> Should we be propagating that 3-way flag further up, to\n>> get_object_address() callers? I dunno.\n\n> I don't see why that's needed given what's mentioned above.\n\nWell, if we're not going to propagate the option further up, then I think\nwe might as well do it like you originally suggested, ie leave the\nsignature of LookupFuncName alone and just document that the\nambiguous-function case is not covered by noError. As far as I can tell,\njust about all the other callers besides get_object_address() have no\ninterest in this issue because they're not passing nargs == -1.\nWhat's more, a lot of them look like this example in\nfindRangeSubtypeDiffFunction:\n\n procOid = LookupFuncName(procname, 2, argList, FUNCLOOKUP_NOERROR);\n\n if (!OidIsValid(procOid))\n ereport(ERROR,\n (errcode(ERRCODE_UNDEFINED_FUNCTION),\n errmsg(\"function %s does not exist\",\n func_signature_string(procname, 2, NIL, argList))));\n\nso that if some day in the future FUNCLOOKUP_NOERROR could actually\nsuppress an ambiguous-function error here, the caller would proceed\nto report an incorrect/misleading error message. 
It doesn't seem to\nmake much sense to allow callers to separately suppress or not\nsuppress ambiguous-function unless we also change the return\nconvention so that the callers can tell which case happened.\nAnd that's looking a bit pointless, at least for now.\n\nSo, sorry for making you chase down this dead end, but it wasn't\nobvious until now (to me anyway) that it was a dead end.\n\nI did notice though that the patch fails to cover the same problem\nright next door for procedures:\n\nregression=# create procedure funcp(param1 text) language sql as 'select $1';\nCREATE PROCEDURE\nregression=# create procedure funcp(param1 int) language sql as 'select $1';\nCREATE PROCEDURE\nregression=# drop procedure funcp;\nERROR: could not find a procedure named \"funcp\"\n\nThis should surely be complaining about ambiguity, rather than giving\nthe same error text as if there were zero matches.\n\nPossibly the same occurs for aggregates, though I'm not sure if that\ncode is reachable --- DROP AGGREGATE, at least, won't let you omit the\narguments.\n\nI think the underlying cause of this is that LookupFuncWithArgs is in\nthe same situation I just complained for outside callers: it cannot tell\nwhether its noError request suppressed a not-found or ambiguous-function\ncase. Maybe the way to proceed here is to refactor within parse_func.c\nso that there's an underlying function that returns an indicator of what\nhappened, and both LookupFuncName and LookupFuncWithArgs call it, each\nwith their own ideas about how to phrase the error reports. It's\ncertainly mighty ugly that LookupFuncWithArgs farms out the actual\nerror report in some cases and not others.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 03 Mar 2019 15:14:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Mon, 4 Mar 2019 at 09:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Thu, 21 Feb 2019 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Should we be propagating that 3-way flag further up, to\n> >> get_object_address() callers? I dunno.\n>\n> > I don't see why that's needed given what's mentioned above.\n>\n> Well, if we're not going to propagate the option further up, then I think\n> we might as well do it like you originally suggested, ie leave the\n> signature of LookupFuncName alone and just document that the\n> ambiguous-function case is not covered by noError. As far as I can tell,\n> just about all the other callers besides get_object_address() have no\n> interest in this issue because they're not passing nargs == -1.\n\nOkay.\n\n> I think the underlying cause of this is that LookupFuncWithArgs is in\n> the same situation I just complained for outside callers: it cannot tell\n> whether its noError request suppressed a not-found or ambiguous-function\n> case. Maybe the way to proceed here is to refactor within parse_func.c\n> so that there's an underlying function that returns an indicator of what\n> happened, and both LookupFuncName and LookupFuncWithArgs call it, each\n> with their own ideas about how to phrase the error reports. It's\n> certainly mighty ugly that LookupFuncWithArgs farms out the actual\n> error report in some cases and not others.\n\nI started working on this, but... damage control... 
I don't want to\ntake it too far without you having a glance at it first.\n\nI've invented a new function by the name of LookupFuncNameInternal().\nThis attempts to find the function, but if it can't or the name is\nambiguous then it returns InvalidOid and sets an error code parameter.\nI've made both LookupFuncName and LookupFuncWithArgs use this.\n\nIn my travels, I discovered something else that does not seem very great.\n\npostgres=# create procedure abc(int) as $$ begin end; $$ language plpgsql;\nCREATE PROCEDURE\npostgres=# drop function if exists abc(int);\nNOTICE: function abc(pg_catalog.int4) does not exist, skipping\nDROP FUNCTION\n\nI think it would be better to ERROR in that case. So with the attached\nwe now get:\n\npostgres=# create procedure abc(int) as $$ begin end; $$ language plpgsql;\nCREATE PROCEDURE\npostgres=# drop function if exists abc(int);\nERROR: abc(integer) is not a function\n\nI've also tried to have the error messages mention procedure when the\nobject is a procedure and function when its a function. It looks like\nthe previous code was calling LookupFuncName() with noError=true so it\ncould handle using \"procedure\" in the error messages itself, but it\nfailed to do that for an ambiguous procedure name. That should now be\nfixed.\n\nI also touched the too many function arguments case, but perhaps I\nneed to go further there and do something for aggregates. I've not\nthought too hard about that.\n\nI've not really read the patch back or done any polishing yet. Manual\ntesting done is minimal, and I didn't add tests for the new behaviour\nchange either. I can do more if the feedback is positive.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 11 Mar 2019 04:18:37 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI read the discussion and I think the currently implemented behavior (per the last patch) is correct in all details.\r\n\r\nI propose commenting more strongly on the fact that noError is applied only to the \"not found\" event. In other cases, this flag is ignored and an error is raised immediately there. I think the reason why is not commented well enough.\r\nThis is a significant change - in previous releases, noError really meant no error at all - so it should be commented on more.\r\n\r\nThe regression tests are sufficient.\r\nThe patch applies without problems and compiles without warnings.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 19 Mar 2019 15:30:17 +0000",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\"\n on DB with functions having same name"
},
{
"msg_contents": "Thanks for reviewing this.\n\nOn Wed, 20 Mar 2019 at 04:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I propose maybe more strongly comment fact so noError is applied only on \"not found\" event. In other cases, this flag is ignored and error is raised immediately there. I think so it is not good enough commented why.\n> This is significant change - in previous releases, noError was used like really noError, so should be commented more.\n\nI've made a change to the comments in LookupFuncWithArgs() to make\nthis more clear. I also ended up renaming noError to missing_ok.\nHopefully this backs up the comments and reduces the chances of\nsurprises.\n\n> Regress tests are enough.\n> The patch is possible to apply without problems and compile without warnings\n\nThanks. I also fixed a bug that caused an Assert failure when\nperforming DROP ROUTINE ambiguous_name; test added for that case too.\n\n> The new status of this patch is: Ready for Committer\n\nGreat!\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 21 Mar 2019 00:43:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> [ drop_func_if_not_exists_fix_v9.patch ]\n\nPushed with mostly-cosmetic adjustments.\n\nI noticed a couple of loose ends that are somewhat outside the scope\nof the bug report, but maybe are worth considering now:\n\n1. There's some inconsistency in the wording of the error messages in\nthese routines, eg\n\n errmsg(\"%s is not a function\",\nvs\n errmsg(\"%s is not a procedure\",\nvs\n errmsg(\"function %s is not an aggregate\",\n\nAlso there's\n errmsg(\"function name \\\"%s\\\" is not unique\",\nwhere elsewhere in parse_func.c, we find\n errmsg(\"function %s is not unique\",\n\nYou didn't touch this and I didn't either, but maybe we should try to\nmake these consistent?\n\n2. Consider\n\nregression=# CREATE FUNCTION ambig(int) returns int as $$ select $1; $$ language sql;\nCREATE FUNCTION\nregression=# CREATE PROCEDURE ambig() as $$ begin end; $$ language plpgsql;\nCREATE PROCEDURE\nregression=# DROP PROCEDURE ambig;\nERROR: procedure name \"ambig\" is not unique\nHINT: Specify the argument list to select the procedure unambiguously.\n\nArguably, because I said \"drop procedure\", there's no ambiguity here;\nbut we don't account for objtype while doing the lookup.\n\nI'm inclined to leave point 2 alone, because we haven't had complaints\nabout it, and because I'm not sure we could make it behave in a clean\nway given the historical ambiguity about what OBJECT_FUNCTION should\nmatch. But perhaps it's worth discussing.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Mar 2019 12:04:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
},
{
"msg_contents": "On Fri, 22 Mar 2019 at 05:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pushed with mostly-cosmetic adjustments.\n\nThanks for pushing this.\n\n> I noticed a couple of loose ends that are somewhat outside the scope\n> of the bug report, but maybe are worth considering now:\n>\n> 1. There's some inconsistency in the wording of the error messages in\n> these routines, eg\n>\n> errmsg(\"%s is not a function\",\n> vs\n> errmsg(\"%s is not a procedure\",\n> vs\n> errmsg(\"function %s is not an aggregate\",\n>\n> Also there's\n> errmsg(\"function name \\\"%s\\\" is not unique\",\n> where elsewhere in parse_func.c, we find\n> errmsg(\"function %s is not unique\",\n>\n> You didn't touch this and I didn't either, but maybe we should try to\n> make these consistent?\n\nI think aligning those is a good idea. I had just been wondering to\nmyself last night about how much binary space is taken up by needless\nadditional string constants that could be normalised a bit.\nTranslators might be happy if we did that.\n\n> 2. Consider\n>\n> regression=# CREATE FUNCTION ambig(int) returns int as $$ select $1; $$ language sql;\n> CREATE FUNCTION\n> regression=# CREATE PROCEDURE ambig() as $$ begin end; $$ language plpgsql;\n> CREATE PROCEDURE\n> regression=# DROP PROCEDURE ambig;\n> ERROR: procedure name \"ambig\" is not unique\n> HINT: Specify the argument list to select the procedure unambiguously.\n>\n> Arguably, because I said \"drop procedure\", there's no ambiguity here;\n> but we don't account for objtype while doing the lookup.\n\nYeah. I went with reporting the objtype that was specified in a\ncommand. I stayed well clear of allowing overlapping names between\nprocedures and functions. It would be hard to put that back if we\never discovered a reason we shouldn't have done it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 17:20:07 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15572: Misleading message reported by \"Drop function\n operation\" on DB with functions having same name"
}
] |
[
{
"msg_contents": "Hi All,\n\nI've frequently seen an issue in applications which store titles (eg of books, events, user profiles) where duplicate values are not properly vetted. \n\nThe 'citext' type is helpful here, but I'd be keen to go further. \n\nI propose a 'titletext' type, which has the following properties when compared for equality:\n * Case insensitivity (like 'citext')\n * Only considers characters in [:alnum:] (that is, ignores spaces, punctuation, etc)\n\nThis would be useful for a range of situations where it's important to avoid entering duplicate values.\n\nGiven the discussion at https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com <https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj=fAB0tydvvtK7ibgFEx3tegbPWsGjJpg@mail.gmail.com> I'd lean towards making this type not automatically coerce to text (to avoid surprising behaviour when comparing text to titletext).\n\nIs a suitable patch likely to be accepted?\n\nThanks,\n\nDaniel Heath\n\n\nHi All,I've\n frequently seen an issue in applications which store titles (eg of \nbooks, events, user profiles) where duplicate values are not properly \nvetted. The 'citext' type is helpful here, but I'd be keen to go further. I propose a 'titletext' type, which has the following properties when compared for equality: * Case insensitivity (like 'citext') * Only considers characters in [:alnum:] (that is, ignores spaces, punctuation, etc)This would be useful for a range of situations where it's important to avoid entering duplicate values.Given the discussion at https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com I'd\n lean towards making this type not automatically coerce to text (to \navoid surprising behaviour when comparing text to titletext).Is a suitable patch likely to be accepted?Thanks,Daniel Heath",
"msg_date": "Fri, 4 Jan 2019 09:22:27 +1100",
"msg_from": "Daniel Heath <daniel@heath.cc>",
"msg_from_op": true,
"msg_subject": "Custom text type for title text"
},
{
"msg_contents": "Em qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc> escreveu:\n\n> Hi All,\n>\n> I've frequently seen an issue in applications which store titles (eg of\n> books, events, user profiles) where duplicate values are not properly\n> vetted.\n>\n> The 'citext' type is helpful here, but I'd be keen to go further.\n>\n> I propose a 'titletext' type, which has the following properties when\n> compared for equality:\n> * Case insensitivity (like 'citext')\n> * Only considers characters in [:alnum:] (that is, ignores spaces,\n> punctuation, etc)\n>\n> This would be useful for a range of situations where it's important to\n> avoid entering duplicate values.\n>\n> Given the discussion at\n> https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com\n> <https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj=fAB0tydvvtK7ibgFEx3tegbPWsGjJpg@mail.gmail.com> I'd\n> lean towards making this type not automatically coerce to text (to avoid\n> surprising behaviour when comparing text to titletext).\n>\n> Is a suitable patch likely to be accepted?\n>\n\n> You don’t need to touch the core to do that. Just implement it as an\nextension and share through some channel like pgxn.org.\n\nNote that citext also is an extension and released as a contrib module.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\nEm qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc> escreveu:Hi All,I've\n frequently seen an issue in applications which store titles (eg of \nbooks, events, user profiles) where duplicate values are not properly \nvetted. The 'citext' type is helpful here, but I'd be keen to go further. 
I propose a 'titletext' type, which has the following properties when compared for equality: * Case insensitivity (like 'citext') * Only considers characters in [:alnum:] (that is, ignores spaces, punctuation, etc)This would be useful for a range of situations where it's important to avoid entering duplicate values.Given the discussion at https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com I'd\n lean towards making this type not automatically coerce to text (to \navoid surprising behaviour when comparing text to titletext).Is a suitable patch likely to be accepted?You don’t need to touch the core to do that. Just implement it as an extension and share through some channel like pgxn.org.Note that citext also is an extension and released as a contrib module.Regards,-- Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 3 Jan 2019 20:47:27 -0200",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom text type for title text"
},
{
"msg_contents": "Would this also be appropriate for inclusion as contrib? I'm unfamiliar\nwith the policy for what is / is not included there.\nThanks,\nDaniel Heath\n\n\nOn Fri, Jan 4, 2019, at 9:47 AM, Fabrízio de Royes Mello wrote:\n> \n> \n> Em qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc>\n> escreveu:>> Hi All,\n>> \n>> I've frequently seen an issue in applications which store titles (eg\n>> of books, events, user profiles) where duplicate values are not\n>> properly vetted.>> \n>> The 'citext' type is helpful here, but I'd be keen to go further. \n>> \n>> I propose a 'titletext' type, which has the following properties when\n>> compared for equality:>> * Case insensitivity (like 'citext')\n>> * Only considers characters in [:alnum:] (that is, ignores spaces,\n>> punctuation, etc)>> \n>> This would be useful for a range of situations where it's important\n>> to avoid entering duplicate values.>> \n>> Given the discussion at\n>> https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com[1]\n>> I'd lean towards making this type not automatically coerce to text\n>> (to avoid surprising behaviour when comparing text to titletext).>> \n>> Is a suitable patch likely to be accepted?\n>> \n> You don’t need touch the core to do that. Just implement it as an\n> extension and share throught some channel like pgxn.org.> \n> Note that citext also is an extension and released as a contrib\n> module.> \n> Regards,\n> \n> -- \n> Fabrízio de Royes Mello Timbira -\n> http://www.timbira.com.br/> PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e\n> Treinamento\n\nLinks:\n\n 1. https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj=fAB0tydvvtK7ibgFEx3tegbPWsGjJpg@mail.gmail.com\n\n\n\n\n\n\nWould this also be appropriate for inclusion as contrib? 
I'm unfamiliar with the policy for what is / is not included there.\n\nThanks,\nDaniel Heath\n\n\n\nOn Fri, Jan 4, 2019, at 9:47 AM, Fabrízio de Royes Mello wrote:\n\n\nEm qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc> escreveu:\nHi All,\n\nI've\n frequently seen an issue in applications which store titles (eg of \nbooks, events, user profiles) where duplicate values are not properly \nvetted. \n\nThe 'citext' type is helpful here, but I'd be keen to go further. \n\nI propose a 'titletext' type, which has the following properties when compared for equality:\n * Case insensitivity (like 'citext')\n * Only considers characters in [:alnum:] (that is, ignores spaces, punctuation, etc)\n\nThis would be useful for a range of situations where it's important to avoid entering duplicate values.\n\nGiven the discussion at https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com I'd\n lean towards making this type not automatically coerce to text (to \navoid surprising behaviour when comparing text to titletext).\n\nIs a suitable patch likely to be accepted?\n\n\n\n\n\nYou don’t need touch the core to do that. Just implement it as an extension and share throught some channel like pgxn.org.\n\nNote that citext also is an extension and released as a contrib module.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Fri, 04 Jan 2019 09:53:57 +1100",
"msg_from": "Daniel Heath <daniel@heath.cc>",
"msg_from_op": true,
"msg_subject": "Re: Custom text type for title text"
},
{
"msg_contents": "Em qui, 3 de jan de 2019 às 20:53, Daniel Heath <daniel@heath.cc> escreveu:\n\n> Would this also be appropriate for inclusion as contrib? I'm unfamiliar\n> with the policy for what is / is not included there.\n>\n>\nPlease do not top post.\n\nAt first I recommend you implement it as an extension (using gitlab,\ngithub, bitbucket or something else) and after you have a stable working\ncode maybe you should try to send it as a contrib module and then the\ncommunity will decide to accept it or not.\n\nPostgreSQL is extensible enough to you provide this piece of work without\ncare with the community decisions. What I mean is you necessarily don’t\nneed to send it as a contrib module, just maintain it as a separate\nextension project.\n\nRegards,\n\n\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\nEm qui, 3 de jan de 2019 às 20:53, Daniel Heath <daniel@heath.cc> escreveu:\nWould this also be appropriate for inclusion as contrib? I'm unfamiliar with the policy for what is / is not included there.\n\nPlease do not top post.At first I recommend you implement it as an extension (using gitlab, github, bitbucket or something else) and after you have a stable working code maybe you should try to send it as a contrib module and then the community will decide to accept it or not.PostgreSQL is extensible enough to you provide this piece of work without care with the community decisions. What I mean is you necessarily don’t need to send it as a contrib module, just maintain it as a separate extension project.Regards,-- Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 3 Jan 2019 21:52:57 -0200",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom text type for title text"
},
{
"msg_contents": "On 03/01/2019 23:22, Daniel Heath wrote:\n> I propose a 'titletext' type, which has the following properties when\n> compared for equality:\n> �* Case insensitivity (like 'citext')\n> �* Only considers characters in [:alnum:] (that is, ignores spaces,\n> punctuation, etc)\n\nMy work on insensitive/non-deterministic collations[0] might cover this.\n\n[0]:\nhttps://www.postgresql.org/message-id/flat/1ccc668f-4cbc-0bef-af67-450b47cdfee7%402ndquadrant.com\n\nFor example:\n\nCREATE COLLATION yournamehere (provider = icu,\n locale = 'und-u-ks-level2-ka-shifted', deterministic = false);\n\n(Roughly, ks-level2 means ignore case, ka-shifted means ignore punctuation.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 00:58:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom text type for title text"
},
{
"msg_contents": "Policy on contrib has shifted over time. But generally we want to encourage\na lively ecosystem of extensions maintained outside of the Postgres source\ntree so we avoid adding things to contrib when there's no particular\nadvantage.\n\nThe most common reason things are added to contrib is when the extension is\nclosely tied to internals and needs to be maintained along with changes to\ninternals. Modules like that are hard to maintain separately. But modules\nthat use documented general extensibility APIs should be able to be stable\nacross versions and live outside contrib.\n\nOn Thu 3 Jan 2019, 23:54 Daniel Heath <daniel@heath.cc wrote:\n\n> Would this also be appropriate for inclusion as contrib? I'm unfamiliar\n> with the policy for what is / is not included there.\n>\n> Thanks,\n> Daniel Heath\n>\n>\n> On Fri, Jan 4, 2019, at 9:47 AM, Fabrízio de Royes Mello wrote:\n>\n>\n>\n> Em qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc>\n> escreveu:\n>\n> Hi All,\n>\n> I've frequently seen an issue in applications which store titles (eg of\n> books, events, user profiles) where duplicate values are not properly\n> vetted.\n>\n> The 'citext' type is helpful here, but I'd be keen to go further.\n>\n> I propose a 'titletext' type, which has the following properties when\n> compared for equality:\n> * Case insensitivity (like 'citext')\n> * Only considers characters in [:alnum:] (that is, ignores spaces,\n> punctuation, etc)\n>\n> This would be useful for a range of situations where it's important to\n> avoid entering duplicate values.\n>\n> Given the discussion at\n> https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com\n> <https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj=fAB0tydvvtK7ibgFEx3tegbPWsGjJpg@mail.gmail.com> I'd\n> lean towards making this type not automatically coerce to text (to avoid\n> surprising behaviour when comparing text to titletext).\n>\n> Is a suitable 
patch likely to be accepted?\n>\n>\n> You don’t need touch the core to do that. Just implement it as an\n> extension and share throught some channel like pgxn.org.\n>\n> Note that citext also is an extension and released as a contrib module.\n>\n> Regards,\n>\n> --\n> Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n> PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n>\n>\n>\n\nPolicy on contrib has shifted over time. But generally we want to encourage a lively ecosystem of extensions maintained outside of the Postgres source tree so we avoid adding things to contrib when there's no particular advantage. The most common reason things are added to contrib is when the extension is closely tied to internals and needs to be maintained along with changes to internals. Modules like that are hard to maintain separately. But modules that use documented general extensibility APIs should be able to be stable across versions and live outside contrib.On Thu 3 Jan 2019, 23:54 Daniel Heath <daniel@heath.cc wrote:\nWould this also be appropriate for inclusion as contrib? I'm unfamiliar with the policy for what is / is not included there.\n\nThanks,\nDaniel Heath\n\n\n\nOn Fri, Jan 4, 2019, at 9:47 AM, Fabrízio de Royes Mello wrote:\n\n\nEm qui, 3 de jan de 2019 às 20:22, Daniel Heath <daniel@heath.cc> escreveu:\nHi All,\n\nI've\n frequently seen an issue in applications which store titles (eg of \nbooks, events, user profiles) where duplicate values are not properly \nvetted. \n\nThe 'citext' type is helpful here, but I'd be keen to go further. 
\n\nI propose a 'titletext' type, which has the following properties when compared for equality:\n * Case insensitivity (like 'citext')\n * Only considers characters in [:alnum:] (that is, ignores spaces, punctuation, etc)\n\nThis would be useful for a range of situations where it's important to avoid entering duplicate values.\n\nGiven the discussion at https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com I'd\n lean towards making this type not automatically coerce to text (to \navoid surprising behaviour when comparing text to titletext).\n\nIs a suitable patch likely to be accepted?\n\n\n\n\n\nYou don’t need touch the core to do that. Just implement it as an extension and share throught some channel like pgxn.org.\n\nNote that citext also is an extension and released as a contrib module.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Fri, 4 Jan 2019 14:30:37 +0100",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Custom text type for title text"
}
] |
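Two concrete alternatives come out of the titletext thread above, and both can be sketched in plain SQL. This is a hedged sketch: the `books`/`books2` tables and column names are hypothetical, and the `CREATE COLLATION` variant assumes the non-deterministic-collation feature exactly as Peter describes it, which was still a work in progress at the time of this thread.

```sql
-- Option 1 (works on stock PostgreSQL, no new type needed): enforce
-- "title equality" with a unique index over a normalized key
-- (lower-cased, non-alphanumeric characters stripped).
CREATE TABLE books (id serial PRIMARY KEY, title text);

CREATE UNIQUE INDEX books_title_norm_idx
    ON books ((lower(regexp_replace(title, '[^[:alnum:]]', '', 'g'))));

INSERT INTO books (title) VALUES ('The Hobbit!');
INSERT INTO books (title) VALUES ('the hobbit');  -- unique violation

-- Option 2 (assuming Peter's proposed syntax): a case- and
-- punctuation-insensitive ICU collation; ks-level2 ignores case,
-- ka-shifted ignores punctuation.
CREATE COLLATION title_insensitive (provider = icu,
    locale = 'und-u-ks-level2-ka-shifted', deterministic = false);

CREATE TABLE books2 (title text COLLATE title_insensitive UNIQUE);
```

Option 1 bakes one fixed normalization into the index, while Option 2 keeps the comparison semantics on the column itself; which fits better depends on whether queries also need insensitive `=` comparisons, not just the uniqueness constraint.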
[
{
"msg_contents": "Hi,\n\nI've noticed a change in the behaviour in triggers / hstores in Postgres\n11.1 when compared to Postgres 10.5.\nThe following won't work on Postgres 10.5 but in Postgres 11.1 it works\njust fine:\n\nCREATE EXTENSION hstore;\n\nCREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);\nCREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);\n\nCREATE FUNCTION test1_trigger ()\nRETURNS TRIGGER\nLANGUAGE plpgsql\nAS\n$BODY$\nBEGIN\nINSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) -\nhstore(NEW));\nRETURN NEW;\nEND\n$BODY$;\n\nCREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1\nFOR EACH ROW EXECUTE PROCEDURE test1_trigger();\n\nINSERT INTO _tmp_test1 (val) VALUES (5);\nERROR: record \"old\" is not assigned yet\nDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\nCONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes)\nVALUES (NEW.id, hstore(OLD) - hstore(NEW))\"\nPL/pgSQL function test1_trigger() line 3 at SQL statement\n\nI couldn't find anything about this in the release notes (\nhttps://www.postgresql.org/docs/11/release-11.html), but maybe I just\ndidn't know what to look for.\n\nHi,I've noticed a change in the behaviour in triggers / hstores in Postgres 11.1 when compared to Postgres 10.5.The following won't work on Postgres 10.5 but in Postgres 11.1 it works just fine:CREATE EXTENSION hstore;CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);CREATE FUNCTION test1_trigger () RETURNS TRIGGER LANGUAGE plpgsql AS$BODY$BEGIN INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW)); RETURN NEW;END$BODY$;CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1 FOR EACH ROW EXECUTE PROCEDURE test1_trigger();INSERT INTO _tmp_test1 (val) VALUES (5);ERROR: record \"old\" is not assigned yetDETAIL: The tuple structure of a not-yet-assigned record is 
indeterminate.CONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"PL/pgSQL function test1_trigger() line 3 at SQL statementI couldn't find anything about this in the release notes (https://www.postgresql.org/docs/11/release-11.html), but maybe I just didn't know what to look for.",
"msg_date": "Fri, 4 Jan 2019 12:45:42 +0200",
"msg_from": "Kristjan Tammekivi <kristjantammekivi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Potentially undocumented behaviour change in Postgres 11 concerning\n OLD record in an after insert trigger"
},
{
"msg_contents": "Hello\n\n \n\nFrom: Kristjan Tammekivi <kristjantammekivi@gmail.com> \nSent: Freitag, 4. Januar 2019 11:46\nTo: pgsql-general@postgresql.org\nSubject: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger\n\n \n\nHi,\n\n \n\nI've noticed a change in the behaviour in triggers / hstores in Postgres 11.1 when compared to Postgres 10.5.\n\nThe following won't work on Postgres 10.5 but in Postgres 11.1 it works just fine:\n\n \n\nCREATE EXTENSION hstore;\n\nCREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);\nCREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);\n\nCREATE FUNCTION test1_trigger ()\nRETURNS TRIGGER\nLANGUAGE plpgsql\nAS\n$BODY$\nBEGIN\nINSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW));\nRETURN NEW;\nEND\n$BODY$;\n\nCREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1\nFOR EACH ROW EXECUTE PROCEDURE test1_trigger();\n\n \n\nINSERT INTO _tmp_test1 (val) VALUES (5);\n\nERROR: record \"old\" is not assigned yet\n\nDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\n\nCONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"\n\nPL/pgSQL function test1_trigger() line 3 at SQL statement\n\n \n\nI couldn't find anything about this in the release notes (https://www.postgresql.org/docs/11/release-11.html), but maybe I just didn't know what to look for.\n\n \n\nI doubt that this works on any PG version for INSERT.\n\n \n\nAccording to the documentation:\n\n \n\nhttps://www.postgresql.org/docs/10/plpgsql-trigger.html and https://www.postgresql.org/docs/11/plpgsql-trigger.html\n\n \n\nOLD: Data type RECORD; variable holding the old database row for UPDATE/DELETE operations in row-level triggers. 
This variable is unassigned in statement-level triggers and for INSERT operations.\n\n \n\nRegards\n\nCharles\n\n\nHello From: Kristjan Tammekivi <kristjantammekivi@gmail.com> Sent: Freitag, 4. Januar 2019 11:46To: pgsql-general@postgresql.orgSubject: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger Hi, I've noticed a change in the behaviour in triggers / hstores in Postgres 11.1 when compared to Postgres 10.5.The following won't work on Postgres 10.5 but in Postgres 11.1 it works just fine: CREATE EXTENSION hstore;CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);CREATE FUNCTION test1_trigger ()RETURNS TRIGGERLANGUAGE plpgsqlAS$BODY$BEGININSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW));RETURN NEW;END$BODY$;CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1FOR EACH ROW EXECUTE PROCEDURE test1_trigger(); INSERT INTO _tmp_test1 (val) VALUES (5);ERROR: record \"old\" is not assigned yetDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.CONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"PL/pgSQL function test1_trigger() line 3 at SQL statement I couldn't find anything about this in the release notes (https://www.postgresql.org/docs/11/release-11.html), but maybe I just didn't know what to look for. I doubt that this works on any PG version for INSERT. According to the documentation: https://www.postgresql.org/docs/10/plpgsql-trigger.html and https://www.postgresql.org/docs/11/plpgsql-trigger.html OLD: Data type RECORD; variable holding the old database row for UPDATE/DELETE operations in row-level triggers. This variable is unassigned in statement-level triggers and for INSERT operations. RegardsCharles",
"msg_date": "Fri, 4 Jan 2019 11:56:22 +0100",
"msg_from": "\"Charles Clavadetscher\" <clavadetscher@swisspug.org>",
"msg_from_op": false,
"msg_subject": "RE: Potentially undocumented behaviour change in Postgres 11\n concerning OLD record in an after insert trigger"
},
{
"msg_contents": "Hi,\nI've read the documentation, that's why I said this might be undocumented.\nTry the SQL in Postgres 11 and see that it works for yourself.\nI have an analogous trigger in production from yesterday and I've tested it\nin local environment as well.\n\nOn Fri, Jan 4, 2019 at 12:56 PM Charles Clavadetscher <\nclavadetscher@swisspug.org> wrote:\n\n> Hello\n>\n>\n>\n> *From:* Kristjan Tammekivi <kristjantammekivi@gmail.com>\n> *Sent:* Freitag, 4. Januar 2019 11:46\n> *To:* pgsql-general@postgresql.org\n> *Subject:* Potentially undocumented behaviour change in Postgres 11\n> concerning OLD record in an after insert trigger\n>\n>\n>\n> Hi,\n>\n>\n>\n> I've noticed a change in the behaviour in triggers / hstores in Postgres\n> 11.1 when compared to Postgres 10.5.\n>\n> The following won't work on Postgres 10.5 but in Postgres 11.1 it works\n> just fine:\n>\n>\n>\n> CREATE EXTENSION hstore;\n>\n> CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);\n> CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);\n>\n> CREATE FUNCTION test1_trigger ()\n> RETURNS TRIGGER\n> LANGUAGE plpgsql\n> AS\n> $BODY$\n> BEGIN\n> INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) -\n> hstore(NEW));\n> RETURN NEW;\n> END\n> $BODY$;\n>\n> CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1\n> FOR EACH ROW EXECUTE PROCEDURE test1_trigger();\n>\n>\n>\n> INSERT INTO _tmp_test1 (val) VALUES (5);\n>\n> ERROR: record \"old\" is not assigned yet\n>\n> DETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\n>\n> CONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes)\n> VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"\n>\n> PL/pgSQL function test1_trigger() line 3 at SQL statement\n>\n>\n>\n> I couldn't find anything about this in the release notes (\n> https://www.postgresql.org/docs/11/release-11.html), but maybe I just\n> didn't know what to look for.\n>\n>\n>\n> *I doubt that this works on any PG 
version for INSERT.*\n>\n>\n>\n> *According to the documentation:*\n>\n>\n>\n> *https://www.postgresql.org/docs/10/plpgsql-trigger.html\n> <https://www.postgresql.org/docs/10/plpgsql-trigger.html> and\n> https://www.postgresql.org/docs/11/plpgsql-trigger.html\n> <https://www.postgresql.org/docs/11/plpgsql-trigger.html>*\n>\n>\n>\n> *OLD: **Data type **RECORD**; variable holding the old database row for *\n> *UPDATE**/**DELETE** operations in row-level triggers. This variable is\n> unassigned in statement-level triggers and for **INSERT** operations.*\n>\n>\n>\n> *Regards*\n>\n> *Charles*\n>\n\nHi,I've read the documentation, that's why I said this might be undocumented. Try the SQL in Postgres 11 and see that it works for yourself.I have an analogous trigger in production from yesterday and I've tested it in local environment as well.On Fri, Jan 4, 2019 at 12:56 PM Charles Clavadetscher <clavadetscher@swisspug.org> wrote:Hello From: Kristjan Tammekivi <kristjantammekivi@gmail.com> Sent: Freitag, 4. 
Januar 2019 11:46To: pgsql-general@postgresql.orgSubject: Potentially undocumented behaviour change in Postgres 11 concerning OLD record in an after insert trigger Hi, I've noticed a change in the behaviour in triggers / hstores in Postgres 11.1 when compared to Postgres 10.5.The following won't work on Postgres 10.5 but in Postgres 11.1 it works just fine: CREATE EXTENSION hstore;CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);CREATE FUNCTION test1_trigger ()RETURNS TRIGGERLANGUAGE plpgsqlAS$BODY$BEGININSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW));RETURN NEW;END$BODY$;CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1FOR EACH ROW EXECUTE PROCEDURE test1_trigger(); INSERT INTO _tmp_test1 (val) VALUES (5);ERROR: record \"old\" is not assigned yetDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.CONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"PL/pgSQL function test1_trigger() line 3 at SQL statement I couldn't find anything about this in the release notes (https://www.postgresql.org/docs/11/release-11.html), but maybe I just didn't know what to look for. I doubt that this works on any PG version for INSERT. According to the documentation: https://www.postgresql.org/docs/10/plpgsql-trigger.html and https://www.postgresql.org/docs/11/plpgsql-trigger.html OLD: Data type RECORD; variable holding the old database row for UPDATE/DELETE operations in row-level triggers. This variable is unassigned in statement-level triggers and for INSERT operations. RegardsCharles",
"msg_date": "Fri, 4 Jan 2019 14:20:55 +0200",
"msg_from": "Kristjan Tammekivi <kristjantammekivi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Potentially undocumented behaviour change in Postgres 11\n concerning OLD record in an after insert trigger"
},
{
"msg_contents": "On 1/4/19 4:20 AM, Kristjan Tammekivi wrote:\n> Hi,\n> I've read the documentation, that's why I said this might be \n> undocumented. Try the SQL in Postgres 11 and see that it works for yourself.\n> I have an analogous trigger in production from yesterday and I've tested \n> it in local environment as well.\n\nI can confirm:\n\nselect version();\n version \n\n------------------------------------------------------------------------------------\n PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) \n4.8.5, 64-bit\n\n\nINSERT INTO _tmp_test1 (val) VALUES (5);\nINSERT 0 1\n\nselect * from _tmp_test1_changes ;\n id | changes\n----+-------------------------\n 1 | \"id\"=>NULL, \"val\"=>NULL\n(1 row)\n\nI would file a bug report:\n\nhttps://www.postgresql.org/account/submitbug/\n\n> \n> On Fri, Jan 4, 2019 at 12:56 PM Charles Clavadetscher \n> <clavadetscher@swisspug.org <mailto:clavadetscher@swisspug.org>> wrote:\n> \n> Hello____\n> \n> __ __\n> \n> *From:*Kristjan Tammekivi <kristjantammekivi@gmail.com\n> <mailto:kristjantammekivi@gmail.com>>\n> *Sent:* Freitag, 4. 
Januar 2019 11:46\n> *To:* pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org>\n> *Subject:* Potentially undocumented behaviour change in Postgres 11\n> concerning OLD record in an after insert trigger____\n> \n> __ __\n> \n> Hi,____\n> \n> __ __\n> \n> I've noticed a change in the behaviour in triggers / hstores in\n> Postgres 11.1 when compared to Postgres 10.5.____\n> \n> The following won't work on Postgres 10.5 but in Postgres 11.1 it\n> works just fine:____\n> \n> __ __\n> \n> CREATE EXTENSION hstore;\n> \n> CREATE TABLE _tmp_test1 (id serial PRIMARY KEY, val INTEGER);\n> CREATE TABLE _tmp_test1_changes (id INTEGER, changes HSTORE);\n> \n> CREATE FUNCTION test1_trigger ()\n> RETURNS TRIGGER\n> LANGUAGE plpgsql\n> AS\n> $BODY$\n> BEGIN\n> INSERT INTO _tmp_test1_changes (id, changes) VALUES (NEW.id,\n> hstore(OLD) - hstore(NEW));\n> RETURN NEW;\n> END\n> $BODY$;\n> \n> CREATE TRIGGER table_update AFTER INSERT OR UPDATE ON _tmp_test1\n> FOR EACH ROW EXECUTE PROCEDURE test1_trigger();____\n> \n> __ __\n> \n> INSERT INTO _tmp_test1 (val) VALUES (5);____\n> \n> ERROR: record \"old\" is not assigned yet____\n> \n> DETAIL: The tuple structure of a not-yet-assigned record is\n> indeterminate.____\n> \n> CONTEXT: SQL statement \"INSERT INTO _tmp_test1_changes (id,\n> changes) VALUES (NEW.id, hstore(OLD) - hstore(NEW))\"____\n> \n> PL/pgSQL function test1_trigger() line 3 at SQL statement____\n> \n> __ __\n> \n> I couldn't find anything about this in the release notes\n> (https://www.postgresql.org/docs/11/release-11.html), but maybe I\n> just didn't know what to look for.____\n> \n> __ __\n> \n> *I doubt that this works on any PG version for INSERT.____*\n> \n> *__ __*\n> \n> *According to the documentation:____*\n> \n> *__ __*\n> \n> *https://www.postgresql.org/docs/10/plpgsql-trigger.html and\n> https://www.postgresql.org/docs/11/plpgsql-trigger.html____*\n> \n> *__ __*\n> \n> *OLD: **Data type **RECORD**; variable holding the old database row\n> for 
**UPDATE**/**DELETE**operations in row-level triggers. This\n> variable is unassigned in statement-level triggers and for\n> **INSERT**operations.**____*\n> \n> *__ __*\n> \n> *Regards____*\n> \n> *Charles____*\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n",
"msg_date": "Fri, 4 Jan 2019 06:32:59 -0800",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Potentially undocumented behaviour change in Postgres 11\n concerning OLD record in an after insert trigger"
},
{
"msg_contents": "Kristjan Tammekivi <kristjantammekivi@gmail.com> writes:\n> I've noticed a change in the behaviour in triggers / hstores in Postgres\n> 11.1 when compared to Postgres 10.5.\n> [ reference to OLD in an insert trigger doesn't throw error anymore ]\n\nHmm. This seems to be a side effect of the changes we (I) made in v11\nto rationalize the handling of NULL vs ROW(NULL,NULL,...) composite\nvalues in plpgsql. The \"unassigned\" trigger row variables are now\nacting as though they are plain NULL values. I'm not sure now whether\nI realized that this would happen --- it looks like there are not and\nwere not any regression test cases that would throw an error for the\ndisallowed-reference case, so it's quite possible that it just escaped\nnotice.\n\nThe old behavior was pretty darn squirrely; in particular it would let\nyou parse but not execute a reference to \"OLD.column\", a behavior that\ncould not occur for any other composite variable. Now that'll just\nreturn NULL. In a green field I don't think there'd be complaints\nabout this behavior --- I know lots of people have spent considerable\neffort programming around the other behavior.\n\nWhile I haven't looked closely, I think duplicating the old behavior\nwould require adding a special-purpose flag to plpgsql DTYPE_REC\nvariables, which'd cost a little performance (extra error checks\nin very hot code paths) and possibly break compatibility with\npldebugger, if there's been a v11 release of that.\n\nSo I'm a bit inclined to accept this behavior change and adjust\nthe documentation to say that OLD/NEW are \"null\" rather than\n\"unassigned\" when not relevant.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 04 Jan 2019 11:44:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Potentially undocumented behaviour change in Postgres 11\n concerning OLD record in an after insert trigger"
},
{
"msg_contents": "pá 4. 1. 2019 v 17:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Kristjan Tammekivi <kristjantammekivi@gmail.com> writes:\n> > I've noticed a change in the behaviour in triggers / hstores in Postgres\n> > 11.1 when compared to Postgres 10.5.\n> > [ reference to OLD in an insert trigger doesn't throw error anymore ]\n>\n> Hmm. This seems to be a side effect of the changes we (I) made in v11\n> to rationalize the handling of NULL vs ROW(NULL,NULL,...) composite\n> values in plpgsql. The \"unassigned\" trigger row variables are now\n> acting as though they are plain NULL values. I'm not sure now whether\n> I realized that this would happen --- it looks like there are not and\n> were not any regression test cases that would throw an error for the\n> disallowed-reference case, so it's quite possible that it just escaped\n> notice.\n>\n> The old behavior was pretty darn squirrely; in particular it would let\n> you parse but not execute a reference to \"OLD.column\", a behavior that\n> could not occur for any other composite variable. Now that'll just\n> return NULL. In a green field I don't think there'd be complaints\n> about this behavior --- I know lots of people have spent considerable\n> effort programming around the other behavior.\n>\n> While I haven't looked closely, I think duplicating the old behavior\n> would require adding a special-purpose flag to plpgsql DTYPE_REC\n> variables, which'd cost a little performance (extra error checks\n> in very hot code paths) and possibly break compatibility with\n> pldebugger, if there's been a v11 release of that.\n>\n> So I'm a bit inclined to accept this behavior change and adjust\n> the documentation to say that OLD/NEW are \"null\" rather than\n> \"unassigned\" when not relevant.\n>\n\nIt is maybe unwanted effect, but it is not bad from my view. new behave is\nconsistent - a initial value of variables is just NULL\n\n+1\n\nPavel\n\n\n> Thoughts?\n>\n> regards, tom lane\n>\n>\n\npá 4. 1. 
2019 v 17:44 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Kristjan Tammekivi <kristjantammekivi@gmail.com> writes:\n> I've noticed a change in the behaviour in triggers / hstores in Postgres\n> 11.1 when compared to Postgres 10.5.\n> [ reference to OLD in an insert trigger doesn't throw error anymore ]\n\nHmm. This seems to be a side effect of the changes we (I) made in v11\nto rationalize the handling of NULL vs ROW(NULL,NULL,...) composite\nvalues in plpgsql. The \"unassigned\" trigger row variables are now\nacting as though they are plain NULL values. I'm not sure now whether\nI realized that this would happen --- it looks like there are not and\nwere not any regression test cases that would throw an error for the\ndisallowed-reference case, so it's quite possible that it just escaped\nnotice.\n\nThe old behavior was pretty darn squirrely; in particular it would let\nyou parse but not execute a reference to \"OLD.column\", a behavior that\ncould not occur for any other composite variable. Now that'll just\nreturn NULL. In a green field I don't think there'd be complaints\nabout this behavior --- I know lots of people have spent considerable\neffort programming around the other behavior.\n\nWhile I haven't looked closely, I think duplicating the old behavior\nwould require adding a special-purpose flag to plpgsql DTYPE_REC\nvariables, which'd cost a little performance (extra error checks\nin very hot code paths) and possibly break compatibility with\npldebugger, if there's been a v11 release of that.\n\nSo I'm a bit inclined to accept this behavior change and adjust\nthe documentation to say that OLD/NEW are \"null\" rather than\n\"unassigned\" when not relevant.It is maybe unwanted effect, but it is not bad from my view. new behave is consistent - a initial value of variables is just NULL+1Pavel\n\nThoughts?\n\n regards, tom lane",
"msg_date": "Fri, 4 Jan 2019 17:47:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Potentially undocumented behaviour change in Postgres 11\n concerning OLD record in an after insert trigger"
}
] |
[
{
"msg_contents": "Why does the commit fest app not automatically fill in the author for a\nnew patch?\n\nAnd relatedly, every commit fest, there are a few patches registered\nwithout authors, probably because of the above behavior.\n\nCould this be improved?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 4 Jan 2019 12:25:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "commit fest app: Authors"
},
{
"msg_contents": "On Fri, Jan 4, 2019 at 12:26 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Why does the commit fest app not automatically fill in the author for a\n> new patch?\n>\n> And relatedly, every commit fest, there are a few patches registered\n> without authors, probably because of the above behavior.\n>\n> Could this be improved?\n>\n\nCan't say I recall why it was done that way originally. I assume you're\nsuggesting author should default to whomever adds it to the app? Or to\nsomehow try to match it out of the attached email?\n\nI'm guessing the original discussion could have something to do with a time\nwhen a lot of authors didn't register their own patches (think good old\nwiki days), it was just a third party who filled it all out. But today\nthere is probably no reason not to default it to that, as long as it can be\nchanged.\n\nI'm not sure it's a good idea to *enforce* it -- because you can't register\nan author until they have logged into the CF app once. And I think\nregistering it with no author is better than registering it with the wrong\nauthor.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 4 Jan 2019 12:34:58 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: commit fest app: Authors"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jan 4, 2019 at 12:26 PM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>> Why does the commit fest app not automatically fill in the author for a\n>> new patch?\n\n> I'm not sure it's a good idea to *enforce* it -- because you can't register\n> an author until they have logged into the CF app once. And I think\n> registering it with no author is better than registering it with the wrong\n> author.\n\nYeah --- I've not checked in detail, but I supposed that most/all of the\ncases of that correspond to authors with no community account to connect\nthe patch to.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 04 Jan 2019 10:05:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: commit fest app: Authors"
},
{
"msg_contents": "On Fri, Jan 04, 2019 at 10:05:25AM -0500, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> On Fri, Jan 4, 2019 at 12:26 PM Peter Eisentraut <\n>> peter.eisentraut@2ndquadrant.com> wrote:\n>>> Why does the commit fest app not automatically fill in the author for a\n>>> new patch?\n> \n>> I'm not sure it's a good idea to *enforce* it -- because you can't register\n>> an author until they have logged into the CF app once. And I think\n>> registering it with no author is better than registering it with the wrong\n>> author.\n> \n> Yeah --- I've not checked in detail, but I supposed that most/all of the\n> cases of that correspond to authors with no community account to connect\n> the patch to.\n\nAgreed. It is worse to not track a patch than to track it without an\nauthor, whom can be guessed from the thread attached to a patch\nanyway...\n--\nMichael",
"msg_date": "Sat, 5 Jan 2019 10:15:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: commit fest app: Authors"
}
] |
[
{
"msg_contents": "Hi folks,\n\nI encountered a surprising error when writing a migration that both added a\nprimary key to a table and added a new NOT NULL column. It threw the error \"\ncolumn \"col_d\" contains null values\", even though I supplied a default. The\nmigration looks like this:\nCREATE TABLE new_table AS SELECT col_a, col_b, col_c from existing_table;\nALTER TABLE new_table\n ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4(),\n ADD PRIMARY KEY (col_a, col_b, col_c);\n\nBecause of the `DEFAULT uuid_generate_v4()`, I wouldn't expect it to be\npossible for the new column to have null values, so I was surprised to get\nan integrity error with the message \"column \"col_d\" contains null values\".\n\nI found two workarounds that don't produce the error. First, if I instead\nset the NOT NULL last, I get no error:\nALTER TABLE new_table\n ADD COLUMN col_d UUID UNIQUE DEFAULT uuid_generate_v4(),\n ADD PRIMARY KEY (col_a, col_b, col_c),\n ALTER COLUMN col_d SET NOT NULL;\n\nSecond, if I do the two steps in two ALTER TABLE statements, I also get no\nerror.\nALTER TABLE new_table\n ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4();\nALTER TABLE new_table\n ADD PRIMARY KEY (col_a, col_b, col_c);\n\nI'm running postgres 9.6.2.\n\nI know that adding a column with a default requires the table & its indexes\nto be rewritten, and I know that adding a primary key on a column that\ndoesn't have an existing NOT NULL constraint does ALTER COLUMN SET NOT NULL\non each column in the primary key. 
So I'm wondering if Postgres is\nreordering the SET NOT NULL operations in a way that causes it to attempt\nsetting col_d to NOT NULL before the default values are supplied.\n\nMy understanding from the docs is that I should be able to combine any\nALTER TABLE statements into one if they don't involve RENAME or SET SCHEMA\n(or a few other things in v10, which I'm not using).\n\nSo my questions are:\n- Is there a way I can see what Postgres is doing under the hood? I wanted\nto use EXPLAIN ANALYZE but it doesn't appear to work on alter table\nstatements.\n- Am I missing something about my original migration, or is there a reason\nI shouldn't expect it to work?\n\nThanks,\nAllison Kaptur",
"msg_date": "Fri, 4 Jan 2019 16:28:51 -0800",
"msg_from": "Allison Kaptur <allison.kaptur@gmail.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE with multiple SET NOT NULL"
},
{
"msg_contents": "Allison Kaptur <allison.kaptur@gmail.com> writes:\n> I encountered a surprising error when writing a migration that both added a\n> primary key to a table and added a new NOT NULL column. It threw the error \"\n> column \"col_d\" contains null values\", even though I supplied a default. The\n> migration looks like this:\n> CREATE TABLE new_table AS SELECT col_a, col_b, col_c from existing_table;\n> ALTER TABLE new_table\n> ADD COLUMN col_d UUID UNIQUE NOT NULL DEFAULT uuid_generate_v4(),\n> ADD PRIMARY KEY (col_a, col_b, col_c);\n\nHm, this can be made a good deal more self-contained:\n\nregression=# create table t1 (a int);\nCREATE TABLE\nregression=# insert into t1 values(1);\nINSERT 0 1\nregression=# alter table t1 add column b float8 not null default random(),\nadd primary key(a);\nERROR: column \"b\" contains null values\n\nIt fails like that as far back as I tried (8.4). I'm guessing that we're\ndoing the ALTER steps in the wrong order, but haven't looked closer than\nthat.\n\nInterestingly, in v11 and HEAD it works if you use a constant default,\nsuggesting that the fast-default feature is at least adjacent to the\nproblem.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 04 Jan 2019 20:05:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with multiple SET NOT NULL"
},
{
"msg_contents": "Hello\n\nI investigate this bug and found reason:\n> alter table t1 add column b float8 not null default random(), add primary key(a);\n\nHere we call ATController (src/backend/commands/tablecmds.c) with two cmds: AT_AddColumn and AT_AddIndex\nThen we go to phase 2 in ATRewriteCatalogs:\n- succesful add new attribute, but without table rewrite - it will be later in phase 3\n- call ATExecAddIndex, we want add primary key, so we call index_check_primary_key.\nindex_check_primary_key call AlterTableInternal and therefore another ATController with independent one AT_SetNotNull command.\nATController will call phase 2, and then its own phase 3 with validation all constraints. But at this nested level we have no AlteredTableInfo->newvals and we do not proper transform tuple.\n\nnot sure how we can proper rewrite this case.\n\nregards, Sergei\n\n",
"msg_date": "Wed, 09 Jan 2019 14:11:58 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with multiple SET NOT NULL"
}
] |
[
{
"msg_contents": "There is a sentence in btree.sgml:\n\n <productname>PostgreSQL</productname> includes an implementation of the\n standard <acronym>btree</acronym> (multi-way binary tree) index data\n structure.\n\nI think the term \"btree\" here means \"multi-way balanced tree\", rather\nthan \"multi-way binary tree\". In fact in our btree, there could be\nmore than one key in a node. Patch attached.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sat, 05 Jan 2019 18:35:32 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "btree.sgml typo?"
},
{
"msg_contents": "On Sat, Jan 5, 2019 at 1:35 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> <productname>PostgreSQL</productname> includes an implementation of the\n> standard <acronym>btree</acronym> (multi-way binary tree) index data\n> structure.\n>\n> I think the term \"btree\" here means \"multi-way balanced tree\", rather\n> than \"multi-way binary tree\". In fact in our btree, there could be\n> more than one key in a node. Patch attached.\n\n+1 for applying this patch. The existing wording is highly confusing,\nespecially because many people already incorrectly think that a B-Tree\nis just like a self-balancing binary search tree.\n\nThere is no consensus on exactly what the \"b\" actually stands for, but\nit's definitely not \"binary\". I suppose that the original author meant\nthat a B-Tree is a generalization of a binary tree, which is basically\ntrue -- though that's a very academic point.\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sat, 5 Jan 2019 09:41:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: btree.sgml typo?"
},
{
"msg_contents": "> On Sat, Jan 5, 2019 at 1:35 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> <productname>PostgreSQL</productname> includes an implementation of the\n>> standard <acronym>btree</acronym> (multi-way binary tree) index data\n>> structure.\n>>\n>> I think the term \"btree\" here means \"multi-way balanced tree\", rather\n>> than \"multi-way binary tree\". In fact in our btree, there could be\n>> more than one key in a node. Patch attached.\n> \n> +1 for applying this patch. The existing wording is highly confusing,\n> especially because many people already incorrectly think that a B-Tree\n> is just like a self-balancing binary search tree.\n> \n> There is no consensus on exactly what the \"b\" actually stands for, but\n> it's definitely not \"binary\". I suppose that the original author meant\n> that a B-Tree is a generalization of a binary tree, which is basically\n> true -- though that's a very academic point.\n\nAny objection for this? If not, I will commit the patch to master and\nREL_11_STABLE branches (btree.sgml first appeared in PostgreSQL 11).\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Mon, 07 Jan 2019 14:10:42 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: btree.sgml typo?"
},
{
"msg_contents": ">> On Sat, Jan 5, 2019 at 1:35 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>>> <productname>PostgreSQL</productname> includes an implementation of the\n>>> standard <acronym>btree</acronym> (multi-way binary tree) index data\n>>> structure.\n>>>\n>>> I think the term \"btree\" here means \"multi-way balanced tree\", rather\n>>> than \"multi-way binary tree\". In fact in our btree, there could be\n>>> more than one key in a node. Patch attached.\n>> \n>> +1 for applying this patch. The existing wording is highly confusing,\n>> especially because many people already incorrectly think that a B-Tree\n>> is just like a self-balancing binary search tree.\n>> \n>> There is no consensus on exactly what the \"b\" actually stands for, but\n>> it's definitely not \"binary\". I suppose that the original author meant\n>> that a B-Tree is a generalization of a binary tree, which is basically\n>> true -- though that's a very academic point.\n> \n> Any objection for this? If not, I will commit the patch to master and\n> REL_11_STABLE branches (btree.sgml first appeared in PostgreSQL 11).\n\nDone.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Tue, 08 Jan 2019 10:00:14 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: btree.sgml typo?"
}
] |
[
{
"msg_contents": "12dev and 11.1:\n\npostgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\npostgres=# CREATE INDEX ON t(i) WITH(fillfactor=11);\npostgres=# ALTER INDEX t_i_idx SET (fillfactor=12);\nERROR: 42809: \"t_i_idx\" is not a table, view, materialized view, or index\nLOCATION: ATWrongRelkindError, tablecmds.c:5031\n\nI can't see that's deliberate, but I found an earlier problem report; however,\ndiscussion regarding the ALTER behavior seems to have been eclipsed due to 2nd,\nseparate issue with pageinspect.\n\nhttps://www.postgresql.org/message-id/flat/CAKcux6mb6AZjMVyohnta6M%2BfdkUB720Gq8Wb6KPZ24FPDs7qzg%40mail.gmail.com\n\nJustin\n\n",
"msg_date": "Sat, 5 Jan 2019 12:59:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On 2019-Jan-05, Justin Pryzby wrote:\n\n> 12dev and 11.1:\n> \n> postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> postgres=# CREATE INDEX ON t(i) WITH(fillfactor=11);\n> postgres=# ALTER INDEX t_i_idx SET (fillfactor=12);\n> ERROR: 42809: \"t_i_idx\" is not a table, view, materialized view, or index\n> LOCATION: ATWrongRelkindError, tablecmds.c:5031\n> \n> I can't see that's deliberate,\n\nWell, I deliberately ignored that aspect of the report at the time as it\nseemed to me (per discussion in thread [1]) that this behavior was\nintentional. However, if I think in terms of things like\npages_per_range in BRIN indexes, this decision seems to be a mistake,\nbecause surely we should propagate that value to children.\n\n[1] https://www.postgresql.org/message-id/flat/CAH2-WzkOKptQiE51Bh4_xeEHhaBwHkZkGtKizrFMgEkfUuRRQg%40mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 7 Jan 2019 16:23:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Mon, Jan 07, 2019 at 04:23:30PM -0300, Alvaro Herrera wrote:\n> On 2019-Jan-05, Justin Pryzby wrote:\n> \n> > 12dev and 11.1:\n> > \n> > postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> > postgres=# CREATE INDEX ON t(i) WITH(fillfactor=11);\n> > postgres=# ALTER INDEX t_i_idx SET (fillfactor=12);\n> > ERROR: 42809: \"t_i_idx\" is not a table, view, materialized view, or index\n> > LOCATION: ATWrongRelkindError, tablecmds.c:5031\n> > \n> > I can't see that's deliberate,\n> \n> Well, I deliberately ignored that aspect of the report at the time as it\n> seemed to me (per discussion in thread [1]) that this behavior was\n> intentional. However, if I think in terms of things like\n> pages_per_range in BRIN indexes, this decision seems to be a mistake,\n> because surely we should propagate that value to children.\n> \n> [1] https://www.postgresql.org/message-id/flat/CAH2-WzkOKptQiE51Bh4_xeEHhaBwHkZkGtKizrFMgEkfUuRRQg%40mail.gmail.com\n\nI don't see any discussion regarding ALTER (?)\n\nActually, I ran into this while trying to set pages_per_range.\nBut shouldn't it also work for fillfactor ?\n\nThanks,\nJustin\n\n",
"msg_date": "Mon, 7 Jan 2019 13:34:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Mon, Jan 07, 2019 at 01:34:08PM -0600, Justin Pryzby wrote:\n> I don't see any discussion regarding ALTER (?)\n> \n> Actually, I ran into this while trying to set pages_per_range.\n> But shouldn't it also work for fillfactor ?\n\nLike ALTER TABLE, the take for ALTER INDEX is that we are still\nlacking a ALTER INDEX ONLY flavor which would apply only to single\npartitioned indexes instead of applying it down to a full set of\npartitions below the partitioned entry on which the DDL is defined.\nThat would be useful for SET STATISTICS as well. So Alvaro's decision\nlooks right to me as of what has been done in v11.\n--\nMichael",
"msg_date": "Tue, 8 Jan 2019 10:24:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On 2019-Jan-05, Justin Pryzby wrote:\n\n> 12dev and 11.1:\n> \n> postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> postgres=# CREATE INDEX ON t(i) WITH(fillfactor=11);\n> postgres=# ALTER INDEX t_i_idx SET (fillfactor=12);\n> ERROR: 42809: \"t_i_idx\" is not a table, view, materialized view, or index\n> LOCATION: ATWrongRelkindError, tablecmds.c:5031\n> \n> I can't see that's deliberate,\n\nSo do you have a proposed patch?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 6 Feb 2019 14:32:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Mon, Jan 07, 2019 at 04:23:30PM -0300, Alvaro Herrera wrote:\n> On 2019-Jan-05, Justin Pryzby wrote:\n> > postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> > postgres=# CREATE INDEX ON t(i) WITH(fillfactor=11);\n> > postgres=# ALTER INDEX t_i_idx SET (fillfactor=12);\n> > ERROR: 42809: \"t_i_idx\" is not a table, view, materialized view, or index\n> > LOCATION: ATWrongRelkindError, tablecmds.c:5031\n> > \n> > I can't see that's deliberate,\n> \n> Well, I deliberately ignored that aspect of the report at the time as it\n> seemed to me (per discussion in thread [1]) that this behavior was\n> intentional. However, if I think in terms of things like\n> pages_per_range in BRIN indexes, this decision seems to be a mistake,\n> because surely we should propagate that value to children.\n> \n> [1] https://www.postgresql.org/message-id/flat/CAH2-WzkOKptQiE51Bh4_xeEHhaBwHkZkGtKizrFMgEkfUuRRQg%40mail.gmail.com\n\nPossibly attached should be backpatched through v11 ?\n\nThis allows SET on the parent index, which is used for newly created child\nindexes, but doesn't itself recurse to children.\n\nI noticed recursive \"*\" doesn't seem to be allowed for \"alter INDEX\":\npostgres=# ALTER INDEX p_i2* SET (fillfactor = 22);\nERROR: syntax error at or near \"*\"\nLINE 1: ALTER INDEX p_i2* SET (fillfactor = 22);\n\nAlso, I noticed this \"doesn't fail\", but setting is neither recursively applied\nnor used for new partitions.\n\npostgres=# ALTER INDEX p_i_idx ALTER COLUMN 1 SET STATISTICS 123;\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581",
"msg_date": "Thu, 26 Dec 2019 21:51:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 10:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Possibly attached should be backpatched through v11 ?\n>\n> This allows SET on the parent index, which is used for newly created child\n> indexes, but doesn't itself recurse to children.\n>\n> I noticed recursive \"*\" doesn't seem to be allowed for \"alter INDEX\":\n> postgres=# ALTER INDEX p_i2* SET (fillfactor = 22);\n> ERROR: syntax error at or near \"*\"\n> LINE 1: ALTER INDEX p_i2* SET (fillfactor = 22);\n>\n> Also, I noticed this \"doesn't fail\", but setting is neither recursively applied\n> nor used for new partitions.\n>\n> postgres=# ALTER INDEX p_i_idx ALTER COLUMN 1 SET STATISTICS 123;\n\nSeems a little hard to believe that this needs no other code changes.\nAnd what about documentation updates?\n\nBTW, if we don't do this, we should at least try to improve the error\nmessage. Telling somebody that something they created using CREATE\nINDEX is not an index will not win us any friends. A more specific\nerror message, saying that the operation is not supported for\npartitioned indexes, seems better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Dec 2019 07:43:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "The attached allows CREATE/ALTER to specify reloptions on a partitioned table\nwhich are used as defaults for future children.\n\nI think that's a desirable behavior, same as for tablespaces. Michael\nmentioned that ALTER INDEX ONLY doesn't exist, but that's only an issue if\nALTER acts recursively, which isn't the case here.\n\nThe current behavior seems unreasonable: CREATE allows specifying fillfactor,\nwhich does nothing, and unable to alter it, either:\n\npostgres=# CREATE TABLE tt(i int)PARTITION BY RANGE (i);;\nCREATE TABLE\npostgres=# CREATE INDEX ON tt(i)WITH(fillfactor=11);\nCREATE INDEX\npostgres=# \\d tt\n...\n \"tt_i_idx\" btree (i) WITH (fillfactor='11')\npostgres=# ALTER INDEX tt_i_idx SET (fillfactor=12);\nERROR: \"tt_i_idx\" is not a table, view, materialized view, or index\n\nMaybe there are other ALTER commands to handle (UNLOGGED currently does nothing\non a partitioned table?, STATISTICS, ...).\n\nThe first patch makes a prettier message, per Robert's suggestion.\n\n-- \nJustin",
"msg_date": "Thu, 27 Feb 2020 17:25:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On 2020-Feb-27, Justin Pryzby wrote:\n\n> The attached allows CREATE/ALTER to specify reloptions on a partitioned table\n> which are used as defaults for future children.\n> \n> I think that's a desirable behavior, same as for tablespaces. Michael\n> mentioned that ALTER INDEX ONLY doesn't exist, but that's only an issue if\n> ALTER acts recursively, which isn't the case here.\n\nI think ALTER not acting recursively is a bug that we would do well not\nto propagate any further. Enough effort we have to spend trying to fix\nthat already. Let's add ALTER .. ONLY if needed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 21:11:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 05:25:13PM -0600, Justin Pryzby wrote:\n> /*\n> - * Option parser for partitioned tables\n> - */\n> -bytea *\n> -partitioned_table_reloptions(Datum reloptions, bool validate)\n> -{\n> -\t/*\n> -\t * There are no options for partitioned tables yet, but this is able to do\n> -\t * some validation.\n> -\t */\n> -\treturn (bytea *) build_reloptions(reloptions, validate,\n> -\t\t\t\t\t\t\t\t\t RELOPT_KIND_PARTITIONED,\n> -\t\t\t\t\t\t\t\t\t 0, NULL, 0);\n> -}\n\nPlease don't undo that. You can look at 1bbd608 for all the details.\n--\nMichael",
"msg_date": "Fri, 28 Feb 2020 17:09:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 09:11:14PM -0300, Alvaro Herrera wrote:\n> On 2020-Feb-27, Justin Pryzby wrote:\n> > The attached allows CREATE/ALTER to specify reloptions on a partitioned table\n> > which are used as defaults for future children.\n> > \n> > I think that's a desirable behavior, same as for tablespaces. Michael\n> > mentioned that ALTER INDEX ONLY doesn't exist, but that's only an issue if\n> > ALTER acts recursively, which isn't the case here.\n> \n> I think ALTER not acting recursively is a bug that we would do well not\n> to propagate any further. Enough effort we have to spend trying to fix\n> that already. Let's add ALTER .. ONLY if needed.\n\nI was modeling after the behavior for tablespaces, and didn't realize that\nnon-recursive alter was considered discouraged. \n\nOn Thu, Feb 27, 2020 at 05:25:13PM -0600, Justin Pryzby wrote:\n> The first patch makes a prettier message, per Robert's suggestion.\n\nIs there any interest in this one ?\n\n> From e5bb363f514d768a4f540d9c82ad5745944b1486 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Mon, 30 Dec 2019 09:31:03 -0600\n> Subject: [PATCH v2 1/2] More specific error message when failing to alter a\n> partitioned index..\n> \n> \"..is not an index\" is deemed to be unfriendly\n> \n> https://www.postgresql.org/message-id/CA%2BTgmobq8_-DS7qDEmMi-4ARP1_0bkgFEjYfiK97L2eXq%2BQ%2Bnw%40mail.gmail.com\n> ---\n> src/backend/commands/tablecmds.c | 23 +++++++++++++++++------\n> 1 file changed, 17 insertions(+), 6 deletions(-)\n> \n> diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\n> index b7c8d66..1b271af 100644\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -366,7 +366,7 @@ static void ATRewriteTables(AlterTableStmt *parsetree,\n> static void ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode);\n> static AlteredTableInfo *ATGetQueueEntry(List **wqueue, Relation rel);\n> static void ATSimplePermissions(Relation rel, int allowed_targets);\n> -static void ATWrongRelkindError(Relation rel, int allowed_targets);\n> +static void ATWrongRelkindError(Relation rel, int allowed_targets, int actual_target);\n> static void ATSimpleRecursion(List **wqueue, Relation rel,\n> \t\t\t\t\t\t\t AlterTableCmd *cmd, bool recurse, LOCKMODE lockmode,\n> \t\t\t\t\t\t\t AlterTableUtilityContext *context);\n> @@ -5458,7 +5458,7 @@ ATSimplePermissions(Relation rel, int allowed_targets)\n> \n> \t/* Wrong target type? */\n> \tif ((actual_target & allowed_targets) == 0)\n> -\t\tATWrongRelkindError(rel, allowed_targets);\n> +\t\tATWrongRelkindError(rel, allowed_targets, actual_target);\n> \n> \t/* Permissions checks */\n> \tif (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))\n> @@ -5479,7 +5479,7 @@ ATSimplePermissions(Relation rel, int allowed_targets)\n> * type.\n> */\n> static void\n> -ATWrongRelkindError(Relation rel, int allowed_targets)\n> +ATWrongRelkindError(Relation rel, int allowed_targets, int actual_target)\n> {\n> \tchar\t *msg;\n> \n> @@ -5527,9 +5527,20 @@ ATWrongRelkindError(Relation rel, int allowed_targets)\n> \t\t\tbreak;\n> \t}\n> \n> -\tereport(ERROR,\n> -\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> -\t\t\t errmsg(msg, RelationGetRelationName(rel))));\n> +\tif (actual_target == ATT_PARTITIONED_INDEX &&\n> +\t\t\t(allowed_targets&ATT_INDEX) &&\n> +\t\t\t!(allowed_targets&ATT_PARTITIONED_INDEX))\n> +\t\t/* Add a special errhint for this case, since \"is not an index\" message is unfriendly */\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> +\t\t\t\t errmsg(msg, RelationGetRelationName(rel)),\n> +\t\t\t\t // errhint(\"\\\"%s\\\" is a partitioned index\", RelationGetRelationName(rel))));\n> +\t\t\t\t errhint(\"operation is not supported on partitioned indexes\")));\n> +\telse\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> +\t\t\t\t errmsg(msg, RelationGetRelationName(rel))));\n> +\n> }\n> \n> /*\n> -- \n> 2.7.4\n> \n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 23 Mar 2020 16:47:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER INDEX fails on partitioned index"
}
] |
[
{
"msg_contents": "Hey Postgres Team!\n\nI wanted to bring to everyone's attention a very interesting database,\ncalled Noria, written in Rust. It offers a compelling alternative to common\ndev patterns intended to boost performance of relational database\ninteractions. The author only implemented a mysql backend at the moment\nbut a postgres one could be created as well.\n\nPostgresql developers have successfully responded to NoSQL alternatives in\nthe past by incorporating differentiating functionality. I hope that you\nfind inspiration and sufficient cause to address that which Noria offers,\nwhether by partial materialize review refreshes, integration with Noria, or\notherwise.\n\nA talk about Noria and Rust: https://www.youtube.com/watch?v=s19G6n0UjsM\nNoria project: https://github.com/mit-pdos/noria\n\n\nRegards,\nDarin\n\nHey Postgres Team!I wanted to bring to everyone's attention a very interesting database, called Noria, written in Rust. It offers a compelling alternative to common dev patterns intended to boost performance of relational database interactions. The author only implemented a mysql backend at the moment but a postgres one could be created as well. Postgresql developers have successfully responded to NoSQL alternatives in the past by incorporating differentiating functionality. I hope that you find inspiration and sufficient cause to address that which Noria offers, whether by partial materialize review refreshes, integration with Noria, or otherwise.A talk about Noria and Rust: https://www.youtube.com/watch?v=s19G6n0UjsMNoria project: https://github.com/mit-pdos/noriaRegards,Darin",
"msg_date": "Sun, 6 Jan 2019 08:09:59 -0500",
"msg_from": "Darin Gordon <darinc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Noria and Postgres"
}
] |
[
{
"msg_contents": "Hi!\n\nI have read around the Internet a lot about the idea of using /dev/shm\nfor a tablespace to put tables in and issues with that. But I still\nhave not managed to get a good grasp why would that be a bad idea for\nusing it for temporary objects. I understand that for regular tables\nthis might prevent database startup and recovery because tables and\nall files associated with tables would be missing. While operations\nfor those tables could reside in the oplog. (Not sure if this means\nthat unlogged tables can be stored on such tablesspace.)\n\nI have experimented a bit and performance really improves if /dev/shm\nis used. I have experimented with creating temporary tables inside a\nregular (SSD backed) tablespace /dev/shm and I have seen at least 2x\nimprovement in time it takes for a set of modification+select queries\nto complete.\n\nI have also tested what happens if I kill all processes with KILL and\nrestart it. There is noise in logs about missing files, but it does\nstart up. Dropping and recreating the tablespace works.\n\nSo I wonder, should we add a TEMPORARY flag to a TABLESPACE which\nwould mark a tablespace such that if at startup its location is empty,\nit is automatically recreated, without warnings/errors? (Maybe some\nother term could be used for this.)\n\nIdeally, such tablespace could be set as temp_tablespaces and things\nshould work out: PostgreSQL should recreate the tablespace before\ntrying to use temp_tablespaces for the first time.\n\nWe could even make it so that only temporary objects are allowed to be\ncreated in a TEMPORARY TABLESPACE, to make sure user does not make a\nmistake.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Sun, 6 Jan 2019 11:01:52 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding a concept of TEMPORARY TABLESPACE for the use in\n temp_tablespaces"
},
{
"msg_contents": "Hi!\n\nOn Sun, Jan 6, 2019 at 11:01 AM Mitar <mmitar@gmail.com> wrote:\n> I have experimented a bit and performance really improves if /dev/shm\n> is used. I have experimented with creating temporary tables inside a\n> regular (SSD backed) tablespace /dev/shm and I have seen at least 2x\n> improvement in time it takes for a set of modification+select queries\n> to complete.\n\nI also tried just to increase temp_buffers to half the memory, and\nthings are better, but not to the same degree as using a /dev/shm\ntablespace. Why is that? (All my temporary objects in my experiments\nare small, few 10k rows, few MBs.)\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Sun, 6 Jan 2019 11:27:36 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a concept of TEMPORARY TABLESPACE for the use in\n temp_tablespaces"
},
{
"msg_contents": "On Sun, Jan 6, 2019 at 11:01:52AM -0800, Mitar wrote:\n> Hi!\n> \n> I have read around the Internet a lot about the idea of using /dev/shm\n> for a tablespace to put tables in and issues with that. But I still\n> have not managed to get a good grasp why would that be a bad idea for\n> using it for temporary objects. I understand that for regular tables\n> this might prevent database startup and recovery because tables and\n> all files associated with tables would be missing. While operations\n> for those tables could reside in the oplog. (Not sure if this means\n> that unlogged tables can be stored on such tablesspace.)\n> \n> I have experimented a bit and performance really improves if /dev/shm\n> is used. I have experimented with creating temporary tables inside a\n> regular (SSD backed) tablespace /dev/shm and I have seen at least 2x\n> improvement in time it takes for a set of modification+select queries\n> to complete.\n> \n> I have also tested what happens if I kill all processes with KILL and\n> restart it. There is noise in logs about missing files, but it does\n> start up. Dropping and recreating the tablespace works.\n> \n> So I wonder, should we add a TEMPORARY flag to a TABLESPACE which\n> would mark a tablespace such that if at startup its location is empty,\n> it is automatically recreated, without warnings/errors? (Maybe some\n> other term could be used for this.)\n> \n> Ideally, such tablespace could be set as temp_tablespaces and things\n> should work out: PostgreSQL should recreate the tablespace before\n> trying to use temp_tablespaces for the first time.\n> \n> We could even make it so that only temporary objects are allowed to be\n> created in a TEMPORARY TABLESPACE, to make sure user does not make a\n> mistake.\n\nI wrote a blog entry about this:\n\n\thttps://momjian.us/main/blogs/pgblog/2017.html#June_2_2017\n\nThis is certainly an area we can improve, but it would require changes\nin several parts of the system to handle cases where the tablespace\ndisappears.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 25 Jan 2019 17:32:21 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding a concept of TEMPORARY TABLESPACE for the use in\n temp_tablespaces"
},
{
"msg_contents": "Hi!\n\nOn Fri, Jan 25, 2019 at 2:32 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I wrote a blog entry about this:\n>\n> https://momjian.us/main/blogs/pgblog/2017.html#June_2_2017\n>\n> This is certainly an area we can improve, but it would require changes\n> in several parts of the system to handle cases where the tablespace\n> disappears.\n\nYes, I read the discussion thread you point at the end of your blog\npost. [1] This is why I posted an e-mail to the mailing list because\nsome statements from that thread do not hold anymore. For example, in\nthe thread it is stated:\n\n\"Just pointing the tablespace to non'restart'safe storage will get you\nan installation that fails to boot after a restart, since there's a\ntree structure that is expected to survive, and when it's not found,\npostgres just fails to boot.\"\n\nThis does not seem to be true (anymore?) based on my testing. You get\nnoise in logs, but installation boots without a problem.\n\nSo maybe we are closer to this than we realize?\n\n[1] https://www.postgresql.org/message-id/flat/20170529185308.GB28209%40momjian.us\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Thu, 14 Mar 2019 00:53:02 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a concept of TEMPORARY TABLESPACE for the use in\n temp_tablespaces"
},
{
"msg_contents": "On Thu, Mar 14, 2019 at 12:53:02AM -0700, Mitar wrote:\n> Hi!\n> \n> On Fri, Jan 25, 2019 at 2:32 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I wrote a blog entry about this:\n> >\n> > https://momjian.us/main/blogs/pgblog/2017.html#June_2_2017\n> >\n> > This is certainly an area we can improve, but it would require changes\n> > in several parts of the system to handle cases where the tablespace\n> > disappears.\n> \n> Yes, I read the discussion thread you point at the end of your blog\n> post. [1] This is why I posted an e-mail to the mailing list because\n> some statements from that thread do not hold anymore. For example, in\n> the thread it is stated:\n> \n> \"Just pointing the tablespace to non'restart'safe storage will get you\n> an installation that fails to boot after a restart, since there's a\n> tree structure that is expected to survive, and when it's not found,\n> postgres just fails to boot.\"\n> \n> This does not seem to be true (anymore?) based on my testing. You get\n> noise in logs, but installation boots without a problem.\n> \n> So maybe we are closer to this than we realize?\n\nInteresting. What happens when you references objects that were in the\ntablespace? What would we want to happen?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 8 Apr 2019 20:15:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding a concept of TEMPORARY TABLESPACE for the use in\n temp_tablespaces"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is documentation patch: doc_client_min_messages_v1.patch\n\nDocument that INFO severity messages are always sent\nto the client. This also adds hyperlinks to the\ntable of severity levels where those levels are\nreferenced in the docs.\n\nThe patch was discussed on the #postgresql IRC channel\nwith RhodiumToad.\n\nThe patch is against master. It passes the xmllint,\nbuilds html, and generally appears to work.\n\nThe motivation for this patch is that the\nclient_min_messages documentation does not mention\nthe INFO level.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Sun, 6 Jan 2019 21:17:03 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@meme.com>",
"msg_from_op": true,
"msg_subject": "Doc client_min_messages patch vis. INFO message severity"
},
{
"msg_contents": ">>>>> \"Karl\" == Karl O Pinc <kop@meme.com> writes:\n\n Karl> Hi,\n Karl> Attached is documentation patch: doc_client_min_messages_v1.patch\n\n Karl> Document that INFO severity messages are always sent\n Karl> to the client.\n\nPushed, thanks.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Mon, 07 Jan 2019 19:03:02 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Doc client_min_messages patch vis. INFO message severity"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile looking at another bug I have noticed that it is possible to\ncreate an extension directly using a temporary schema, which is\ncrazy. A simple example:\n=# create extension pg_prewarm with schema pg_temp_3;\nCREATE EXTENSION\n=# \\dx pg_prewarm\n List of installed extensions\n Name | Version | Schema | Description\n------------+---------+-----------+-----------------------\n pg_prewarm | 1.2 | pg_temp_3 | prewarm relation data\n(1 row)\n\nWhen also creating some extensions, like pageinspect, then the error\nmessage gets a bit crazier, complaining about things not existing.\nThis combination makes no actual sense, so wouldn't it be better to\nrestrict the case? When trying to use ALTER EXTENSION SET SCHEMA we\nalready have a similar error:\n=# alter extension pageinspect set schema pg_temp_3;\nERROR: 0A000: cannot move objects into or out of temporary schemas\nLOCATION: CheckSetNamespace, namespace.c:2954\n\nAttached is an idea of patch, the test case is a bit bulky to remain\nportable though.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 7 Jan 2019 12:26:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Sun, Jan 6, 2019 at 10:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> This combination makes no actual sense, so wouldn't it be better to\n> restrict the case?\n\nHmm. What exactly doesn't make sense about it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 14:22:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 02:22:01PM -0500, Robert Haas wrote:\n> On Sun, Jan 6, 2019 at 10:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> This combination makes no actual sense, so wouldn't it be better to\n>> restrict the case?\n> \n> Hmm. What exactly doesn't make sense about it?\n\nIn my mind, extensions are designed to be database-wide objects which\nare visible to all sessions. Perhaps I am just confused by what I\nthink they should be, and I can see no trace on the archives about\nconcept of extensions + temp schema as base (adding Dimitri in CC if\nhe has an idea). I can see as well that there have been stuff about\nusing temporary objects in extension script though (\"Fix bugs with\ntemporary or transient tables used in extension scripts\" in release\nnotes of 9.1).\n\nFor most of extensions, this can randomly finish with strange error\nmessages, say that:\n=# create extension file_fdw with schema pg_temp_3;\nERROR: 42883: function file_fdw_handler() does not exist\nLOCATION: LookupFuncName, parse_func.c:2088\n\nThere are cases where the extension can be created:\n=# create extension pgcrypto with schema pg_temp_3;\nCREATE EXTENSION\nTime: 36.567 ms\n=# \\dx pgcrypto\n List of installed extensions\n Name | Version | Schema | Description\n----------+---------+-----------+-------------------------\n pgcrypto | 1.3 | pg_temp_3 | cryptographic functions\n(1 row)\n\nThen the extension is showing up as beginning to be present for other\nusers. I am mainly wondering if this case has actually been thought\nabout in the past or discussed, and what to do about that and if we\nneed to do something. Temporary extensions can exist as long as the\nextension script does not include for example REVOKE queries on the\nfunctions it creates (which should actually work?), and there is a\nseparate thread about restraining 2PC when touching the temporary\nnamespace for the creation of many objects, and extensions are one\ncase discussed. Still the concept looks a bit wider, so I spawned a\nseparate thread.\n--\nMichael",
"msg_date": "Sat, 12 Jan 2019 08:34:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Sat, Jan 12, 2019 at 08:34:37AM +0900, Michael Paquier wrote:\n> Then the extension is showing up as beginning to be present for other\n> users. I am mainly wondering if this case has actually been thought\n> about in the past or discussed, and what to do about that and if we\n> need to do something.\n\nThe point here is about the visibility in \\dx.\n--\nMichael",
"msg_date": "Sat, 12 Jan 2019 08:47:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "This could probably use a quick note in the docs.",
"msg_date": "Mon, 04 Feb 2019 11:54:00 +0000",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Sat, Jan 12, 2019 at 12:48 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Sat, Jan 12, 2019 at 08:34:37AM +0900, Michael Paquier wrote:\n> > Then the extension is showing up as beginning to be present for other\n> > users. I am mainly wondering if this case has actually been thought\n> > about in the past or discussed, and what to do about that and if we\n> > need to do something.\n>\n> The point here is about the visibility in \\dx.\n>\n\nIf the point is visibility in \\dx it seems to me we want to fix the \\dx\nquery.\n\nThis is actually a very interesting set of problems and behavior is not\nintuitive here in PostgreSQL. I wonder how much more inconsistency we want\nto add.\n\nFor example: suppose I create a type in pg_temp and create a table in\npublic with a column using that type.\n\nWhat is the expected visibility in other sessions?\n\nWhat happens to the table when I log out?\n\nI went ahead and tested that case and I found the behavior to be, well,\nunintuitive. The temporary type is visible to other sessions and the\ncolumn is implicitly dropped when the type falls out of session scope.\nWhether or not we want to prevent that, I think that having special casing\nhere for extensions makes this behavior even more inconsistent. I guess I\nwould vote against accepting this patch as it is.\n\n> --\n> Michael\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Sat, Jan 12, 2019 at 12:48 AM Michael Paquier <michael@paquier.xyz> wrote:On Sat, Jan 12, 2019 at 08:34:37AM +0900, Michael Paquier wrote:\n> Then the extension is showing up as beginning to be present for other\n> users. I am mainly wondering if this case has actually been thought\n> about in the past or discussed, and what to do about that and if we\n> need to do something.\n\nThe point here is about the visibility in \\dx.If the point is visibility in \\dx it seems to me we want to fix the \\dx query.This is actually a very interesting set of problems and behavior is not intuitive here in PostgreSQL. I wonder how much more inconsistency we want to add.For example: suppose I create a type in pg_temp and create a table in public with a column using that type.What is the expected visibility in other sessions?What happens to the table when I log out?I went ahead and tested that case and I found the behavior to be, well, unintuitive. The temporary type is visible to other sessions and the column is implicitly dropped when the type falls out of session scope. Whether or not we want to prevent that, I think that having special casing here for extensions makes this behavior even more inconsistent. I guess I would vote against accepting this patch as it is.\n--\nMichael\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Wed, 13 Feb 2019 12:08:50 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 12:08:50PM +0100, Chris Travers wrote:\n> If the point is visibility in \\dx it seems to me we want to fix the \\dx\n> query.\n\nYes, I got to think a bit more about that case, and there are cases\nwhere this actually works properly as this depends on the objects\ndefined in the extension. Fixing \\dx to not show up extensions\ndefined in temp schemas of other sessions is definitely a must in my\nopinion, and I would rather drop the rest of the proposal for now. A\nsimilar treatment is needed for \\dx+.\n\n> For example: suppose I create a type in pg_temp and create a table in\n> public with a column using that type.\n\nI am wondering if this scenario could make sense to populate data on\nother, existing, relations for a schema migration, and that a two-step\nprocess is done, with temporary tables used as intermediates. But\nthat sounds like the thoughts of a crazy man..\n\n> What is the expected visibility in other sessions?\n> \n> What happens to the table when I log out?\n\nAnything depending on a temporary object will be dropped per\ndependency links once the session is over.\n\nAttached is a patch to adjust \\dx and \\dx+. What do you think?\n--\nMichael",
"msg_date": "Thu, 14 Feb 2019 16:56:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Dear Michael,\n\nI seem this patch is enough, but could you explain the reason \nyou drop initial proposal more detail?\nI'm not sure why extensions contained by temporary schemas are acceptable.\n\n> Anything depending on a temporary object will be dropped per\n> dependency links once the session is over.\n\nExtensions locate at pg_temp_* schemas are temporary objects IMO.\nHow do you think? Would you implement this functionality in future?\n\nHayato Kuroda\nFujitsu LIMITED\n\n\n\n",
"msg_date": "Mon, 18 Feb 2019 05:39:09 +0000",
"msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 6:40 AM Kuroda, Hayato <kuroda.hayato@jp.fujitsu.com>\nwrote:\n\n> Dear Michael,\n>\n> I seem this patch is enough, but could you explain the reason\n> you drop initial proposal more detail?\n> I'm not sure why extensions contained by temporary schemas are acceptable.\n>\n\nHere's my objection.\n\nEverything a relocatable extension can create can be created normally in a\ntemporary schema currently. This includes types, functions, etc.\n\nSo I can create a type in a temporary schema and then create a table in a\npublic schema using that type as a column. This behaves oddly (when I log\nout of my session the column gets implicitly dropped) but it works\nconsistently. Adding special cases to extensions strikes me as adding more\nfunny corners to the behavior of the db in this regard.\n\nNow there are times I could imagine using temporary schemas with\nextensions. This could include testing multiple versions of an extension\nso that multiple concurrent test runs don't see each other's versions.\nThis could be done with normal schemas but the guarantees are not as strong\nregarding cleanup.\n\n>\n> > Anything depending on a temporary object will be dropped per\n> > dependency links once the session is over.\n>\n> Extensions locate at pg_temp_* schemas are temporary objects IMO.\n> How do you think? Would you implement this functionality in future?\n>\n\nThat's the way things are now as far as I understand it, or do I\nmisunderstand your question?\n\n>\n> Hayato Kuroda\n> Fujitsu LIMITED\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Mon, Feb 18, 2019 at 6:40 AM Kuroda, Hayato <kuroda.hayato@jp.fujitsu.com> wrote:Dear Michael,\n\nI seem this patch is enough, but could you explain the reason \nyou drop initial proposal more detail?\nI'm not sure why extensions contained by temporary schemas are acceptable.Here's my objection.Everything a relocatable extension can create can be created normally in a temporary schema currently. This includes types, functions, etc.So I can create a type in a temporary schema and then create a table in a public schema using that type as a column. This behaves oddly (when I log out of my session the column gets implicitly dropped) but it works consistently. Adding special cases to extensions strikes me as adding more funny corners to the behavior of the db in this regard.Now there are times I could imagine using temporary schemas with extensions. This could include testing multiple versions of an extension so that multiple concurrent test runs don't see each other's versions. This could be done with normal schemas but the guarantees are not as strong regarding cleanup.\n\n> Anything depending on a temporary object will be dropped per\n> dependency links once the session is over.\n\nExtensions locate at pg_temp_* schemas are temporary objects IMO.\nHow do you think? Would you implement this functionality in future?That's the way things are now as far as I understand it, or do I misunderstand your question? \n\nHayato Kuroda\nFujitsu LIMITED\n\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Mon, 18 Feb 2019 10:51:47 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 13, 2019 at 12:08:50PM +0100, Chris Travers wrote:\n> > If the point is visibility in \\dx it seems to me we want to fix the \\dx\n> > query.\n>\n> Yes, I got to think a bit more about that case, and there are cases\n> where this actually works properly as this depends on the objects\n> defined in the extension. Fixing \\dx to not show up extensions\n> defined in temp schemas of other sessions is definitely a must in my\n> opinion, and I would rather drop the rest of the proposal for now. A\n> similar treatment is needed for \\dx+.\n\nI'd vote for accepting the extension creation in temporary schemas and\nfixing \\dx and \\dx+. However the error raised by creating extensions\nin temporary schema still looks strange to me. Since we don't search\nfunctions and operators defined in temporary schemas (which is stated\nby the doc) unless we use qualified function name we cannot create\nextensions in temporary schema whose functions refer theirs other\nfunctions. I'd like to fix it or to find a workaround but cannot come\nup with a good idea yet.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Mon, 18 Feb 2019 20:02:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 05:39:09AM +0000, Kuroda, Hayato wrote:\n> I seem this patch is enough, but could you explain the reason \n> you drop initial proposal more detail?\n> I'm not sure why extensions contained by temporary schemas are\n> acceptable.\n\nBecause there are cases where they actually work. We have some of\nthese in core.\n\n>> Anything depending on a temporary object will be dropped per\n>> dependency links once the session is over.\n> \n> Extensions locate at pg_temp_* schemas are temporary objects IMO.\n> How do you think? Would you implement this functionality in future?\n\nPer the game of dependencies, extensions located in a temporary schema\nwould get automatically dropped at session end.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 13:35:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 08:02:54PM +0900, Masahiko Sawada wrote:\n> I'd vote for accepting the extension creation in temporary schemas and\n> fixing \\dx and \\dx+.\n\nThanks.\n\n> However the error raised by creating extensions\n> in temporary schema still looks strange to me. Since we don't search\n> functions and operators defined in temporary schemas (which is stated\n> by the doc) unless we use qualified function name we cannot create\n> extensions in temporary schema whose functions refer theirs other\n> functions. I'd like to fix it or to find a workaround but cannot come\n> up with a good idea yet.\n\nAgreed. Getting a schema mismatch is kind of disappointing, and it\ndepends on the DDL used in the extension SQL script. I would suspect\nthat getting that addressed correctly may add quite some facility, for\nlittle gain. But I may be wrong, that's only the feeling coming from\na shiver in my back.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 13:38:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Feb 18, 2019 at 05:39:09AM +0000, Kuroda, Hayato wrote:\n>> I'm not sure why extensions contained by temporary schemas are\n>> acceptable.\n\n> Because there are cases where they actually work.\n\nMore to the point, it doesn't seem that hard to think of cases\nwhere this would be useful. PG extensions are very general\nthings. If you want to create a whole pile of temporary objects\nand do that repeatedly, wrapping them up into an extension is\na nice way to do that, nicer really than anything else we offer.\nSo I'd be sad if we decided to forbid this.\n\n> Per the game of dependencies, extensions located in a temporary schema\n> would get automatically dropped at session end.\n\nYeah, it doesn't seem like there's actually any missing functionality\nthere, at least not any that's specific to extensions.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 00:09:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Dear Michael, Chris and Tom,\n\n> Adding special cases to extensions strikes me as adding more\n> funny corners to the behavior of the db in this regard.\n\nI understand your arguments and its utility.\n\n> For most of extensions, this can randomly finish with strange error\n> messages, say that:\n> =# create extension file_fdw with schema pg_temp_3;\n> ERROR: 42883: function file_fdw_handler() does not exist\n> LOCATION: LookupFuncName, parse_func.c:2088\n\nI found that this strange error appears after making\ntemporary tables. \n\ntest=> CREATE TEMPORARY TABLE temp (id int);\nCREATE TABLE\ntest=> CREATE EXTENSION file_fdw WITH SCHEMA pg_temp_3;\nERROR: function file_fdw_handler() does not exist\n\nI would try to understand this problem for community and\nmy experience.\n\nBest Regards,\nHayato Kuroda\nFujitsu LIMITED\n\n\n\n\n",
"msg_date": "Thu, 28 Feb 2019 06:13:40 +0000",
"msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Hi\n\n> I found that this strange error appears after making\n> temporary tables.\n>\n> test=> CREATE TEMPORARY TABLE temp (id int);\n> CREATE TABLE\n> test=> CREATE EXTENSION file_fdw WITH SCHEMA pg_temp_3;\n> ERROR: function file_fdw_handler() does not exist\n>\n> I would try to understand this problem for community and\n> my experience.\n\nThis behavior seems as not related to extensions infrastructure:\n\npostgres=# CREATE TEMPORARY TABLE temp (id int);\nCREATE TABLE\npostgres=# set search_path to 'pg_temp_3';\nSET\npostgres=# create function foo() returns int as 'select 1' language sql;\nCREATE FUNCTION\npostgres=# select pg_temp_3.foo();\n foo \n-----\n 1\n(1 row)\n\npostgres=# select foo();\nERROR: function foo() does not exist\nLINE 1: select foo();\n ^\nHINT: No function matches the given name and argument types. You might need to add explicit type casts.\npostgres=# show search_path ;\n search_path \n-------------\n pg_temp_3\n(1 row)\n\nregards, Sergei\n\n",
"msg_date": "Thu, 28 Feb 2019 11:15:24 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n>> test=> CREATE EXTENSION file_fdw WITH SCHEMA pg_temp_3;\n>> ERROR: function file_fdw_handler() does not exist\n\n> This behavior seems as not related to extensions infrastructure:\n\nYeah, I think it's just because we won't search the pg_temp schema\nfor function or operator names, unless the calling SQL command\nexplicitly writes \"pg_temp.foo(...)\" or equivalent. That's an\nancient security decision, which we're unlikely to undo. It\ncertainly puts a crimp in the usefulness of putting extensions into\npg_temp, but I don't think it totally destroys the usefulness.\nYou could still use an extension to package, say, the definitions\nof a bunch of temp tables and views that you need to create often.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 10:13:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 10:13:17AM -0500, Tom Lane wrote:\n> Yeah, I think it's just because we won't search the pg_temp schema\n> for function or operator names, unless the calling SQL command\n> explicitly writes \"pg_temp.foo(...)\" or equivalent. That's an\n> ancient security decision, which we're unlikely to undo. It\n> certainly puts a crimp in the usefulness of putting extensions into\n> pg_temp, but I don't think it totally destroys the usefulness.\n> You could still use an extension to package, say, the definitions\n> of a bunch of temp tables and views that you need to create often.\n\nEven with that, it should still be possible to enforce search_path\nwithin the extension script to allow such objects to be created\ncorrectly, no? That would be a bit hacky, but for the purpose of\ntemp object handling it looks acceptable to live with when\ncreating an extension.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2019 11:43:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Feb 28, 2019 at 10:13:17AM -0500, Tom Lane wrote:\n>> Yeah, I think it's just because we won't search the pg_temp schema\n>> for function or operator names, unless the calling SQL command\n>> explicitly writes \"pg_temp.foo(...)\" or equivalent. That's an\n>> ancient security decision, which we're unlikely to undo. It\n>> certainly puts a crimp in the usefulness of putting extensions into\n>> pg_temp, but I don't think it totally destroys the usefulness.\n>> You could still use an extension to package, say, the definitions\n>> of a bunch of temp tables and views that you need to create often.\n\n> Even with that, it should still be possible to enforce search_path\n> within the extension script to allow such objects to be created\n> correctly, no? That would be a bit hacky, still for the purpose of\n> temp object handling that looks kind of enough to live with when\n> creating an extension.\n\nIf you're suggesting that we disable that security restriction\nduring extension creation, I really can't see how that'd be a\ngood thing ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 22:52:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 10:52:52PM -0500, Tom Lane wrote:\n> If you're suggesting that we disable that security restriction\n> during extension creation, I really can't see how that'd be a\n> good thing ...\n\nNo, I don't mean that. I was just wondering if someone can set\nsearch_path within the SQL script which includes the extension\ncontents to bypass the restriction and the error. They can always\nprefix such objects with pg_temp anyway if need be...\n--\nMichael",
"msg_date": "Fri, 1 Mar 2019 15:16:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Feb 28, 2019 at 10:52:52PM -0500, Tom Lane wrote:\n>> If you're suggesting that we disable that security restriction\n>> during extension creation, I really can't see how that'd be a\n>> good thing ...\n\n> No, I don't mean that. I was just wondering if someone can set\n> search_path within the SQL script which includes the extension\n> contents to bypass the restriction and the error. They can always\n> prefix such objects with pg_temp anyway if need be...\n\nYou'd have to look in namespace.c to be sure, but I *think* that\nwe don't consult the temp schema during function/operator lookup\neven if it's explicitly listed in search_path.\n\nIt might be possible for an extension script to get around this with\ncode like, say,\n\nCREATE TRIGGER ... EXECUTE PROCEDURE @extschema@.myfunc();\n\nalthough you'd have to give up relocatability of the extension\nto use @extschema@. (Maybe it was a bad idea to not provide\nthat symbol in relocatable extensions? A usage like this doesn't\nprevent the extension from being relocated later.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 11:35:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI ran make check-world and everything passed.\r\n\r\nI tried installing a test extension into a temp schema. I found this was remarkably difficult to do because pg_temp did not work (I had to create a temporary table and then locate the actual table it was created in). While that might also be a bug, it is not in the scope of this patch, so I am mostly noting it as future work.\r\n\r\nAfter creating the extension I did as follows:\r\n\\dx in the current session shows the extension\r\n\\dx in a stock psql shows the extension in a separate session\r\n\\dx with a patched psql in a separate session does not show the extension.\r\n\r\nIn terms of the scope of this patch, I think this correctly and fully solves the problem at hand.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 05 Mar 2019 12:47:54 +0000",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Tue, Mar 05, 2019 at 12:47:54PM +0000, Chris Travers wrote:\n> I tried installing a test extension into a temp schema. I found\n> this was remarkably difficult to do because pg_temp did not work (I\n> had to create a temporary table and then locate the actual table it\n> was created in). While that might also be a bug it is not in the\n> scope of this patch so mostly noting in terms of future work.\n\npgcrypto works in this case.\n\n> After creating the extension I did as follows:\n> \\dx in the current session shows the extension\n> \\dx in a stock psql shows the extension in a separate session\n> \\dx with a patched psql in a separate session does not show the\n> extension.\n> \n> In terms of the scope of this patch, I think this correctly and\n> fully solves the problem at hand. \n\nI was just looking at this patch this morning with fresh eyes, and I\nthink that I have found one argument to *not* apply it. Imagine the\nfollowing in one session:\n=# create extension pgcrypto with schema pg_temp_3;\nCREATE EXTENSION\n=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n----------+---------+------------+------------------------------\n pgcrypto | 1.3 | pg_temp_3 | cryptographic functions\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(2 rows)\n\nThat's all good, we see that the session which created this extension\nhas it listed. Now let's use in parallel a second session:\n=# create extension pgcrypto with schema pg_temp_4;\nERROR: 42710: extension \"pgcrypto\" already exists\nLOCATION: CreateExtension, extension.c:1664\n=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n----------+---------+------------+------------------------------\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(1 row)\n\nThis is actually also good, because the extension of the temporary\nschema of the first session does not show up. 
Now I think that this\ncan actually bring some confusion to the user, because the extension\nis not listed via \\dx, but trying to create it with a different\nschema fails.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 6 Mar 2019 11:19:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Wed, Mar 6, 2019 at 3:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 05, 2019 at 12:47:54PM +0000, Chris Travers wrote:\n> > I tried installing a test extension into a temp schema. I found\n> > this was remarkably difficult to do because pg_temp did not work (I\n> > had to create a temporary table and then locate the actual table it\n> > was created in). While that might also be a bug it is not in the\n> > scope of this patch so mostly noting in terms of future work.\n>\n> pgcrypto works in this case.\n>\n\nSo the issue here is in finding the pg temp schema to install into. The\nextension is less of an issue.\n\nThe point of my note above is that there are other sharp corners that have\nto be rounded off in order to make this work really well.\n\n>\n> > After creating the extension I did as follows:\n> > \\dx in the current session shows the extension\n> > \\dx in a stock psql shows the extension in a separate session\n> > \\dx with a patched psql in a separate session does not show the\n> > extension.\n> >\n> > In terms of the scope of this patch, I think this correctly and\n> > fully solves the problem at hand.\n>\n> I was just looking at this patch this morning with fresh eyes, and I\n> think that I have found one argument to *not* apply it. Imagine the\n> following in one session:\n> =# create extension pgcrypto with schema pg_temp_3;\n> CREATE EXTENSION\n> =# \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> ----------+---------+------------+------------------------------\n> pgcrypto | 1.3 | pg_temp_3 | cryptographic functions\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (2 rows)\n>\n> That's all good, we see that the session which created this extension\n> has it listed. 
Now let's use in parallel a second session:\n> =# create extension pgcrypto with schema pg_temp_4;\n> ERROR: 42710: extension \"pgcrypto\" already exists\n> LOCATION: CreateExtension, extension.c:1664\n> =# \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> ----------+---------+------------+------------------------------\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (1 row)\n>\n> This is actually also good, because the extension of the temporary\n> schema of the first session does not show up. Now I think that this\n> can bring some confusion to the user actually, because the extension\n> becomes not listed via \\dx, but trying to create it with a different\n> schema fails.\n>\n\nOk so at present I see three distinct issues here, where maybe three\ndifferent patches over time might be needed.\n\nThe issues are:\n\n1. create extension pgcrypto with schema pg_temp; fails because there is\nno schema actually named pg_temp.\n2. If you work around this, the \\dx shows temporary extensions in other\nsessions. This is probably the most minor issue of the three.\n3. You cannot create the same extension in two different schemas.\n\nMy expectation is that this may be a situation where other sharp corners\nare discovered over time. My experience is that where things are difficult\nto do in PostgreSQL and hence not common, these sharp corners exist\n(domains vs constraints in table-based composite types for example,\nmultiple inheritance being another).\n\nIt is much easier to review patches if they make small, well defined\nchanges to the code. I don't really have an opinion on whether this should\nbe applied as is, or moved to next commitfest in the hope we can fix issue\n#3 there too. But I would recommend not fixing the pg_temp naming (#1\nabove) until at least the other two are fixed. There is no sense in making\nthis easy yet. 
But I would prefer to review or write patches that address\nthese issues one at a time rather than try to get them all reviewed and\nincluded together.\n\nBut I don't think there is likely to be a lot of user confusion here. It\nis hard enough to install extensions in temporary schemas that those who do\ncan be expected to know more than \\dx commands.\n\n>\n> Thoughts?\n> --\n> Michael\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Wed, Mar 6, 2019 at 3:19 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Mar 05, 2019 at 12:47:54PM +0000, Chris Travers wrote:\n> I tried installing a test extension into a temp schema. I found\n> this was remarkably difficult to do because pg_temp did not work (I\n> had to create a temporary table and then locate the actual table it\n> was created in). While that might also be a bug it is not in the\n> scope of this patch so mostly noting in terms of future work.\n\npgcrypto works in this case.So the issue here is in finding the pg temp schema to install into. The extension is less of an issue.The point of my note above is that there are other sharp corners that have to be rounded off in order to make this work really well. \n\n> After creating the extension I did as follows:\n> \\dx in the current session shows the extension\n> \\dx in a stock psql shows the extension in a separate session\n> \\dx with a patched psql in a separate session does not show the\n> extension.\n> \n> In terms of the scope of this patch, I think this correctly and\n> fully solves the problem at hand. \n\nI was just looking at this patch this morning with fresh eyes, and I\nthink that I have found one argument to *not* apply it. 
Imagine the\nfollowing in one session:\n=# create extension pgcrypto with schema pg_temp_3;\nCREATE EXTENSION\n=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n----------+---------+------------+------------------------------\n pgcrypto | 1.3 | pg_temp_3 | cryptographic functions\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(2 rows)\n\nThat's all good, we see that the session which created this extension\nhas it listed. Now let's use in parallel a second session:\n=# create extension pgcrypto with schema pg_temp_4;\nERROR: 42710: extension \"pgcrypto\" already exists\nLOCATION: CreateExtension, extension.c:1664\n=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n----------+---------+------------+------------------------------\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(1 row)\n\nThis is actually also good, because the extension of the temporary\nschema of the first session does not show up. Now I think that this\ncan bring some confusion to the user actually, because the extension\nbecomes not listed via \\dx, but trying to create it with a different\nschema fails.Ok so at present I see three distinct issues here, where maybe three different patches over time might be needed.The issues are:1. create extension pgcrypto with schema pg_temp; fails because there is no schema actually named pg_temp.2. If you work around this, the \\dx shows temporary extensions in other sessions. This is probably the most minor issue of the three.3. You cannot create the same extension in two different schemas.My expectation is that this may be a situation where other sharp corners are discovered over time. My experience is that where things are difficult to do in PostgreSQL and hence not common, these sharp corners exist (domains vs constraints in table-based composite types for example, multiple inheritance being another). 
It is much easier to review patches if they make small, well defined changes to the code. I don't really have an opinion on whether this should be applied as is, or moved to next commitfest in the hope we can fix issue #3 there too. But I would recommend not fixing the pg_temp naming (#1 above) until at least the other two are fixed. There is no sense in making this easy yet. But I would prefer to review or write patches that address these issues one at a time rather than try to get them all reviewed and included together.But I don't think there is likely to be a lot of user confusion here. It is hard enough to install extensions in temporary schemas that those who do can be expected to know more than \\dx commands.\n\nThoughts?\n--\nMichael\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Wed, 6 Mar 2019 09:33:55 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Wed, Mar 6, 2019 at 9:33 AM Chris Travers <chris.travers@adjust.com>\nwrote:\n\n>\n>\n>> Thoughts?\n>>\n>\nTo re-iterate, my experience with PostgreSQL is that people doing\nparticularly exotic work in PostgreSQL can expect to hit equally exotic\nbugs. I have a list that I will not bore people with here.\n\nI think there is a general consensus here that creating extensions in temp\nschemas is something we would like to support. So I think we should fix\nthese bugs before we make it easy to do. And this patch addresses one of\nthose.\n\n--\n>> Michael\n>>\n>\n>\n> --\n> Best Regards,\n> Chris Travers\n> Head of Database\n>\n> Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\n> Saarbrücker Straße 37a, 10405 Berlin\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Wed, Mar 6, 2019 at 9:33 AM Chris Travers <chris.travers@adjust.com> wrote:\nThoughts?To re-iterate, my experience with PostgreSQL is that people doing particularly exotic work in PostgreSQL can expect to hit equally exotic bugs. I have a list that I will not bore people with here.I think there is a general consensus here that creating extensions in temp schemas is something we would like to support. So I think we should fix these bugs before we make it easy to do. And this patch addresses one of those. \n--\nMichael\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Wed, 6 Mar 2019 09:42:50 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
},
{
"msg_contents": "On Wed, Mar 06, 2019 at 09:33:55AM +0100, Chris Travers wrote:\n> Ok so at present I see three distinct issues here, where maybe three\n> different patches over time might be needed.\n> \n> The issues are:\n> \n> 1. create extension pgcrypto with schema pg_temp; fails because there is\n> no schema actually named pg_temp.\n\nYes, I agree that being able to accept pg_temp as an alias for the\ntemporary schema for extensions would be kind of nice. Perhaps one\nreason why this has not actually happened is that our user base does\nnot really have use cases for it though.\n\n> 2. If you work around this, the \\dx shows temporary extensions in other\n> sessions. This is probably the most minor issue of the three.\n> 3. You cannot create the same extension in two different schemas.\n\nI would like to think that it should be possible to create the same\nextension linked to a temporary schema in multiple sessions in\nparallel, as much as it is possible to create the same extension\nacross multiple schemas. Both are actually linked as temp schemas\nbased on connection slots. This would require some changes in the way\nconstraints are defined in catalogs for extensions. Perhaps there is\nsimply no demand for it, I don't know.\n\n> But I don't think there is likely to be a lot of user confusion here. It\n> is hard enough to install extensions in temporary schemas that those who do\n> can be expected to know more than \\dx commands.\n\nThe patch as it stands does not actually solve the root problem and\nmakes things a bit confusing, so I am marking it as returned with\nfeedback. Working on this set of problems may be interesting, though\nthe effort necessary to make that work may not be worth the actual user\nbenefits.\n--\nMichael",
"msg_date": "Thu, 7 Mar 2019 15:20:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Prevent extension creation in temporary schemas"
}
] |
[
{
"msg_contents": "Hi,\n\nLogical replication enables us to replicate data changes to a different\nmajor version of PostgreSQL, as the doc says[1]. However, the current\nlogical replication can work fine only if replicating to a newer major\nversion of PostgreSQL, such as from 10 to 11. Regarding using logical\nreplication with an older major version, say sending from 11 to 10, it\nwill stop when a subscriber receives a truncate change because it's\nnot supported at PostgreSQL 10. I think there are use cases for\nusing logical replication with a subscriber of an older version of\nPostgreSQL, but I'm not sure we should support it.\n\nOf course, in such a case we can set the publication with publish =\n'insert, update, delete' to not send truncate changes, but it requires\nusers to recognize the feature differences between major versions, and\nin the future it will get more complex. So I think it would be better\nfor this to be configured automatically by PostgreSQL.\n\nTo fix it, we can make subscribers send their supported message types to\nthe publisher at startup time so that the publisher doesn't send\nunsupported message types to the subscriber. Or, as another idea, we\ncan make subscribers ignore unsupported logical replication message\ntypes instead of raising an error. Feedback is very welcome.\n\n[1] https://www.postgresql.org/docs/devel/logical-replication.html\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Mon, 7 Jan 2019 17:00:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using logical replication with older version subscribers"
},
{
"msg_contents": "On Mon, Jan 7, 2019 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> Hi,\n>\n> Logical replication enables us to replicate data changes to different\n> major version PostgreSQL as the doc says[1]. However the current\n> logical replication can work fine only if replicating to a newer major\n> version PostgreSQL such as from 10 to 11. Regarding using logical\n> replication with older major version, say sending from 11 to 10, it\n> will stop when a subscriber receives a truncate change because it's\n> not supported at PostgreSQL 10. I think there are use cases where\n> using logical replication with a subscriber of an older version\n> PostgreSQL but I'm not sure we should support it.\n>\n> Of course in such case we can set the publication with publish =\n> 'insert, update, delete' to not send truncate changes but it requres\n> users to recognize the feature differences between major vesions and\n> in the future it will get more complex. So I think it would be better\n> to be configured autometically by PostgreSQL.\n>\n> To fix it we can make subscribers send its supporting message types to\n> the publisher at a startup time so that the publisher doesn't send\n> unsupported message types on the subscriber. Or as an another idea, we\n> can make subscribers ignore unsupported logical replication message\n> types instead of raising an error. Feedback is very welcome.\n>\n> [1] https://www.postgresql.org/docs/devel/logical-replication.html\n\n\nHow would that work in practice?\n\nIf an 11 server is sent a message saying \"client does not support\ntruncate\", and immediately generates an error, then you can no longer\nreplicate even if you turn off truncate. 
And if it delays it until the\nactual replication of the item, then you just get the error on the primary\ninstead of the standby?\n\nI assume you are not suggesting a publication with truncation enabled\nshould just ignore replicating truncation if the downstream server doesn't\nsupport it? Because if that's the suggestion, then a strong -1 from me on\nthat.\n\nAnd definitely -1 for having a subscriber ignore messages it doesn't know\nabout. That's setting oneself up for getting invalid data on the\nsubscriber, because it skipped something that the publisher expected to be\ndone.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jan 7, 2019 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:Hi,\n\nLogical replication enables us to replicate data changes to different\nmajor version PostgreSQL as the doc says[1]. However the current\nlogical replication can work fine only if replicating to a newer major\nversion PostgreSQL such as from 10 to 11. Regarding using logical\nreplication with older major version, say sending from 11 to 10, it\nwill stop when a subscriber receives a truncate change because it's\nnot supported at PostgreSQL 10. I think there are use cases where\nusing logical replication with a subscriber of an older version\nPostgreSQL but I'm not sure we should support it.\n\nOf course in such case we can set the publication with publish =\n'insert, update, delete' to not send truncate changes but it requres\nusers to recognize the feature differences between major vesions and\nin the future it will get more complex. So I think it would be better\nto be configured autometically by PostgreSQL.\n\nTo fix it we can make subscribers send its supporting message types to\nthe publisher at a startup time so that the publisher doesn't send\nunsupported message types on the subscriber. 
Or as an another idea, we\ncan make subscribers ignore unsupported logical replication message\ntypes instead of raising an error. Feedback is very welcome.\n\n[1] https://www.postgresql.org/docs/devel/logical-replication.htmlHow would that work in practice?If an 11 server is sent a message saying \"client does not support truncate\", and immediately generates an error, then you can no longer replicate even if you turn off truncate. And if it delays it until the actual replication of the item, then you just get the error on the primary instead of the standby?I assume you are not suggesting a publication with truncation enabled should just ignore replicating truncation if the downstream server doesn't support it? Because if that's the suggestion, then a strong -1 from me on that. And definitely -1 for having a subscriber ignore messages it doesn't know about. That's setting oneself up for getting invalid data on the subscriber, because it skipped something that the publisher expected to be done.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 7 Jan 2019 10:54:16 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Using logical replication with older version subscribers"
},
{
"msg_contents": "On 07/01/2019 10:54, Magnus Hagander wrote:\n> I assume you are not suggesting a publication with truncation enabled\n> should just ignore replicating truncation if the downstream server\n> doesn't support it? Because if that's the suggestion, then a strong -1\n> from me on that. \n\nYes, that's the reason why we intentionally left it as it is now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 7 Jan 2019 13:48:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Using logical replication with older version subscribers"
},
{
"msg_contents": "On Mon, Jan 7, 2019 at 6:54 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Mon, Jan 7, 2019 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> Logical replication enables us to replicate data changes to different\n>> major version PostgreSQL as the doc says[1]. However the current\n>> logical replication can work fine only if replicating to a newer major\n>> version PostgreSQL such as from 10 to 11. Regarding using logical\n>> replication with older major version, say sending from 11 to 10, it\n>> will stop when a subscriber receives a truncate change because it's\n>> not supported at PostgreSQL 10. I think there are use cases where\n>> using logical replication with a subscriber of an older version\n>> PostgreSQL but I'm not sure we should support it.\n>>\n>> Of course in such case we can set the publication with publish =\n>> 'insert, update, delete' to not send truncate changes but it requres\n>> users to recognize the feature differences between major vesions and\n>> in the future it will get more complex. So I think it would be better\n>> to be configured autometically by PostgreSQL.\n>>\n>> To fix it we can make subscribers send its supporting message types to\n>> the publisher at a startup time so that the publisher doesn't send\n>> unsupported message types on the subscriber. Or as an another idea, we\n>> can make subscribers ignore unsupported logical replication message\n>> types instead of raising an error. Feedback is very welcome.\n>>\n>> [1] https://www.postgresql.org/docs/devel/logical-replication.html\n>\n>\n> How would that work in practice?\n>\n> If an 11 server is sent a message saying \"client does not support truncate\", and immediately generates an error, then you can no longer replicate even if you turn off truncate. 
And if it delays it until the actual replication of the item, then you just get the error on the primary instead of the standby?\n>\n> I assume you are not suggesting a publication with truncation enabled should just ignore replicating truncation if the downstream server doesn't support it? Because if that's the suggestion, then a strong -1 from me on that.\n>\n\nI'm thinking that we can make the pgoutput plugin recognize that\nthe downstream server doesn't support it and not send it. For example,\neven if we create a publication with publish = 'truncate' we send\nnothing, due to the pgoutput plugin checking the supported message types, if\nthe downstream server is a PostgreSQL server and its version is older\nthan 10.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Mon, 7 Jan 2019 23:36:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using logical replication with older version subscribers"
},
{
"msg_contents": "On Mon, Jan 7, 2019 at 3:37 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Jan 7, 2019 at 6:54 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Mon, Jan 7, 2019 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Logical replication enables us to replicate data changes to different\n> >> major version PostgreSQL as the doc says[1]. However the current\n> >> logical replication can work fine only if replicating to a newer major\n> >> version PostgreSQL such as from 10 to 11. Regarding using logical\n> >> replication with older major version, say sending from 11 to 10, it\n> >> will stop when a subscriber receives a truncate change because it's\n> >> not supported at PostgreSQL 10. I think there are use cases where\n> >> using logical replication with a subscriber of an older version\n> >> PostgreSQL but I'm not sure we should support it.\n> >>\n> >> Of course in such case we can set the publication with publish =\n> >> 'insert, update, delete' to not send truncate changes but it requres\n> >> users to recognize the feature differences between major vesions and\n> >> in the future it will get more complex. So I think it would be better\n> >> to be configured autometically by PostgreSQL.\n> >>\n> >> To fix it we can make subscribers send its supporting message types to\n> >> the publisher at a startup time so that the publisher doesn't send\n> >> unsupported message types on the subscriber. Or as an another idea, we\n> >> can make subscribers ignore unsupported logical replication message\n> >> types instead of raising an error. Feedback is very welcome.\n> >>\n> >> [1] https://www.postgresql.org/docs/devel/logical-replication.html\n> >\n> >\n> > How would that work in practice?\n> >\n> > If an 11 server is sent a message saying \"client does not support\n> truncate\", and immediately generates an error, then you can no longer\n> replicate even if you turn off truncate. 
And if it delays it until the\n> actual replication of the item, then you just get the error on the primary\n> instead of the standby?\n> >\n> > I assume you are not suggesting a publication with truncation enabled\n> should just ignore replicating truncation if the downstream server doesn't\n> support it? Because if that's the suggestion, then a strong -1 from me on\n> that.\n> >\n>\n> I'm thinking that we can make the pgoutput plugin recognize that\n> the downstream server doesn't support it and not send it. For example,\n> even if we create a publication with publish = 'truncate' we send\n> nothing due to checking supported message types by pgoutput plugin if\n> the downstream server is PostgreSQL server and its version is older\n> than 10.\n>\n\nThat's the idea I definitely say a strong -1 to.\n\nIgnoring the truncate message isn't going to make it work. It's just going\nto mean that the downstream data is incorrect vs what the publisher\nthought. The correct solution here is to not publish the truncate, which we\nalready have. I can see the point in changing it so the error message\nbecomes more obvious (already when the subscriber connects, and not a\nrandom time later when the first truncate replicates), but *silently*\nignoring it seems like a terrible choice.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 7 Jan 2019 17:12:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Using logical replication with older version subscribers"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 1:12 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Mon, Jan 7, 2019 at 3:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Mon, Jan 7, 2019 at 6:54 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> >\n>> > On Mon, Jan 7, 2019 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> Logical replication enables us to replicate data changes to different\n>> >> major version PostgreSQL as the doc says[1]. However the current\n>> >> logical replication can work fine only if replicating to a newer major\n>> >> version PostgreSQL such as from 10 to 11. Regarding using logical\n>> >> replication with older major version, say sending from 11 to 10, it\n>> >> will stop when a subscriber receives a truncate change because it's\n>> >> not supported at PostgreSQL 10. I think there are use cases where\n>> >> using logical replication with a subscriber of an older version\n>> >> PostgreSQL but I'm not sure we should support it.\n>> >>\n>> >> Of course in such case we can set the publication with publish =\n>> >> 'insert, update, delete' to not send truncate changes but it requires\n>> >> users to recognize the feature differences between major versions and\n>> >> in the future it will get more complex. So I think it would be better\n>> >> to be configured automatically by PostgreSQL.\n>> >>\n>> >> To fix it we can make subscribers send its supporting message types to\n>> >> the publisher at a startup time so that the publisher doesn't send\n>> >> unsupported message types on the subscriber. Or as another idea, we\n>> >> can make subscribers ignore unsupported logical replication message\n>> >> types instead of raising an error. Feedback is very welcome.\n>> >>\n>> >> [1] https://www.postgresql.org/docs/devel/logical-replication.html\n>> >\n>> >\n>> > How would that work in practice?\n>> >\n>> > If an 11 server is sent a message saying \"client does not support truncate\", and immediately generates an error, then you can no longer replicate even if you turn off truncate. And if it delays it until the actual replication of the item, then you just get the error on the primary instead of the standby?\n>> >\n>> > I assume you are not suggesting a publication with truncation enabled should just ignore replicating truncation if the downstream server doesn't support it? Because if that's the suggestion, then a strong -1 from me on that.\n>> >\n>>\n>> I'm thinking that we can make the pgoutput plugin recognize that\n>> the downstream server doesn't support it and not send it. For example,\n>> even if we create a publication with publish = 'truncate' we send\n>> nothing due to checking supported message types by pgoutput plugin if\n>> the downstream server is PostgreSQL server and its version is older\n>> than 10.\n>\n>\n> That's the idea I definitely say a strong -1 to.\n>\n> Ignoring the truncate message isn't going to make it work. It's just going to mean that the downstream data is incorrect vs what the publisher thought. The correct solution here is to not publish the truncate, which we already have. I can see the point in changing it so the error message becomes more obvious (already when the subscriber connects, and not a random time later when the first truncate replicates), but *silently* ignoring it seems like a terrible choice.\n\nUnderstood, that makes more sense. And raising the error at\nconnection time seems good to me. Thank you!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Wed, 9 Jan 2019 10:14:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using logical replication with older version subscribers"
}
] |
[
{
"msg_contents": "In a message of Thursday, 3 January 2019 17:15:08 MSK, Alvaro Herrera \nwrote:\n\n> I would have liked to get a StaticAssert in the definition, but I don't\n> think it's possible. A standard Assert() should be possible, though.\n\nAsserts are a cool thing. I found some unexpected stuff. \n\nThe parallel_workers option is claimed to be a heap-only option.\n\nBut in src/backend/optimizer/util/plancat.c in get_relation_info \nRelationGetParallelWorkers is being called for both heap and toast tables (and \nnot only for them).\n\nBecause usually there are no reloptions for toast, it returns the default -1 \nvalue. If some reloptions were set for a toast table, RelationGetParallelWorkers \nwill return a value from uninitialized memory.\n\nThis will happen because the StdRdOptions structure is filled with values in \nthe fillRelOptions function. And fillRelOptions first iterates on the options list, and \nthen on elems. So if an option is not in the \"options list\" then its value will not \nbe set.\n\nAnd the options list comes from parseRelOptions where only options with the proper \nrelation kind are selected from the [type]RelOpts[] arrays. So parallel_workers \nwill not be added to the options in the case of a toast table because it is claimed \nto be applicable only to RELOPT_KIND_HEAP.\n\nThus if a toast table has some options set, and rd_options in the relation is not NULL, \nget_relation_info will use a value from uninitialized memory as the number of \nparallel workers.\n\nTo reproduce this Assert you can change the RelationGetParallelWorkers macro in\nsrc/include/utils/rel.h\n\n#define IsHeapRelation(relation) \\ \n (relation->rd_rel->relkind == RELKIND_RELATION || \\ \n relation->rd_rel->relkind == RELKIND_MATVIEW ) \n\n#define RelationGetParallelWorkers(relation, defaultpw) \\ \n (AssertMacro(IsHeapRelation(relation)), \\ \n ((relation)->rd_options ? \\ \n ((StdRdOptions *) (relation)->rd_options)->parallel_workers : \\ \n (defaultpw))) \n\nand see how it asserts. 
It will happen just in the database initialisation phase of \nmake check.\n\nIf you add relation->rd_rel->relkind == RELKIND_TOASTVALUE to the Assertion, \nit will Assert in cases when get_relation_info is called for a partitioned \ntable. This case is not a problem for now, because a partitioned table has no \noptions and you will always have NULL in rd_options and get the default value. But \nit will become a problem when somebody adds some options, especially it would \nbe a problem if these options do not use the StdRdOptions structure.\nAlso it is called for foreign tables and sequences. \n\nSo my suggestion for a hotfix is to replace \nrel->rel_parallel_workers = RelationGetParallelWorkers(relation,-1);\nin get_relation_info with the following code\n\n    switch (relation->rd_rel->relkind) \n    { \n       case RELKIND_RELATION: \n       case RELKIND_MATVIEW: \n          rel->rel_parallel_workers = \n                        RelationGetParallelWorkers(relation,-1); \n          break; \n       case RELKIND_TOASTVALUE: \n       case RELKIND_PARTITIONED_TABLE: \n       case RELKIND_SEQUENCE: \n       case RELKIND_FOREIGN_TABLE: \n          rel->rel_parallel_workers = -1; \n          break; \n       default: \n          /* Other relkinds are not supported */ \n          Assert(false); \n    }\n\nBut I am not familiar with the get_relation_info and parallel_workers specifics. So \nI suspect the real fix may be quite different.\n\nAlso I would suggest fixing it in all supported stable branches that have \nthe parallel_workers option, because this bug may give something unexpected when \nsome toast options are set.\n\n\n",
"msg_date": "Mon, 07 Jan 2019 15:04:08 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Problem with parallel_workers option (Was Re: [PATCH] get rid of\n StdRdOptions,\n use individual binary reloptions representation for each relation kind\n instead)"
},
{
"msg_contents": "On 2019-Jan-07, Nikolay Shaplov wrote:\n\n> Asserts are a cool thing. I found some unexpected stuff. \n> \n> The parallel_workers option is claimed to be a heap-only option.\n> \n> But in src/backend/optimizer/util/plancat.c in get_relation_info \n> RelationGetParallelWorkers is being called for both heap and toast tables (and \n> not only for them).\n\nUgh.\n\nI wonder if it makes sense for a toast table to have parallel_workers.\nI suppose it's not useful, since a toast table is not supposed to be\nscanned in bulk, only accessed through the tuptoaster interface. But on\nthe other hand, you *can* do \"select * from pg_toast_NNN\", and in almost\nall respects a toast table is just like a regular heap table.\n\n> Because usually there are no reloptions for toast, it returns the default -1 \n> value. If some reloptions were set for a toast table, RelationGetParallelWorkers \n> will return a value from uninitialized memory.\n\nWell, if it returns a negative number or zero, the rest of the server\nshould behave identically to it returning the -1 that was intended. And\nif it returns a positive number, the worst that will happen is that a\nPath structure somewhere will have a positive number of workers, but\nsince queries on toast tables are not planned in the regular way, most\nlikely those Paths will never exist anyway.\n\nSo while I agree that this is a bug, it seems pretty benign.\n\nUnless I overlook something.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 7 Jan 2019 13:56:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with parallel_workers option (Was Re: [PATCH] get rid of\n StdRdOptions, use individual binary reloptions representation for each\n relation kind instead)"
},
{
"msg_contents": "In a message of Monday, 7 January 2019 13:56:48 MSK, Alvaro \nHerrera wrote:\n\n> > Asserts are a cool thing. I found some unexpected stuff.\n> > \n> > The parallel_workers option is claimed to be a heap-only option.\n> > \n> > But in src/backend/optimizer/util/plancat.c in get_relation_info\n> > RelationGetParallelWorkers is being called for both heap and toast tables\n> > (and not only for them).\n> \n> Ugh.\n> \n> I wonder if it makes sense for a toast table to have parallel_workers.\n> I suppose it's not useful, since a toast table is not supposed to be\n> scanned in bulk, only accessed through the tuptoaster interface. But on\n> the other hand, you *can* do \"select * from pg_toast_NNN\", and in almost\n> all respects a toast table is just like a regular heap table.\n \nIf parallel_workers is not intended to be used anywhere except heap and \nmatview, then it may be better to make the fix more relaxed. Like \n\nif (relation->rd_rel->relkind == RELKIND_RELATION || \n       relation->rd_rel->relkind == RELKIND_MATVIEW ) \n    rel->rel_parallel_workers = \n                        RelationGetParallelWorkers(relation,-1);\nelse\n    rel->rel_parallel_workers = -1;\n\n> > Because usually there are no reloptions for toast, it returns the default -1\n> > value. If some reloptions were set for a toast table,\n> > RelationGetParallelWorkers will return a value from uninitialized memory.\n> \n> Well, if it returns a negative number or zero, the rest of the server\n> should behave identically to it returning the -1 that was intended. And\n> if it returns a positive number, the worst that will happen is that a\n> Path structure somewhere will have a positive number of workers, but\n> since queries on toast tables are not planned in the regular way, most\n> likely those Paths will never exist anyway.\n> \n> So while I agree that this is a bug, it seems pretty benign.\nIt is mild until somebody introduces PartitionedRelOptions. 
Then \nPartitionedRelOptions * will be converted to StdRdOptions * and we will get a \nsegmentation fault...\n\nSo maybe there is no need for a back-branch fix, but better to fix it now :-)\nMaybe with the patch for StdRdOptions removal.\n\n\n",
"msg_date": "Mon, 07 Jan 2019 22:30:08 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: Problem with parallel_workers option (Was Re: [PATCH] get rid of\n StdRdOptions,\n use individual binary reloptions representation for each relation kind\n instead)"
}
] |
[
{
"msg_contents": "Greetings -hackers,\n\nGoogle Summer of Code is back for 2019! They have a similar set of\nrequirements, expectations, and timeline as last year.\n\nNow is the time to be working to get together a set of projects we'd\nlike to have GSoC students work on over the summer. Similar to last\nyear, we need to have a good set of projects for students to choose from\nin advance of the deadline for mentoring organizations.\n\nThe deadline for Mentoring organizations to apply is: February 6.\n\nThe list of accepted organizations will be published around February 26.\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone\nahead and created one (copying last year's):\n\nhttps://wiki.postgresql.org/wiki/GSoC_2019\n\nGoogle discusses what makes a good \"Ideas\" list here:\n\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2018' to indicate they were pulled from\nlast year. If the project from last year is still relevant, please\nupdate it to be '2019' and make sure to update all of the information\n(in particular, make sure to list yourself as a mentor and remove the\nother mentors, as appropriate).\n\nNew entries are certainly welcome and encouraged, just be sure to note\nthem as '2019' when you add them.\n\nProjects from last year which were worked on but have significant\nfollow-on work to be completed are absolutely welcome as well- simply\nupdate the description appropriately and mark it as being for '2019'.\n\nWhen we get closer to actually submitting our application, I'll clean\nout the '2018' entries that didn't get any updates.\n\nAs a reminder, each idea on the page should be in the format that the\nother entries are in and should include:\n\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project (remember, these are\n 12-week projects)\n- Description of programming skills needed and estimation of the\n difficulty level\n- List of potential 
mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\"\nproject and that anything which would be considered \"PostgreSQL Family\"\nper the News/Announce policy [2] is likely to be acceptable as a\nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on barman,\npgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code\n(pgeu-website), pgAdmin4, PostgresXL, pgbouncer, Citus, pldebugger, the\nPG RPMs (pgrpms), the JDBC driver, the ODBC driver, or any of the many\nother PG Family projects, please feel free to add a project for\nconsideration! If we get quite a few, we can organize the page further\nbased on which project or maybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nStephen\n\n[1]: https://developers.google.com/open-source/gsoc/timeline\n[2]: https://wiki.postgresql.org/wiki/NewsEventsApproval",
"msg_date": "Mon, 7 Jan 2019 17:06:20 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "GSoC 2019"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 1:06 AM Stephen Frost <sfrost@snowman.net> wrote:\n> All the entries are marked with '2018' to indicate they were pulled from\n> last year. If the project from last year is still relevant, please\n> update it to be '2019' and make sure to update all of the information\n> (in particular, make sure to list yourself as a mentor and remove the\n> other mentors, as appropriate).\n\nI can confirm that I'm ready to mentor the projects where I'm listed as a\npotential mentor.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Tue, 8 Jan 2019 20:57:37 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2019"
},
{
"msg_contents": "Hi!\n\n> On 8 Jan 2019, at 22:57, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> \n> On Tue, Jan 8, 2019 at 1:06 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> All the entries are marked with '2018' to indicate they were pulled from\n>> last year. If the project from last year is still relevant, please\n>> update it to be '2019' and make sure to update all of the information\n>> (in particular, make sure to list yourself as a mentor and remove the\n>> other mentors, as appropriate).\n> \n> I can confirm that I'm ready to mentor the projects where I'm listed as a\n> potential mentor.\n\nI've updated the year on the GiST API and amcheck projects and removed the mentors (except Alexander). Please put your names back if you still wish to mentor these projects.\n\nAlso, we are planning to add a new WAL-G project; Vladimir Leskov is now combining multiple WAL-G tasks into a single project. Vladimir did the 2018 WAL-G project during a Yandex internship, so I'll remove that project from the page.\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Thu, 10 Jan 2019 13:20:39 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2019"
}
] |
[
{
"msg_contents": "Hi,\n\nOver in [1] we're discussing the development of the pluggable storage\npatchset, which allows different ways of storing table data.\n\nOne thing I'd like to discuss with a wider audience than the\nimplementation details is psql and pg_dump handling of table access\nmethods.\n\nCurrently the patchset neither dumps nor displays table access\nmethods. That's clearly not right.\n\nThe reason for that however is not that it's hard to dump/display\ncode-wise, but that to me the correct behaviour is not obvious.\n\nThe reason to make table storage pluggable is after all that the table\naccess method can be changed, and part of developing new table access\nmethods is being able to run the regression tests.\n\nA patch at [2] adds display of a table's access method to \\d+ - but that\nmeans that running the tests with a different default table access\nmethod (e.g. using PGOPTIONS='-c default_table_access_method=...')\nthere'll be a significant number of test failures, even though the test\nresults did not meaningfully differ.\n\nSimilarly, if pg_dump starts to dump table access methods either\nunconditionally, or for all non-heap AMs, the pg_dump tests fail due to\nunimportant differences.\n\nA third issue, less important in my opinion, is that specifying the\ntable access method means that it's harder to dump/restore into a table\nwith a different AM.\n\n\nOne way to solve this would be for psql/pg_dump to only define the table\naccess methods for tables that differ from the currently configured\ndefault_table_access_method. That'd mean that only a few tests that\nintentionally test AMs would display/dump the access method. On the\nother hand that seems like it's a bit too much magic.\n\nWhile I don't like that option, I haven't really come up with something\nbetter. Having alternative outputs for nearly every test file for\ne.g. zheap if/when we merge it, seems unmaintainable. 
It's less insane\nfor the pg_dump tests.\n\nAn alternative approach based on that would be to hack pg_regress to\nmagically ignore \"Access method: ...\" type differences, but that seems\nlike a bad idea to me.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20180703070645.wchpu5muyto5n647%40alap3.anarazel.de\n[2] https://postgr.es/m/CA+q6zcWMHSbLkKO7eq95t15m3R1X9FCcm0kT3dGS2gGSRO9kKw@mail.gmail.com\n[3] https://postgr.es/m/20181215193700.nov7bphxyge4qkez@alap3.anarazel.de\n\n",
"msg_date": "Mon, 7 Jan 2019 15:56:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Displaying and dumping of table access methods"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> Over in [1] we're discussing the development of the pluggable storage\n> patchset, which allows different ways of storing table data.\n> \n> One thing I'd like to discuss with a wider audience than the\n> implementation details is psql and pg_dump handling of table access\n> methods.\n> \n> Currently the patchset neither dumps nor displays table access\n> methods. That's clearly not right.\n\nAgreed.\n\n> The reason for that however is not that it's hard to dump/display\n> code-wise, but that to me the correct behaviour is not obvious.\n\nWhile it might be a lot of changes to the regression output results, I\ntend to feel that showing the access method is a very important aspect of\nthe relation and therefore should be in \\d output.\n\n> The reason to make table storage pluggable is after all that the table\n> access method can be changed, and part of developing new table access\n> methods is being able to run the regression tests.\n\nWe certainly want people to be able to run the regression tests, but it\nfeels like we will need more regression tests in the future as we wish\nto cover both the built-in heap AM and the new zheap AM, so we should\nreally have those both run independently. I don't think we'll be able\nto have just one set of regression tests that cover everything\ninteresting for both, sadly. Perhaps there's a way to have a set of\nregression tests which are run for both, and another set that's run for\neach, but building all of that logic is a fair bit of work and I'm not\nsure how much it's really saving us.\n\n> A patch at [2] adds display of a table's access method to \\d+ - but that\n> means that running the tests with a different default table access\n> method (e.g. 
using PGOPTIONS='-c default_table_access_method=...')\n> there'll be a significant number of test failures, even though the test\n> results did not meaningfully differ.\n\nYeah, I'm not really thrilled with this approach.\n\n> Similarly, if pg_dump starts to dump table access methods either\n> unconditionally, or for all non-heap AMs, the pg_dump tests fail due to\n> unimportant differences.\n\nIn reality, pg_dump already depends on quite a few defaults to be in\nplace, so I don't see a particular issue with adding this into that set.\nNew tests would need to have new pg_dump checks, of course, but that's\ngenerally the case as well.\n\n> A third issue, less important in my opinion, is that specifying the\n> table access method means that it's harder to dump/restore into a table\n> with a different AM.\n\nI understand this concern but I view it as an independent consideration.\nThere are a lot of transformations which one might wish for when dumping\nand restoring data, a number of which are handled through various\noptions (--no-owner, --no-acls, etc) and it seems like we could do\nsomething similar here.\n\n> One way to solve this would be for psql/pg_dump to only define the table\n> access methods for tables that differ from the currently configured\n> default_table_access_method. That'd mean that only a few tests that\n> intentionally test AMs would display/dump the access method. On the\n> other hand that seems like it's a bit too much magic.\n\nI'm not a fan of depending on the currently set\ndefault_table_access_method. When would that be set in the pg_restore\nprocess? Or in the SQL script that's created? Really though, that does\nstrike me as quite a bit of magic.\n\n> While I don't like that option, I haven't really come up with something\n> better. Having alternative outputs for nearly every test file for\n> e.g. zheap if/when we merge it, seems unmaintainable. 
It's less insane\n> for the pg_dump tests.\n\nI'm thinking less of alternative output files and more of additional\ntests for zheap cases...\n\n> An alternative approach based on that would be to hack pg_regress to\n> magically ignore \"Access method: ...\" type differences, but that seems\n> like a bad idea to me.\n\nI agree, that's not a good idea.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 7 Jan 2019 19:19:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-07 19:19:46 -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Andres Freund (andres@anarazel.de) wrote:\n> > Over in [1] we're discussing the development of the pluggable storage\n> > patchset, which allows different ways of storing table data.\n> > \n> > One thing I'd like to discuss with a wider audience than the\n> > implementation details is psql and pg_dump handling of table access\n> > methods.\n> > \n> > Currently the patchset neither dumps nor displays table access\n> > methods . That's clearly not right.\n> \n> Agreed.\n> \n> > The reason for that however is not that it's hard to dump/display\n> > code-wise, but that to me the correct behaviour is not obvious.\n> \n> While it might be a lot of changes to the regression output results, I\n> tend to feel that showng the access method is a very important aspect of\n> the relation and therefore should be in \\d output.\n\nI don't see how that'd be feasible. Imo it is/was absolutely crucial\nfor zheap development to be able to use the existing postgres tests.\n\n\n> > The reason to make table storage pluggable is after all that the table\n> > access method can be changed, and part of developing new table access\n> > methods is being able to run the regression tests.\n> \n> We certainly want people to be able to run the regression tests, but it\n> feels like we will need more regression tests in the future as we wish\n> to cover both the built-in heap AM and the new zheap AM, so we should\n> really have those both run independently. I don't think we'll be able\n> to have just one set of regression tests that cover everything\n> interesting for both, sadly. Perhaps there's a way to have a set of\n> regression tests which are run for both, and another set that's run for\n> each, but building all of that logic is a fair bit of work and I'm not\n> sure how much it's really saving us.\n\nI don't think there's any sort of contradiction here. 
I don't think it's\nfeasible to have tests for every feature duplicated for each\nsupported AM, we have enough difficulty maintaining our current\ntests. But that doesn't mean it's a problem to have individual test\n[files] run with an explicitly assigned AM - the test can just do a SET\ndefault_table_access_method = zheap; or explicitly say USING zheap.\n\n> > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > means that running the tests with a different default table access\n> > method (e.g. using PGOPTIONS='-c default_table_access_method=...')\n> > there'll be a significant number of test failures, even though the test\n> > results did not meaningfully differ.\n> \n> Yeah, I'm not really thrilled with this approach.\n> \n> > Similarly, if pg_dump starts to dump table access methods either\n> > unconditionally, or for all non-heap AMs, the pg_dump tests fail due to\n> > unimportant differences.\n> \n> In reality, pg_dump already depends on quite a few defaults to be in\n> place, so I don't see a particular issue with adding this into that set.\n> New tests would need to have new pg_dump checks, of course, but that's\n> generally the case as well.\n\nI am not sure what you mean here? Are you suggesting that there'd be a\nseparate set of pg_dump tests for zheap and every other possible future\nAM?\n\n\nTo me the approach you're suggesting is going to lead to an explosion of\nredundant tests, which are really hard to maintain, especially for\nout-of-tree AMs. Out-of-tree AMs with the setup I propose can just\ninstall the AM into the template database and set PGOPTIONS, and they\nhave pretty good coverage.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 7 Jan 2019 16:31:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
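The rerun mechanism described above (an unchanged test schedule, with the AM injected per session via PGOPTIONS) can be sketched outside the makefiles. This is a hypothetical illustration, not PostgreSQL build code; the "zheap" AM name and the driver invocation in the comment are assumptions:

```python
import os

def regression_env(table_am=None):
    """Build the environment for a hypothetical regression-test rerun.

    When table_am is set, every backend session started by the tests
    picks up the GUC through PGOPTIONS, so the unmodified test files
    all run under that table access method.
    """
    env = dict(os.environ)
    if table_am is not None:
        env["PGOPTIONS"] = "-c default_table_access_method=%s" % table_am
    return env

# A driver could then rerun the whole suite under another AM, e.g.:
#   subprocess.run(["make", "check-world"], env=regression_env("zheap"))
print(regression_env("zheap")["PGOPTIONS"])  # -c default_table_access_method=zheap
```

The point of the design is visible here: the tests themselves never change, only the environment they run in.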
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-01-07 19:19:46 -0500, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > Over in [1] we're discussing the development of the pluggable storage\n> > > patchset, which allows different ways of storing table data.\n> > > \n> > > One thing I'd like to discuss with a wider audience than the\n> > > implementation details is psql and pg_dump handling of table access\n> > > methods.\n> > > \n> > > Currently the patchset neither dumps nor displays table access\n> > > methods . That's clearly not right.\n> > \n> > Agreed.\n> > \n> > > The reason for that however is not that it's hard to dump/display\n> > > code-wise, but that to me the correct behaviour is not obvious.\n> > \n> > While it might be a lot of changes to the regression output results, I\n> > tend to feel that showng the access method is a very important aspect of\n> > the relation and therefore should be in \\d output.\n> \n> I don't see how that'd be feasible. Imo it is/was absolutely crucial\n> for zheap development to be able to use the existing postgres tests.\n\nI don't agree with the general assumption that \"we did this for\ndevelopment and therefore it should be done that way forever\".\n\nInstead, I would look at adding tests where there's a difference\nbetween the two, or possibly some difference, and make sure that there\nisn't, and make sure that the code paths are covered.\n\n> > > The reason to make table storage pluggable is after all that the table\n> > > access method can be changed, and part of developing new table access\n> > > methods is being able to run the regression tests.\n> > \n> > We certainly want people to be able to run the regression tests, but it\n> > feels like we will need more regression tests in the future as we wish\n> > to cover both the built-in heap AM and the new zheap AM, so we should\n> > really have those both run independently. 
I don't think we'll be able\n> > to have just one set of regression tests that cover everything\n> > interesting for both, sadly. Perhaps there's a way to have a set of\n> > regression tests which are run for both, and another set that's run for\n> > each, but building all of that logic is a fair bit of work and I'm not\n> > sure how much it's really saving us.\n> \n> I don't think there's any sort of contradiction here. I don't think it's\n> feasible to have tests tests for every feature duplicated for each\n> supported AM, we have enough difficulty maintaining our current\n> tests. But that doesn't mean it's a problem to have individual test\n> [files] run with an explicitly assigned AM - the test can just do a SET\n> default_table_access_method = zheap; or explicitly say USING zheap.\n\nI don't mean to suggest that there's a contradiction. I don't have any\nproblem with new tests being added which set the default AM to zheap, as\nlong as it's clear that such is happening for downstream tests.\n\n> > > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > > means that running the tests with a different default table access\n> > > method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> > > there'll be a significant number of test failures, even though the test\n> > > results did not meaningfully differ.\n> > \n> > Yeah, I'm not really thrilled with this approach.\n> > \n> > > Similarly, if pg_dump starts to dump table access methods either\n> > > unconditionally, or for all non-heap AMS, the pg_dump tests fail due to\n> > > unimportant differences.\n> > \n> > In reality, pg_dump already depends on quite a few defaults to be in\n> > place, so I don't see a particular issue with adding this into that set.\n> > New tests would need to have new pg_dump checks, of course, but that's\n> > generally the case as well.\n> \n> I am not sure what you mean here? 
Are you suggesting that there'd be a\n> separate set of pg_dump test for zheap and every other possible future\n> AM?\n\nI'm suggesting that pg_dump would have additional tests for zheap, in\naddition to the existing tests we already have. No more, no less,\nreally.\n\n> To me the approach you're suggesting is going to lead to an explosion of\n> redundant tests, which are really hard to maintain, especially for\n> out-of-tree AMs. Out of tree AMs with the setup I propose can just\n> install the AM into the template database and set PGOPTIONS, and they\n> have pretty good coverage.\n\nI'm frankly much less interested in out-of-tree AMs as we aren't going\nto have in-core regression tests for them anyway, because we can't as\nthey aren't in our tree and, ultimately, I don't find them to have\nanywhere near the same value that in-core AMs have.\n\nI don't have any problem with out-of-tree AMs hacking things up as they\nsee fit and then running whatever tests they want, but it is, and always\nwill be, a very different discussion and ultimately a different result\nwhen we're talking about what will be incorporated and supported as part\nof core.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 7 Jan 2019 20:30:13 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-07 20:30:13 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-01-07 19:19:46 -0500, Stephen Frost wrote:\n> > > * Andres Freund (andres@anarazel.de) wrote:\n> > > > Over in [1] we're discussing the development of the pluggable storage\n> > > > patchset, which allows different ways of storing table data.\n> > > >\n> > > > One thing I'd like to discuss with a wider audience than the\n> > > > implementation details is psql and pg_dump handling of table access\n> > > > methods.\n> > > >\n> > > > Currently the patchset neither dumps nor displays table access\n> > > > methods . That's clearly not right.\n> > >\n> > > Agreed.\n> > >\n> > > > The reason for that however is not that it's hard to dump/display\n> > > > code-wise, but that to me the correct behaviour is not obvious.\n> > >\n> > > While it might be a lot of changes to the regression output results, I\n> > > tend to feel that showng the access method is a very important aspect of\n> > > the relation and therefore should be in \\d output.\n> >\n> > I don't see how that'd be feasible. Imo it is/was absolutely crucial\n> > for zheap development to be able to use the existing postgres tests.\n>\n> I don't agree with the general assumption that \"we did this for\n> development and therefore it should be done that way forever\".\n>\n> Instead, I would look at adding tests where there's a difference\n> between the two, or possibly some difference, and make sure that there\n> isn't, and make sure that the code paths are covered.\n\nI think this approach makes no sense whatsoever. It's entirely possible\nto encounter bugs in table AM relevant code in places one would not\nthink so. But even if one were, foolishly, to exclude those, the pieces\nof code we know are highly affected by the way the AM works are a\nsignificant (at the very least 10-20% of tests) percentage of our\ntests. 
Duplicating them even just between heap and zheap would be a\nmajor technical debt. DML, ON CONFLICT, just about all isolationtester\ntests, etc all are absolutely crucial to test different AMs for\ncorrectness.\n\n\n> > To me the approach you're suggesting is going to lead to an explosion of\n> > redundant tests, which are really hard to maintain, especially for\n> > out-of-tree AMs. Out of tree AMs with the setup I propose can just\n> > install the AM into the template database and set PGOPTIONS, and they\n> > have pretty good coverage.\n>\n> I'm frankly much less interested in out-of-tree AMs as we aren't going\n> to have in-core regression tests for them anyway, because we can't as\n> they aren't in our tree and, ultimately, I don't find them to have\n> anywhere near the same value that in-core AMs have.\n\nI think you must be missing my point: Adding spurious differences due to\n\"Table Access Method: heap\" vs \"Table Access Method: blarg\" makes it\nunnecessarily hard to reuse the in-core tests for any additional AM, be\nit in-core or out of core. I fail to see what us not having explicit\ntests for such AMs in core has to do with my point.\n\nEven just having a psql variable that says HIDE_NONDEFAULT_TABLE_AMS or\nHIDE_TABLE_AMS that's set by default by pg_regress would be *vastly*\nbetter from a maintainability POV than including the AM in the output.\n\nAndres\n\n",
"msg_date": "Mon, 7 Jan 2019 17:43:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-01-07 20:30:13 -0500, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > On 2019-01-07 19:19:46 -0500, Stephen Frost wrote:\n> > > > * Andres Freund (andres@anarazel.de) wrote:\n> > > > > Over in [1] we're discussing the development of the pluggable storage\n> > > > > patchset, which allows different ways of storing table data.\n> > > > >\n> > > > > One thing I'd like to discuss with a wider audience than the\n> > > > > implementation details is psql and pg_dump handling of table access\n> > > > > methods.\n> > > > >\n> > > > > Currently the patchset neither dumps nor displays table access\n> > > > > methods . That's clearly not right.\n> > > >\n> > > > Agreed.\n> > > >\n> > > > > The reason for that however is not that it's hard to dump/display\n> > > > > code-wise, but that to me the correct behaviour is not obvious.\n> > > >\n> > > > While it might be a lot of changes to the regression output results, I\n> > > > tend to feel that showng the access method is a very important aspect of\n> > > > the relation and therefore should be in \\d output.\n> > >\n> > > I don't see how that'd be feasible. Imo it is/was absolutely crucial\n> > > for zheap development to be able to use the existing postgres tests.\n> >\n> > I don't agree with the general assumption that \"we did this for\n> > development and therefore it should be done that way forever\".\n> >\n> > Instead, I would look at adding tests where there's a difference\n> > between the two, or possibly some difference, and make sure that there\n> > isn't, and make sure that the code paths are covered.\n> \n> I think this approach makes no sense whatsover. It's entirely possible\n> to encounter bugs in table AM relevant code in places one would not\n> think so. 
But even if one were, foolishly, to exclude those, the pieces\n> of code we know are highly affected by the way the AM works are a a\n> significant (at the very least 10-20% of tests) percentage of our\n> tests. Duplicating them even just between heap and zheap would be a\n> major technical debt. DML, ON CONFLICT, just about all isolationtester\n> test, etc all are absolutely crucial to test different AMs for\n> correctness.\n\nI am generally on board with minimizing the amount of duplicate code\nthat we have, but we must run those tests independently, so it's really\na question of if we build a system where we can parameterize a set of\ntests to run and then run them for every AM and compare the output to\neither the generalized output or the per-AM output, or if we build on\nthe existing system and simply have an independent set of tests. It's\nnot clear to me, at the current point, which will be the lower level of\nongoing effort, but when it comes to the effort required today, it seems\npretty clear to me that whacking around the current test environment to\nrerun tests is a larger amount of effort. If that's the requirement,\nthen so be it and I'm on-board, but I'm also open to considering a\nlesser requirement for a completely new feature.\n\n> > > To me the approach you're suggesting is going to lead to an explosion of\n> > > redundant tests, which are really hard to maintain, especially for\n> > > out-of-tree AMs. 
Out of tree AMs with the setup I propose can just\n> > > install the AM into the template database and set PGOPTIONS, and they\n> > > have pretty good coverage.\n> >\n> > I'm frankly much less interested in out-of-tree AMs as we aren't going\n> > to have in-core regression tests for them anyway, because we can't as\n> > they aren't in our tree and, ultimately, I don't find them to have\n> > anywhere near the same value that in-core AMs have.\n> \n> I think you must be missing my point: Adding spurious differences due to\n> \"Table Access Method: heap\" vs \"Table Access Method: blarg\" makes it\n> unnecessarily hard to reuse the in-core tests for any additional AM, be\n> it in-core or out of core. I fail to see what us not having explicit\n> tests for such AMs in core has to do with my point.\n\nI don't think I'm missing your point. If you believe that we should be\nswayed by this argument into agreeing to change what we believe the\nuser-facing psql \\d output should be, then I am very hopeful that you're\ncompletely wrong. 
The psql \\d output should be driven by what will be\nbest for our users, not by what's best for external AMs or, really,\nanything else.\n\n> Even just having a psql variable that says HIDE_NONDEFAULT_TABLE_AMS or\n> HIDE_TABLE_AMS that's set by default by pg_regress would be *vastly*\n> better from a maintainability POV than including the AM in the output.\n\nI'm pretty sure I said in my last reply that I'm alright with psql and\npg_dump not outputting a result for the default value, provided the\ndefault is understood to always really be the default, but, again, what\nwe should be concerned about here is what the end user is dealing with\nand I'm not particularly inclined to support something different even if\nit's around a variable of some kind for external AMs, or external\n*whatevers*.\n\nI'm also a bit confused as to why we are spending a good bit of time\narguing about external AMs without any discussion or definition of what\nthey are or what their requirements are. If such things seriously\nexist, then let us talk about them and try to come up with solutions for\nthem; if they don't, then we can talk about them when they come up.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 7 Jan 2019 21:08:58 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-07 21:08:58 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-01-07 20:30:13 -0500, Stephen Frost wrote:\n> > > I don't agree with the general assumption that \"we did this for\n> > > development and therefore it should be done that way forever\".\n> > >\n> > > Instead, I would look at adding tests where there's a difference\n> > > between the two, or possibly some difference, and make sure that there\n> > > isn't, and make sure that the code paths are covered.\n> >\n> > I think this approach makes no sense whatsover. It's entirely possible\n> > to encounter bugs in table AM relevant code in places one would not\n> > think so. But even if one were, foolishly, to exclude those, the pieces\n> > of code we know are highly affected by the way the AM works are a a\n> > significant (at the very least 10-20% of tests) percentage of our\n> > tests. Duplicating them even just between heap and zheap would be a\n> > major technical debt. DML, ON CONFLICT, just about all isolationtester\n> > test, etc all are absolutely crucial to test different AMs for\n> > correctness.\n>\n> I am generally on board with minimizing the amount of duplicate code\n> that we have, but we must run those tests independently, so it's really\n> a question of if we build a system where we can parametize a set of\n> tests to run and then run them for every AM and compare the output to\n> either the generalized output or the per-AM output, or if we build on\n> the existing system and simply have an independent set of tests. It's\n> not clear to me, at the current point, which will be the lower level of\n> ongoing effort\n\nHuh? It's absolutely *trivial* from a buildsystem POV to run the tests\nagain with a different default AM. That's precisely why I'm talking\nabout this. Just setting PGOPTIONS='-c\ndefault_table_access_method=zheap' in the new makefile target (the ms\nrun scripts are similar) is sufficient. 
And we don't need to force\neveryone to constantly run tests with e.g. both heap and zheap, it's\nsufficient to do so on a few buildfarm machines, and whenever changing\nAM level code. Rerunning all the tests with a different AM is just\nsetting the same environment variable, but running check-world as the\ntarget.\n\nObviously that does not preclude a few tests that explicitly test\nfeatures specific to an AM. E.g. the zheap branch's tests have an\nexplicit zheap regression file and a few zheap specific isolationtester\nspec files that a) test zheap specific behaviour b) make sure that the\nmost basic zheap behaviour is tested even when not running the tests\nwith zheap as the default AM.\n\nAnd even if you were to successfully argue that it's sufficient during\nnormal development to only have a few zheap specific additional tests,\nwe'd certainly want to make it possible to occasionally explicitly run\nthe rest of the tests under zheap to see whether additional stuff has\nbeen broken - and that's much harder to sift through if there's a lot of\nspurious test failures due to \\d[+] outputting additional/differing\ndata.\n\n\n> ..., but when it comes to the effort required today, it seems pretty\n> clear to me that whacking around the current test environment to rerun\n> tests is a larger amount of effort.\n\nHow did you come to that conclusion? Adding a makefile and vcregress.pl\ntarget is pretty trivial.\n\n\n> > Even just having a psql variable that says HIDE_NONDEFAULT_TABLE_AMS or\n> > HIDE_TABLE_AMS that's set by default by pg_regress would be *vastly*\n> > better from a maintainability POV than including the AM in the output.\n>\n> I'm pretty sure I said in my last reply that I'm alright with psql and\n> pg_dump not outputting a result for the default value\n\nI don't see that anywhere in your replies. 
Are you referring to:\n\n> I don't have any problem with new tests being added which set the\n> default AM to zheap, as long as it's clear that such is happening for\n> downstream tests.\n\n? If so, that's not addressing the reason why I'm suggesting to have\nsomething like HIDE_TABLE_AMS. The point is that that'd allow us to\ncater the default psql output to humans, while still choosing not to\ndisplay the AM for regression tests (thereby allowing to run the tests).\n\n\n> provided the default is understood to always really be the default\n\nWhat do you mean by that? Are you arguing that it should be impossible\nin test scenarios to override default_table_access_method? Or that\npg_dump/psql should check for a hardcoded 'heap' AM (via a macro or\nwhatnot)?\n\n\n> I'm also a bit confused as to why we are spending a good bit of time\n> arguing about external AMs without any discussion or definition of what\n> they are or what their requirements are. If such things seriously\n> exist, then let us talk about them and try to come up with solutions for\n> them; if they don't, then we can talk about them when they come up.\n\nWe are working seriously hard on making AMs pluggable. Zheap is not yet,\nand won't be that soon, part of core. The concerns for an in-core zheap\n(which needs to maintain the test infrastructure during the remainder of\nits out-of-core development!) and out-of-core AMs are pretty aligned. I\ndon't get your confusion.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 7 Jan 2019 18:31:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
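The HIDE_TABLE_AMS variable floated in the message above is easy to model as a display rule. This is a hedged sketch of the proposal only, not psql's actual implementation; the variable name and footer wording are assumptions taken from the discussion:

```python
def d_plus_footer(table_am, hide_table_ams=False):
    """Sketch of the 'Access method' footer decision for psql's \\d+.

    hide_table_ams models a psql variable that pg_regress would set by
    default, suppressing the footer so regression output stays stable
    regardless of which default_table_access_method the run uses.
    """
    if hide_table_ams:
        return None  # test runs: output is AM-independent
    return "Access method: " + table_am  # interactive psql: always show it

# Interactive psql keeps the user informed:
assert d_plus_footer("zheap") == "Access method: zheap"
# Under pg_regress, heap and zheap runs diff cleanly against one output file:
assert d_plus_footer("zheap", hide_table_ams=True) is None
```

This captures the trade-off both sides are arguing: the footer is shown to humans by default, while the test harness opts out.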
{
"msg_contents": "On Mon, Jan 07, 2019 at 06:31:52PM -0800, Andres Freund wrote:\n> Huh? It's absolutely *trivial* from a buildsystem POV to run the tests\n> again with a different default AM. That's precisely why I'm talking\n> about this. Just setting PGOPTIONS='-c\n> default_table_access_method=zheap' in the new makefile target (the ms\n> run scripts are similar) is sufficient. And we don't need to force\n> everyone to constantly run tests with e.g. both heap and zheap, it's\n> sufficient to do so on a few buildfarm machines, and whenever changing\n> AM level code. Rerunning all the tests with a different AM is just\n> setting the same environment variable, but running check-world as the\n> target.\n\nAnother point is that having default_table_access_method facilitates\nthe restore of tables across AMs similarly to tablespaces, so CREATE\nTABLE dumps should not include the AM part.\n\n> And even if you were to successfully argue that it's sufficient during\n> normal development to only have a few zheap specific additional tests,\n> we'd certainly want to make it possible to occasionally explicitly run\n> the rest of the tests under zheap to see whether additional stuff has\n> been broken - and that's much harder to sift through if there's a lot of\n> spurious test failures due to \\d[+] outputting additional/differing\n> data.\n\nThe specific-heap tests could be included as an extra module in\nsrc/test/modules easily, so removing from the main tests what is not\ncompletely transparent may make sense. Most users use make-check to\ntest a patch quickly, so we could miss some bugs because of that\nduring review. Still, people seem to be better-educated lately in the\nfact that they need to do an actual check-world when checking a patch\nat full. 
So personally I can live with a split where it makes sense.\nBeing able to easily validate an AM implementation would be nice.\nIsolation tests may be another deal though for DMLs.\n\n> We are working seriously hard on making AMs pluggable. Zheap is not yet,\n> and won't be that soon, part of core. The concerns for an in-core zheap\n> (which needs to maintain the test infrastructure during the remainder of\n> its out-of-core development!) and out-of-core AMs are pretty aligned. I\n> don't get your confusion.\n\nI would imagine that a full-fledged AM should be able to maintain\ncompatibility with the full set of queries that heap is able to\nsupport, so if you can make the tests transparent enough so as they\ncan be run for any AMs without alternate input in the core tree, then\nthat's a goal worth it. Don't you have plan inconsistencies as well\nwith zheap?\n\nIn short, improving portability incrementally is good for the\nlong-term perspective. From that point of view adding the AM to \\d+\noutput may be a bad idea, as there are modules out of core which \nrely on psql meta-commands, and it would be nice to be able to test\nthose tests as well for those plugins with different types of AMs.\n--\nMichael",
"msg_date": "Tue, 8 Jan 2019 13:02:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 3:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jan 07, 2019 at 06:31:52PM -0800, Andres Freund wrote:\n> > Huh? It's absolutely *trivial* from a buildsystem POV to run the tests\n> > again with a different default AM. That's precisely why I'm talking\n> > about this. Just setting PGOPTIONS='-c\n> > default_table_access_method=zheap' in the new makefile target (the ms\n> > run scripts are similar) is sufficient. And we don't need to force\n> > everyone to constantly run tests with e.g. both heap and zheap, it's\n> > sufficient to do so on a few buildfarm machines, and whenever changing\n> > AM level code. Rerunning all the tests with a different AM is just\n> > setting the same environment variable, but running check-world as the\n> > target.\n>\n\nPGOPTIONS or any similar options are good for the AM development\nto test their AM's with all the existing PostgreSQL features.\n\n\n> Another point is that having default_table_access_method facilitates\n> the restore of tables across AMs similarly to tablespaces, so CREATE\n> TABLE dumps should not include the AM part.\n>\n\n+1 to the above approach to dump \"set default_table_access_method\".\n\n\n> > And even if you were to successfully argue that it's sufficient during\n> > normal development to only have a few zheap specific additional tests,\n> > we'd certainly want to make it possible to occasionally explicitly run\n> > the rest of the tests under zheap to see whether additional stuff has\n> > been broken - and that's much harder to sift through if there's a lot of\n> > spurious test failures due to \\d[+] outputting additional/differing\n> > data.\n>\n> The specific-heap tests could be included as an extra module in\n> src/test/modules easily, so removing from the main tests what is not\n> completely transparent may make sense. Most users use make-check to\n> test a patch quickly, so we could miss some bugs because of that\n> during review. 
Still, people seem to be better-educated lately in the\n> fact that they need to do an actual check-world when checking a patch\n> at full. So personally I can live with a split where it makes sense.\n> Being able to easily validate an AM implementation would be nice.\n> Isolation tests may be another deal though for DMLs.\n>\n> > We are working seriously hard on making AMs pluggable. Zheap is not yet,\n> > and won't be that soon, part of core. The concerns for an in-core zheap\n> > (which needs to maintain the test infrastructure during the remainder of\n> > its out-of-core development!) and out-of-core AMs are pretty aligned. I\n> > don't get your confusion.\n>\n> I would imagine that a full-fledged AM should be able to maintain\n> compatibility with the full set of queries that heap is able to\n> support, so if you can make the tests transparent enough so as they\n> can be run for any AMs without alternate input in the core tree, then\n> that's a goal worth it. Don't you have plan inconsistencies as well\n> with zheap?\n>\n> In short, improving portability incrementally is good for the\n> long-term prospective. From that point of view adding the AM to \\d+\n> output may be a bad idea, as there are modules out of core which\n> rely on psql meta-commands, and it would be nice to be able to test\n> those tests as well for those plugins with different types of AMs.\n>\n\nI also agree that adding AM details to \\d+ will lead to many unnecessary\nfailures. Currently \\d+ also doesn't show all the details of the relation\nlike owner and etc.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 8 Jan 2019 19:09:12 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On 08/01/2019 00:56, Andres Freund wrote:\n> A patch at [2] adds display of a table's access method to \\d+ - but that\n> means that running the tests with a different default table access\n> method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> there'll be a significant number of test failures, even though the test\n> results did not meaningfully differ.\n\nFor psql, a variable that hides the access method if it's the default.\n\n> Similarly, if pg_dump starts to dump table access methods either\n> unconditionally, or for all non-heap AMS, the pg_dump tests fail due to\n> unimportant differences.\n\nFor pg_dump, track and set the default_table_access_method setting\nthroughout the dump (similar to how default_with_oids was handled, I\nbelieve).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 8 Jan 2019 11:30:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
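Peter's pg_dump suggestion mirrors how default_with_oids was once handled: instead of attaching a USING clause to every CREATE TABLE, the dump tracks the current default and emits a SET default_table_access_method only when consecutive tables differ. A rough sketch under those assumptions — the statement shapes are illustrative, not pg_dump's actual output:

```python
def dump_tables(tables):
    """Emit CREATE TABLE statements while tracking the default table AM.

    tables is a list of (name, am) pairs. A SET is emitted only when the
    AM changes, so an all-heap dump gains just one SET at the top, and a
    restore under a different default AM still recreates each table with
    the AM it was dumped with.
    """
    out, current_am = [], None
    for name, am in tables:
        if am != current_am:
            out.append("SET default_table_access_method = %s;" % am)
            current_am = am
        out.append("CREATE TABLE %s (...);" % name)
    return out

lines = dump_tables([("a", "heap"), ("b", "heap"), ("c", "zheap")])
```

With the hypothetical input above, only two SET statements appear: one before table a (covering b as well) and one before c, which is exactly the "unimportant differences stay out of the diff" property the thread is after.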
{
"msg_contents": "Hi,\n\nOn 2019-01-08 13:02:00 +0900, Michael Paquier wrote:\n> On Mon, Jan 07, 2019 at 06:31:52PM -0800, Andres Freund wrote:\n> > Huh? It's absolutely *trivial* from a buildsystem POV to run the tests\n> > again with a different default AM. That's precisely why I'm talking\n> > about this. Just setting PGOPTIONS='-c\n> > default_table_access_method=zheap' in the new makefile target (the ms\n> > run scripts are similar) is sufficient. And we don't need to force\n> > everyone to constantly run tests with e.g. both heap and zheap, it's\n> > sufficient to do so on a few buildfarm machines, and whenever changing\n> > AM level code. Rerunning all the tests with a different AM is just\n> > setting the same environment variable, but running check-world as the\n> > target.\n> \n> Another point is that having default_table_access_method facilitates\n> the restore of tables across AMs similarly to tablespaces, so CREATE\n> TABLE dumps should not include the AM part.\n\nThat's what I suggested in the first message in this thread, or did I\nmiss a difference?\n\n\n> > And even if you were to successfully argue that it's sufficient during\n> > normal development to only have a few zheap specific additional tests,\n> > we'd certainly want to make it possible to occasionally explicitly run\n> > the rest of the tests under zheap to see whether additional stuff has\n> > been broken - and that's much harder to sift through if there's a lot of\n> > spurious test failures due to \\d[+] outputting additional/differing\n> > data.\n> \n> The specific-heap tests could be included as an extra module in\n> src/test/modules easily, so removing from the main tests what is not\n> completely transparent may make sense.\n\nWhy does it need to be an extra module, rather than just extra regression\nfiles / sections of files that just explicitly specify the AM? Seems a\nlot easier and faster.\n\n\n> > We are working seriously hard on making AMs pluggable. 
Zheap is not yet,\n> > and won't be that soon, part of core. The concerns for an in-core zheap\n> > (which needs to maintain the test infrastructure during the remainder of\n> > its out-of-core development!) and out-of-core AMs are pretty aligned. I\n> > don't get your confusion.\n> \n> I would imagine that a full-fledged AM should be able to maintain\n> compatibility with the full set of queries that heap is able to\n> support, so if you can make the tests transparent enough so as they\n> can be run for any AMs without alternate input in the core tree, then\n> that's a goal worth it. Don't you have plan inconsistencies as well\n> with zheap?\n\nIn the core tests there's a fair number of things that can be cured by\nadding an ORDER BY to the tests, which seems sensible. We've added a lot\nof those over the years anyway. There's additionally a number of plans\nthat change, which currently is handled by alternatives output files,\nbut I think we should move to reduce those differences, that's probably\nnot too hard.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 8 Jan 2019 09:29:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-08 11:30:56 +0100, Peter Eisentraut wrote:\n> On 08/01/2019 00:56, Andres Freund wrote:\n> > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > means that running the tests with a different default table access\n> > method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> > there'll be a significant number of test failures, even though the test\n> > results did not meaningfully differ.\n> \n> For psql, a variable that hides the access method if it's the default.\n\nYea, I think that seems the least contentious solution. Don't like it\ntoo much, but it seems better than the alternative. I wonder if we want\none for multiple regression related issues, or whether one specifically\nabout table AMs is more appropriate. I lean towards the latter.\n\n\n> > Similarly, if pg_dump starts to dump table access methods either\n> > unconditionally, or for all non-heap AMS, the pg_dump tests fail due to\n> > unimportant differences.\n> \n> For pg_dump, track and set the default_table_access_method setting\n> throughout the dump (similar to how default_with_oids was handled, I\n> believe).\n\nYea, that's similar to that, and I think that makes sense.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 8 Jan 2019 09:34:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Tue, Jan 08, 2019 at 09:29:49AM -0800, Andres Freund wrote:\n> On 2019-01-08 13:02:00 +0900, Michael Paquier wrote:\n>> The specific-heap tests could be included as an extra module in\n>> src/test/modules easily, so removing from the main tests what is not\n>> completely transparent may make sense.\n> \n> Why does it need to be an extra module, rather than just extra regression\n> files / sections of files that just explicitly specify the AM? Seems a\n> lot easier and faster.\n\nThe point would be to keep individual Makefiles simpler to maintain,\nand separating things can make it simpler. I cannot say for sure\nwithout seeing how things would change though based on what you are\nsuggesting, and that may finish by being a matter of taste.\n\n> In the core tests there's a fair number of things that can be cured by\n> adding an ORDER BY to the tests, which seems sensible. We've added a lot\n> of those over the years anyway.\n\nWhen working on Postgres-XC I cursed about the need to add many ORDER\nBY queries to ensure the ordering of tuples fetched from different\nnodes, and we actually had an option to enforce the default\ndistribution used by tables, so that would be really nice to close the\ngap.\n\n> There's additionally a number of plans\n> that change, which currently is handled by alternatives output files,\n> but I think we should move to reduce those differences, that's probably\n> not too hard.\n\nOkay, that's not surprising. I guess it depends on how many alternate\nfiles are needed and if it is possible to split things so as we avoid\nunnecessary output in alternate files. A lot of things you are\nproposing on this thread make sense in my experience.\n--\nMichael",
"msg_date": "Wed, 9 Jan 2019 07:48:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "> On Tue, Jan 8, 2019 at 6:34 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-01-08 11:30:56 +0100, Peter Eisentraut wrote:\n> > On 08/01/2019 00:56, Andres Freund wrote:\n> > > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > > means that running the tests with a different default table access\n> > > method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> > > there'll be a significant number of test failures, even though the test\n> > > results did not meaningfully differ.\n> >\n> > For psql, a variable that hides the access method if it's the default.\n>\n> Yea, I think that seems the least contentious solution. Don't like it\n> too much, but it seems better than the alternative. I wonder if we want\n> one for multiple regression related issues, or whether one specifically\n> about table AMs is more appropriate. I lean towards the latter.\n\nAre there any similar existing regression related issues? If no, then probably\nthe latter indeed makes more sense.\n\n> > > Similarly, if pg_dump starts to dump table access methods either\n> > > unconditionally, or for all non-heap AMS, the pg_dump tests fail due to\n> > > unimportant differences.\n> >\n> > For pg_dump, track and set the default_table_access_method setting\n> > throughout the dump (similar to how default_with_oids was handled, I\n> > believe).\n>\n> Yea, that's similar to that, and I think that makes sense.\n\nYes, sounds like a reasonable approach, I can proceed with it.\n\n",
"msg_date": "Wed, 9 Jan 2019 10:01:44 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 11:04 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-01-08 11:30:56 +0100, Peter Eisentraut wrote:\n> > On 08/01/2019 00:56, Andres Freund wrote:\n> > > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > > means that running the tests with a different default table access\n> > > method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> > > there'll be a significant number of test failures, even though the test\n> > > results did not meaningfully differ.\n> >\n> > For psql, a variable that hides the access method if it's the default.\n>\n> Yea, I think that seems the least contentious solution.\n>\n\n+1.\n\n> Don't like it\n> too much, but it seems better than the alternative. I wonder if we want\n> one for multiple regression related issues, or whether one specifically\n> about table AMs is more appropriate. I lean towards the latter.\n>\n\nI didn't understand what is the earlier part \"I wonder if we want one\nfor multiple regression related issues\". What do you mean by multiple\nregression related issues?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 9 Jan 2019 18:26:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-09 18:26:16 +0530, Amit Kapila wrote:\n> On Tue, Jan 8, 2019 at 11:04 PM Andres Freund <andres@anarazel.de> wrote:\n> +1.\n> \n> > Don't like it\n> > too much, but it seems better than the alternative. I wonder if we want\n> > one for multiple regression related issues, or whether one specifically\n> > about table AMs is more appropriate. I lean towards the latter.\n> >\n> \n> I didn't understand what is the earlier part \"I wonder if we want one\n> for multiple regression related issues\". What do you mean by multiple\n> regression related issues?\n\nOne flag that covers all things that make psql output less useful for\nregression test output, or one flag that just controls the table access\nmethod display.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 9 Jan 2019 09:23:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 10:53 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-01-09 18:26:16 +0530, Amit Kapila wrote:\n> > On Tue, Jan 8, 2019 at 11:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > +1.\n> >\n> > > Don't like it\n> > > too much, but it seems better than the alternative. I wonder if we want\n> > > one for multiple regression related issues, or whether one specifically\n> > > about table AMs is more appropriate. I lean towards the latter.\n> > >\n> >\n> > I didn't understand what is the earlier part \"I wonder if we want one\n> > for multiple regression related issues\". What do you mean by multiple\n> > regression related issues?\n>\n> One flag that covers all things that make psql output less useful for\n> regression test output, or one flag that just controls the table access\n> method display.\n>\n\n+1 for the later (one flag that just controls the table access method\ndisplay) as that looks clean.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Thu, 10 Jan 2019 09:28:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Wed, 9 Jan 2019 at 14:30, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Tue, Jan 8, 2019 at 6:34 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-01-08 11:30:56 +0100, Peter Eisentraut wrote:\n> > > On 08/01/2019 00:56, Andres Freund wrote:\n> > > > A patch at [2] adds display of a table's access method to \\d+ - but that\n> > > > means that running the tests with a different default table access\n> > > > method (e.g. using PGOPTIONS='-c default_table_access_method=...)\n> > > > there'll be a significant number of test failures, even though the test\n> > > > results did not meaningfully differ.\n> > >\n> > > For psql, a variable that hides the access method if it's the default.\n> >\n> > Yea, I think that seems the least contentious solution. Don't like it\n> > too much, but it seems better than the alternative. I wonder if we want\n> > one for multiple regression related issues, or whether one specifically\n> > about table AMs is more appropriate. I lean towards the latter.\n>\n> Are there any similar existing regression related issues? If no, then probably\n> the latter indeed makes more sense.\n>\n> > > > Similarly, if pg_dump starts to dump table access methods either\n> > > > unconditionally, or for all non-heap AMS, the pg_dump tests fail due to\n> > > > unimportant differences.\n> > >\n> > > For pg_dump, track and set the default_table_access_method setting\n> > > throughout the dump (similar to how default_with_oids was handled, I\n> > > believe).\n> >\n> > Yea, that's similar to that, and I think that makes sense.\n>\n> Yes, sounds like a reasonable approach, I can proceed with it.\n\nDmitry, I believe you have taken the pg_dump part only. If that's\nright, I can proceed with the psql part. Does that sound right ?\n\n>\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n",
"msg_date": "Fri, 11 Jan 2019 10:31:59 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On 10/01/2019 04:58, Amit Kapila wrote:\n>> One flag that covers all things that make psql output less useful for\n>> regression test output, or one flag that just controls the table access\n>> method display.\n>>\n> +1 for the later (one flag that just controls the table access method\n> display) as that looks clean.\n\nYeah, I'd prefer a specific flag.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 11 Jan 2019 10:06:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "> On Fri, Jan 11, 2019 at 6:02 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> > Yes, sounds like a reasonable approach, I can proceed with it.\n>\n> Dmitry, I believe you have taken the pg_dump part only. If that's\n> right, I can proceed with the psql part. Does that sound right ?\n\nWell, actually I've meant that I'm going to proceed with both, since I've\nposted both psql and pg_dump patches. But of course you're welcome to submit\nany new version or improvements you want.\n\n",
"msg_date": "Fri, 11 Jan 2019 10:18:35 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
},
{
"msg_contents": "On Fri, 11 Jan 2019 at 14:47, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Fri, Jan 11, 2019 at 6:02 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > > Yes, sounds like a reasonable approach, I can proceed with it.\n> >\n> > Dmitry, I believe you have taken the pg_dump part only. If that's\n> > right, I can proceed with the psql part. Does that sound right ?\n>\n> Well, actually I've meant that I'm going to proceed with both, since I've\n> posted both psql and pg_dump patches. But of course you're welcome to submit\n> any new version or improvements you want.\n\nOk, I will review the patches that you send, and we can work on\nimprovements if needed. Thanks.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n",
"msg_date": "Fri, 11 Jan 2019 16:53:47 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Displaying and dumping of table access methods"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nThe Server GUC parameters accepts values in both Octal and hexadecimal\nformats also.\n\npostgres=# set max_parallel_workers_per_gather='0x10';\npostgres=# show max_parallel_workers_per_gather;\n max_parallel_workers_per_gather\n---------------------------------\n 16\n\npostgres=# set max_parallel_workers_per_gather='010';\npostgres=# show max_parallel_workers_per_gather;\n max_parallel_workers_per_gather\n---------------------------------\n 8\n\nI can check that this behavior exists for quite some time, but I am not\nable to find any documentation related to it? Can some one point me to\nrelevant section where it is available? If not exists, is it fine to add it?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\n",
"msg_date": "Tue, 8 Jan 2019 17:08:41 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "GUC parameters accepts values in both octal and hexadecimal formats"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 05:08:41PM +1100, Haribabu Kommi wrote:\n> Hi Hackers,\n> \n> The Server GUC parameters accepts values in both Octal and hexadecimal formats\n> also.\n> \n> postgres=# set max_parallel_workers_per_gather='0x10';\n> postgres=# show max_parallel_workers_per_gather;\n>  max_parallel_workers_per_gather \n> ---------------------------------\n>  16\n> \n> postgres=# set max_parallel_workers_per_gather='010';\n> postgres=# show max_parallel_workers_per_gather;\n>  max_parallel_workers_per_gather \n> ---------------------------------\n>  8\n> \n> I can check that this behavior exists for quite some time, but I am not able to\n> find any documentation related to it? Can some one point me to relevant section\n> where it is available? If not exists, is it fine to add it?\n\nWell, we call strtol() in guc.c, and the strtol() manual page says:\n\n\tThe string may begin with an arbitrary amount of white space (as\n\tdetermined by isspace(3)) followed by a single optional '+' or '-' sign.\n\tIf base is zero or 16, the string may then include a \"0x\" prefix, and\n\tthe number will be read in base 16; otherwise, a zero base is taken as\n\t10 (decimal) unless the next character is '0', in which case it is taken\n\tas 8 (octal).\n\nso it looks like the behavior is just a side-effect of our strtol call. \nI am not sure it is worth documenting though.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com\n\n+ As you are, so once was I.  As I am, so you will be. +\n+                      Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 25 Jan 2019 18:34:17 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: GUC parameters accepts values in both octal and hexadecimal\n formats"
}
] |
[
{
"msg_contents": "I've been toying with OpenBSD lately, and soon noticed a seriously\nannoying problem for running Postgres on it: by default, its limits\nfor SysV semaphores are only SEMMNS=60, SEMMNI=10. Not only does that\ngreatly constrain the number of connections for a single installation,\nit means that our TAP tests fail because you can't start two postmasters\nconcurrently (cf [1]).\n\nRaising the annoyance factor considerably, AFAICT the only way to\nincrease these settings is to build your own custom kernel.\n\nSo I looked around for an alternative, and found out that modern\nOpenBSD releases support named POSIX semaphores (though not unnamed\nones, at least not shared unnamed ones). What's more, it appears that\nin this implementation, named semaphores don't eat open file descriptors\nas they do on macOS, removing our major objection to using them.\n\nI don't have any OpenBSD installation on hardware that I'd take very\nseriously for performance testing, but some light testing with\n\"pgbench -S\" suggests that a build with PREFERRED_SEMAPHORES=NAMED_POSIX\nhas just about the same performance as a build with SysV semaphores.\n\nThis all leads to the thought that maybe we should be selecting\nPREFERRED_SEMAPHORES=NAMED_POSIX on OpenBSD. At the very least,\nour docs ought to recommend it as a credible alternative for\npeople who don't want to get into building custom kernels.\n\nI've checked that this works back to OpenBSD 6.0, and scanning\ntheir man pages suggests that the feature appeared in 5.5.\n5.5 isn't that old (2014) so possibly people are still running\nolder versions, but we could easily put in version-specific\ndefault logic similar to what's in src/template/darwin.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/e6ecf989-9d5a-9dc5-12de-96985b6e5a5f%40mksoft.nu\n\n",
"msg_date": "Tue, 08 Jan 2019 01:14:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "OpenBSD versus semaphores"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 7:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've been toying with OpenBSD lately, and soon noticed a seriously\n> annoying problem for running Postgres on it: by default, its limits\n> for SysV semaphores are only SEMMNS=60, SEMMNI=10. Not only does that\n> greatly constrain the number of connections for a single installation,\n> it means that our TAP tests fail because you can't start two postmasters\n> concurrently (cf [1]).\n>\n> Raising the annoyance factor considerably, AFAICT the only way to\n> increase these settings is to build your own custom kernel.\n>\n> So I looked around for an alternative, and found out that modern\n> OpenBSD releases support named POSIX semaphores (though not unnamed\n> ones, at least not shared unnamed ones). What's more, it appears that\n> in this implementation, named semaphores don't eat open file descriptors\n> as they do on macOS, removing our major objection to using them.\n>\n> I don't have any OpenBSD installation on hardware that I'd take very\n> seriously for performance testing, but some light testing with\n> \"pgbench -S\" suggests that a build with PREFERRED_SEMAPHORES=NAMED_POSIX\n> has just about the same performance as a build with SysV semaphores.\n>\n> This all leads to the thought that maybe we should be selecting\n> PREFERRED_SEMAPHORES=NAMED_POSIX on OpenBSD. At the very least,\n> our docs ought to recommend it as a credible alternative for\n> people who don't want to get into building custom kernels.\n>\n> I've checked that this works back to OpenBSD 6.0, and scanning\n> their man pages suggests that the feature appeared in 5.5.\n> 5.5 isn't that old (2014) so possibly people are still running\n> older versions, but we could easily put in version-specific\n> default logic similar to what's in src/template/darwin.\n>\n> Thoughts?\n\nNo OpenBSD here, but I was curious enough to peek at their\nimplementation. Like others, they create a tiny file under /tmp for\neach one, mmap() and close the fd straight away. Apparently don't\nsupport shared sem_init() yet (EPERM). So your plan seems good to me.\nCC'ing Pierre-Emmanuel (OpenBSD PostgreSQL port maintainer) in case he\nis interested.\n\nWild speculation: I wouldn't be surprised if POSIX named semas\nperform better than SysV semas on a large enough system, since they'll\nlive on different pages. At a glance, their sys_semget apparently\nallocates arrays of struct sem without padding and I think they\nprobably get about 4 to a cacheline; see our experience with an 8\nsocket box leading to commit 2d306759 where we added our own padding.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 8 Jan 2019 20:05:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: OpenBSD versus semaphores"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> On Tue, Jan 8, 2019 at 7:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I looked around for an alternative, and found out that modern\n>> OpenBSD releases support named POSIX semaphores (though not unnamed\n>> ones, at least not shared unnamed ones). What's more, it appears that\n>> in this implementation, named semaphores don't eat open file descriptors\n>> as they do on macOS, removing our major objection to using them.\n\n> No OpenBSD here, but I was curious enough to peek at their\n> implementation. Like others, they create a tiny file under /tmp for\n> each one, mmap() and close the fd straight away.\n\nOh, yeah, I can see a bunch of tiny mappings with procmap. I wonder\nwhether that scales any better than an open FD per semaphore, when\nit comes to forking a bunch of child processes that will inherit\nall those mappings or FDs. I've not tried to benchmark child process\nlaunch as such --- as I said, I'm not running this on hardware that\nwould support serious benchmarking.\n\nBTW, I just finished finding out that recent NetBSD (8.99.25) has\nworking code paths for *both* named and unnamed POSIX semaphores.\nHowever, it appears that both code paths involve an open FD per\nsemaphore, so it's likely not something we want to recommend using.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 08 Jan 2019 02:40:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OpenBSD versus semaphores"
},
{
"msg_contents": "\nOn 2019-01-08 07:14, Tom Lane wrote:\n> I've been toying with OpenBSD lately, and soon noticed a seriously\n> annoying problem for running Postgres on it: by default, its limits\n> for SysV semaphores are only SEMMNS=60, SEMMNI=10. Not only does that\n> greatly constrain the number of connections for a single installation,\n> it means that our TAP tests fail because you can't start two postmasters\n> concurrently (cf [1]).\n> \n> Raising the annoyance factor considerably, AFAICT the only way to\n> increase these settings is to build your own custom kernel.\n\nYou don't need to build your custom kernel to change those settings.\n\nJust add:\n\nkern.seminfo.semmni=20\n\nto /etc/sysctl.conf and reboot\n\n/Mikael\n\n",
"msg_date": "Tue, 8 Jan 2019 08:46:35 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: OpenBSD versus semaphores"
},
{
"msg_contents": "On Tue, Jan 8, 2019 at 12:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I've been toying with OpenBSD lately, and soon noticed a seriously\n> annoying problem for running Postgres on it: by default, its limits\n> for SysV semaphores are only SEMMNS=60, SEMMNI=10. Not only does that\n> greatly constrain the number of connections for a single installation,\n> it means that our TAP tests fail because you can't start two postmasters\n> concurrently (cf [1]).\n>\n> Raising the annoyance factor considerably, AFAICT the only way to\n> increase these settings is to build your own custom kernel.\n>\n\nThis is not accurate, you can change this values via sysctl(1), extracted\nfrom OpenBSD postgresql port:\n\nTuning for busy servers\n\n=======================\nThe default sizes in the GENERIC kernel for SysV semaphores are only\njust large enough for a database with the default configuration\n(max_connections 40) if no other running processes use semaphores.\nIn other cases you will need to increase the limits. Adding the\nfollowing in /etc/sysctl.conf will be reasonable for many systems:\n\n\tkern.seminfo.semmni=60\n\tkern.seminfo.semmns=1024\n\nTo serve a large number of connections (>250), you may need higher\nvalues for the above.\n\n\n\nhttp://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/ports/databases/postgresql/pkg/README-server?rev=1.25&content-type=text/plain\n\n\n> So I looked around for an alternative, and found out that modern\n> OpenBSD releases support named POSIX semaphores (though not unnamed\n> ones, at least not shared unnamed ones). 
What's more, it appears that\n> in this implementation, named semaphores don't eat open file descriptors\n> as they do on macOS, removing our major objection to using them.\n>\n> I don't have any OpenBSD installation on hardware that I'd take very\n> seriously for performance testing, but some light testing with\n> \"pgbench -S\" suggests that a build with PREFERRED_SEMAPHORES=NAMED_POSIX\n> has just about the same performance as a build with SysV semaphores.\n>\n> This all leads to the thought that maybe we should be selecting\n> PREFERRED_SEMAPHORES=NAMED_POSIX on OpenBSD. At the very least,\n> our docs ought to recommend it as a credible alternative for\n> people who don't want to get into building custom kernels.\n>\n> I've checked that this works back to OpenBSD 6.0, and scanning\n> their man pages suggests that the feature appeared in 5.5.\n> 5.5 isn't that old (2014) so possibly people are still running\n> older versions, but we could easily put in version-specific\n> default logic similar to what's in src/template/darwin.\n>\n> Thoughts?\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/e6ecf989-9d5a-9dc5-12de-96985b6e5a5f%40mksoft.nu\n>\n>\n\n",
"msg_date": "Tue, 8 Jan 2019 01:47:07 -0600",
"msg_from": "Abel Abraham Camarillo Ojeda <acamari@verlet.org>",
"msg_from_op": false,
"msg_subject": "Re: OpenBSD versus semaphores"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> On 2019-01-08 07:14, Tom Lane wrote:\n>> Raising the annoyance factor considerably, AFAICT the only way to\n>> increase these settings is to build your own custom kernel.\n\n> You don't need to build your custom kernel to change those settings.\n> Just add:\n> kern.seminfo.semmni=20\n> to /etc/sysctl.conf and reboot\n\nHm, I wonder when that came in? Our documentation doesn't know about it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 08 Jan 2019 09:25:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: OpenBSD versus semaphores"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 9:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> > On Tue, Jan 8, 2019 at 7:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> So I looked around for an alternative, and found out that modern\n> >> OpenBSD releases support named POSIX semaphores (though not unnamed\n> >> ones, at least not shared unnamed ones). What's more, it appears that\n> >> in this implementation, named semaphores don't eat open file descriptors\n> >> as they do on macOS, removing our major objection to using them.\n>\n> > No OpenBSD here, but I was curious enough to peek at their\n> > implementation. Like others, they create a tiny file under /tmp for\n> > each one, mmap() and close the fd straight away.\n>\n> Oh, yeah, I can see a bunch of tiny mappings with procmap. I wonder\n> whether that scales any better than an open FD per semaphore, when\n> it comes to forking a bunch of child processes that will inherit\n> all those mappings or FDs. I've not tried to benchmark child process\n> launch as such --- as I said, I'm not running this on hardware that\n> would support serious benchmarking.\n\nI also have no ability to benchmark on a real OpenBSD system, but once\na year or so when I spin up a little OpenBSD VM to test some patch or\nother, it annoys me that our tests fail out of the box and then I have\nto look up how to change the sysctls, so here's a patch. I also\nchecked the release notes to confirm that 5.5 is the right release to\nlook for[1]; by now that's EOL and probably not even worth bothering\nwith the test but doesn't cost much to be cautious about that. 4.x is\nsurely too old to waste electrons on. I guess the question for\nOpenBSD experts is whether having (say) a thousand tiny mappings is\nbad. On the plus side, we know from other Oses that having semas\nspread out is good for reducing false sharing on large systems.\n\n[1] https://www.openbsd.org/55.html",
"msg_date": "Fri, 2 Apr 2021 10:15:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: OpenBSD versus semaphores"
}
] |
[
{
"msg_contents": "Respected Concern,\n\nI want to join GCI as a mentor for the year 2019, please guide me about the\nprocedure,\nthanks in anticipation.\n\n--\nRegards\nPadam Chopra\nGoogle Grand Prize Winner\nMicrosoft Imagine Cup India winner\nTedX Event Organizer\n\nContact:\n\nEmail:padamchopra1337@gmail.com\n\nMore details about me and my work:\n\nGitHub Profile: https://github.com/padamchopra\n<https://github.com/tahirramzan>\nWebsite: http://padamchopra.me/ <https://padamchopra.me/>\n\nRespected Concern,I want to join GCI as a mentor for the year 2019, please guide me about the procedure,thanks in anticipation.--RegardsPadam ChopraGoogle Grand Prize WinnerMicrosoft Imagine Cup India winner TedX Event OrganizerContact:Email:padamchopra1337@gmail.comMore details about me and my work:GitHub Profile: https://github.com/padamchopraWebsite: http://padamchopra.me/",
"msg_date": "Tue, 8 Jan 2019 13:29:00 +0530",
"msg_from": "Padam Chopra <padamchopra1337@gmail.com>",
"msg_from_op": true,
"msg_subject": "Mentoring for GCI-19"
},
{
"msg_contents": "Greetings,\n\n* Padam Chopra (padamchopra1337@gmail.com) wrote:\n> I want to join GCI as a mentor for the year 2019, please guide me about the\n> procedure,\n> thanks in anticipation.\n\nThanks for your interest but we aren't likely to even start thinking\nabout that until we're close to when GCI 2019 starts.\n\nIf you're interesting in mentoring, then I'd suggest you participate on\nour existing mailing lists and become familiar with the community and\nhelp out with all of the questions that are constantly being asked on\nlists like -general, -admin, etc. Doing so would make it much more\nlikely that we'd consider you for a mentor for GCI 2019.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 8 Jan 2019 10:06:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Mentoring for GCI-19"
}
] |
[
{
"msg_contents": "Hi!\n\nI am new to this community. I have submitted few patches to this\ncommitfest and I have read that it is expected that I also review some\nother patches. But I am not sure about the process here. Should I wait\nfor some other patches to be assigned to me to review? Or is there\nsome other process? Also, how is the level at which I should review it\ndetermined? I am not really too sure in my skills and understanding of\nPostgreSQL codebase to feel confident that I can review well, but I am\nwilling to try. I have read [1] and [2].\n\n[1] https://wiki.postgresql.org/wiki/CommitFest\n[2] https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n",
"msg_date": "Tue, 8 Jan 2019 00:48:21 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "\nHello Mitar,\n\n> I am new to this community. I have submitted few patches to this\n> commitfest and I have read that it is expected that I also review some\n> other patches. But I am not sure about the process here. Should I wait\n> for some other patches to be assigned to me to review? Or is there\n> some other process?\n\nThe process is that *you* choose the patches to review and register as \nsuch for the patch on the CF app.\n\n> Also, how is the level at which I should review it\n> determined?\n\nPatches as complex as the one you submitted?\n\nBased on your area of expertise?\n\n> I am not really too sure in my skills and understanding of\n> PostgreSQL codebase to feel confident that I can review well, but I am\n> willing to try. I have read [1] and [2].\n\nThere are doc patches, client-side code patches, compilation \ninfrastructure patches...\n\n-- \nFabien.\n\n",
"msg_date": "Tue, 8 Jan 2019 10:14:10 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "Hi everyone!\n\n> 8 янв. 2019 г., в 14:14, Fabien COELHO <coelho@cri.ensmp.fr> написал(а):\n> \n> The process is that *you* choose the patches to review and register as such for the patch on the CF app.\n\nBy the way, is it ok to negotiate review exchange?\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Tue, 8 Jan 2019 17:20:29 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "On Wed, 9 Jan 2019 at 01:20, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> By the way, is it ok to negotiate review exchange?\n\nI think it happens fairly often. There's no need for the list to know\nanything about it when it does.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 9 Jan 2019 02:44:07 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "On Tue, Jan 08, 2019 at 10:14:10AM +0100, Fabien COELHO wrote:\n>> Also, how is the level at which I should review it\n>> determined?\n> \n> Patches as complex as the one you submitted?\n\nThe usual expectation is to review one patch of equal difficulty for\neach patch submitted. The way to measure a patch difficulty is not\nbased on actual facts but mostly on how a patch feels complicated.\nWhen it comes to reviews, the more you can look at the better of\ncourse, still doing a correct review takes time, and that can be\nsurprising often even for so-said simple patches.\n\n> Based on your area of expertise?\n\nTaking new challenges on a regular basis is not bad either :)\n--\nMichael",
"msg_date": "Wed, 9 Jan 2019 10:11:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "Hi!\n\nFew more questions.\n\nI see that some patches were sent to bugs mailing list, not hackers\n[1]. I thought that all patches have to be send to the hackers mailing\nlist, as per this wiki page [2]. Moreover, because they were send to\nthe bugs mailing list, I am unsure how can it be discussed/reviewed on\nhackers mailing list while keeping the thread, as per this wiki page\n[3]. Furthermore, I thought that each commitfest entry should be about\none patch, but [1] seems to provide 3 patches, with multiple versions,\nwhich makes it a bit unclear to understand which one and how should\nthey apply.\n\n[1] https://commitfest.postgresql.org/21/1924/\n[2] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n[3] https://wiki.postgresql.org/wiki/CommitFest\n\n\nMitar\n\n",
"msg_date": "Tue, 8 Jan 2019 22:56:32 -0800",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
},
{
"msg_contents": "On Tue, Jan 08, 2019 at 10:56:32PM -0800, Mitar wrote:\n> I see that some patches were sent to bugs mailing list, not hackers\n> [1]. I thought that all patches have to be send to the hackers mailing\n> list, as per this wiki page [2]. Moreover, because they were send to\n> the bugs mailing list, I am unsure how can it be discussed/reviewed on\n> hackers mailing list while keeping the thread, as per this wiki page\n> [3]. Furthermore, I thought that each commitfest entry should be about\n> one patch, but [1] seems to provide 3 patches, with multiple versions,\n> which makes it a bit unclear to understand which one and how should\n> they apply.\n\nThat's not a strict process per se. Sometimes when discussing we\nfinish by splitting a patch into multiple ones where it makes sense,\nand the factor which mainly matters is to keep a commit history clean.\nKeeping that point in mind we may have one commit fest entry dealing\nwith one of more patches depending on how the author feels things\nshould be handled. My take is that additional CF entries make sense\nwhen working on patches which require a different audience and a\ndifferent kind of reviews, while refactoring and preparatory work may\nbe included with a main patch as long as the patch set remains in\nroughly the same area of expertise and keeps close to the concept of\nthe thread dealing with a new feature.\n\nBugs can be added as CF entries, posting patches on a bug ticket is\nalso fine. If a bug fix needs more input, moving it to -hackers can\nalso make sense by changing on the way its subject. This depends on\nthe circumstances and that's a case-by-case handling usually.\n\n> [1] https://commitfest.postgresql.org/21/1924/\n\nThis item is fun to work with, though all of them apply to unaccent\nand are not that invasive, so a single entry looks fine IMO.\n--\nMichael",
"msg_date": "Wed, 9 Jan 2019 16:38:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: commitfest: When are you assigned patches to review?"
}
] |
[
{
"msg_contents": "Respected Concern,\n\nI want to join GCI as a mentor for the year 2019, please guide me about the\nprocedure,\nthanks in anticipation.\n\n--\nRegards\nPadam Chopra\nGoogle Grand Prize Winner\nMicrosoft Imagine Cup India winner\nTedX Event Organizer\n\nContact:\n\nEmail:padamchopra1337(at)gmail(dot)com\n\nMore details about me and my work:\n\nGitHub Profile: https://github.com/padamchopra\nWebsite: http://padamchopra.me/\n\nRespected Concern,\nI want to join GCI as a mentor for the year 2019, please guide me about theprocedure,thanks in anticipation.\n--RegardsPadam ChopraGoogle Grand Prize WinnerMicrosoft Imagine Cup India winnerTedX Event Organizer\nContact:\nEmail:padamchopra1337(at)gmail(dot)com\nMore details about me and my work:\nGitHub Profile: https://github.com/padamchopraWebsite: http://padamchopra.me/",
"msg_date": "Tue, 8 Jan 2019 17:04:52 +0530",
"msg_from": "Padam Chopra <padamchopra1337@gmail.com>",
"msg_from_op": true,
"msg_subject": "GCI-2019 Mentoring"
}
] |
[
{
"msg_contents": "I am new to the autovacuum. After reading its code, I am still confusing\nwhat is the autovac_balance_cost() and how the cost logic works to make the\nautovacuum workers consume the I/O equially. Can anyone share some light on\nit?\n\nThanks\n\nI am new to the autovacuum. After reading its code, I am still confusing what is the autovac_balance_cost() and how the cost logic works to make the autovacuum workers consume the I/O equially. Can anyone share some light on it?Thanks",
"msg_date": "Tue, 8 Jan 2019 13:29:49 -0800",
"msg_from": "CNG L <congnanluo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question about autovacuum function autovac_balance_cost()"
}
] |
[
{
"msg_contents": "eb7ed3f3063401496e4aa4bd68fa33f0be31a72f Allow UNIQUE indexes on partitioned tables\n8224de4f42ccf98e08db07b43d52fed72f962ebb Indexes with INCLUDE columns and their support in B-tree\n\npostgres=# CREATE TABLE t(i int,j int) PARTITION BY LIST (i);\npostgres=# CREATE TABLE t1 PARTITION OF t FOR VALUES IN (1);\npostgres=# CREATE TABLE t2 PARTITION OF t FOR VALUES IN (2);\n\n-- Correctly errors\npostgres=# CREATE UNIQUE INDEX ON t(j);\nERROR: insufficient columns in UNIQUE constraint definition\nDETAIL: UNIQUE constraint on table \"t\" lacks column \"i\" which is part of the partition key.\n\n-- Fails to error\npostgres=# CREATE UNIQUE INDEX ON t(j) INCLUDE(i);\n\n-- Fail to enforce uniqueness across partitions due to failure to enforce inclusion of partition key in index KEY\npostgres=# INSERT INTO t VALUES(1,1);\npostgres=# INSERT INTO t VALUES(2,1); \n\npostgres=# SELECT * FROM t;\n i | j \n---+---\n 1 | 1\n 2 | 1\n(2 rows)\n\nI found this thread appears to have been close to discovering the issue ~9\nmonths ago.\nhttps://www.postgresql.org/message-id/flat/CAJGNTeO%3DBguEyG8wxMpU_Vgvg3nGGzy71zUQ0RpzEn_mb0bSWA%40mail.gmail.com\n\nJustin\n\n",
"msg_date": "Wed, 9 Jan 2019 00:51:09 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "unique, partitioned index fails to distinguish index key from\n INCLUDEd columns"
},
{
"msg_contents": "On 2019-Jan-09, Justin Pryzby wrote:\n\n> -- Fails to error\n> postgres=# CREATE UNIQUE INDEX ON t(j) INCLUDE(i);\n> \n> -- Fail to enforce uniqueness across partitions due to failure to enforce inclusion of partition key in index KEY\n> postgres=# INSERT INTO t VALUES(1,1);\n> postgres=# INSERT INTO t VALUES(2,1); \n\nDoh. Fix pushed. Commit 8224de4f42cc should have changed one\nappearance of ii_NumIndexAttrs to ii_NumIndexKeyAttrs, but because of\nthe nature of concurrent development, nobody noticed.\n\nThanks for reporting.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 14 Jan 2019 19:31:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: unique, partitioned index fails to distinguish index key from\n INCLUDEd columns"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 07:31:07PM -0300, Alvaro Herrera wrote:\n> On 2019-Jan-09, Justin Pryzby wrote:\n> \n> > -- Fails to error\n> > postgres=# CREATE UNIQUE INDEX ON t(j) INCLUDE(i);\n> > \n> > -- Fail to enforce uniqueness across partitions due to failure to enforce inclusion of partition key in index KEY\n> > postgres=# INSERT INTO t VALUES(1,1);\n> > postgres=# INSERT INTO t VALUES(2,1); \n> \n> Doh. Fix pushed. Commit 8224de4f42cc should have changed one\n> appearance of ii_NumIndexAttrs to ii_NumIndexKeyAttrs, but because of\n> the nature of concurrent development, nobody noticed.\n\nI figured as much - I thought to test this while trying to fall asleep,\nwithout knowing they were developed in parallel.\n\nShould backpatch to v11 ?\n0ad41cf537ea5f076273fcffa4c83a184bd9910f\n\nThanks,\nJustin\n\n",
"msg_date": "Mon, 14 Jan 2019 20:30:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: unique, partitioned index fails to distinguish index key from\n INCLUDEd columns"
},
{
"msg_contents": "On 2019-Jan-14, Justin Pryzby wrote:\n\n> On Mon, Jan 14, 2019 at 07:31:07PM -0300, Alvaro Herrera wrote:\n\n> > Doh. Fix pushed. Commit 8224de4f42cc should have changed one\n> > appearance of ii_NumIndexAttrs to ii_NumIndexKeyAttrs, but because of\n> > the nature of concurrent development, nobody noticed.\n> \n> I figured as much - I thought to test this while trying to fall asleep,\n> without knowing they were developed in parallel.\n\n:-)\n\n> Should backpatch to v11 ?\n> 0ad41cf537ea5f076273fcffa4c83a184bd9910f\n\nYep, already done (src/tools/git_changelog in master):\n\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nBranch: master [0ad41cf53] 2019-01-14 19:28:10 -0300\nBranch: REL_11_STABLE [74aa7e046] 2019-01-14 19:25:19 -0300\n\n Fix unique INCLUDE indexes on partitioned tables\n \n We were considering the INCLUDE columns as part of the key, allowing\n unicity-violating rows to be inserted in different partitions.\n \n Concurrent development conflict in eb7ed3f30634 and 8224de4f42cc.\n \n Reported-by: Justin Pryzby\n Discussion: https://postgr.es/m/20190109065109.GA4285@telsasoft.com\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 15 Jan 2019 01:21:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: unique, partitioned index fails to distinguish index key from\n INCLUDEd columns"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are few minor fix in md.c comments\nsrc/backend/storage/smgr/md.c\n\n1. @L174 - removed the unnecessary word \"is\".\n- […] Note that this is breaks mdnblocks() and related functionality [...]\n+ […] Note that this breaks mdnblocks() and related functionality [...]\n\n2. @L885 - grammar fix\n- We used to pass O_CREAT here, but that's has the disadvantage that it might [...]\n+ We used to pass O_CREAT here, but that has the disadvantage that it might [...]\n\nRegards,\nKirk J.",
"msg_date": "Wed, 9 Jan 2019 08:30:53 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "some minor comment fix in md.c"
},
{
"msg_contents": "On Wed, Jan 09, 2019 at 08:30:53AM +0000, Jamison, Kirk wrote:\n> Here are few minor fix in md.c comments\n> src/backend/storage/smgr/md.c\n> \n> 1. @L174 - removed the unnecessary word \"is\".\n> - […] Note that this is breaks mdnblocks() and related functionality [...]\n> + […] Note that this breaks mdnblocks() and related functionality [...]\n> \n> 2. @L885 - grammar fix\n> - We used to pass O_CREAT here, but that's has the disadvantage that it might [...]\n> + We used to pass O_CREAT here, but that has the disadvantage that it might [...]\n\nThanks, that looks good to me so pushed.\n--\nMichael",
"msg_date": "Thu, 10 Jan 2019 09:39:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: some minor comment fix in md.c"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed a couple of problems with foreign keys on partitioned tables.\n\n1. Foreign keys of partitions stop working correctly after being detached\nfrom the parent table\n\ncreate table pk (a int primary key);\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nalter table p add foreign key (a) references pk (a);\n\n-- these things work correctly\ninsert into p values (1);\nERROR: insert or update on table \"p11\" violates foreign key constraint\n\"p_a_fkey\"\nDETAIL: Key (a)=(1) is not present in table \"pk\".\ninsert into pk values (1);\ninsert into p values (1);\ndelete from pk where a = 1;\nERROR: update or delete on table \"pk\" violates foreign key constraint\n\"p_a_fkey\" on table \"p\"\nDETAIL: Key (a)=(1) is still referenced from table \"p\".\n\n-- detach p1, which preserves the foreign key key\nalter table p detach partition p1;\ncreate table p12 partition of p1 for values in (2);\n\n-- this part of the foreign key on p1 still works\ninsert into p1 values (2);\nERROR: insert or update on table \"p12\" violates foreign key constraint\n\"p_a_fkey\"\nDETAIL: Key (a)=(2) is not present in table \"pk\".\n\n-- but this seems wrong\ndelete from pk where a = 1;\nDELETE 1\n\n-- because\nselect * from p1;\n a\n───\n 1\n(1 row)\n\nThis happens because the action triggers defined on the PK relation (pk)\nrefers to p as the referencing relation. On detaching p1 from p, p1's\ndata is no longer accessible to that trigger. To fix this problem, we\nneed create action triggers on PK relation that refer to p1 when it's\ndetached (unless such triggers already exist which might be true in some\ncases). Attached patch 0001 shows this approach.\n\n2. 
Foreign keys of a partition cannot be dropped in some cases after\ndetaching it from the parent.\n\ncreate table p (a int references pk) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nalter table p detach partition p1;\n\n-- p1's foreign key is no longer inherited, so should be able to drop it\nalter table p1 drop constraint p_a_fkey ;\nERROR: constraint \"p_a_fkey\" of relation \"p11\" does not exist\n\nThis happens because by the time ATExecDropConstraint tries to recursively\ndrop the p11's inherited foreign key constraint (which is what normally\nhappens for inherited constraints), the latter has already been dropped by\ndependency management. I think the foreign key inheritance related code\ndoesn't need to add dependencies for something that inheritance recursion\ncan take of and I can't think of any other reason to have such\ndependencies around. I thought maybe they're needed for pg_dump to work\ncorrectly, but apparently not so.\n\nInterestingly, the above problem doesn't occur if the constraint is added\nto partitions by inheritance recursion.\n\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nalter table p add foreign key (a) references pk (a);\nalter table p detach partition p1;\nalter table p1 drop constraint p_a_fkey ;\nALTER TABLE\n\nLooking into it, that happens to work *accidentally*.\n\nATExecDropInherit() doesn't try to recurse, which prevents the error in\nthis case, because it finds that the constraint on p1 is marked NO INHERIT\n(non-inheritable), which is incorrect. The value of p1's constraint's\nconnoinherit (in fact, other inheritance related properties too) is\nincorrect, because ATAddForeignKeyConstraint doesn't bother to set them\ncorrectly. 
This is what the inheritance properties of various copies of\n'p_a_fkey' looks like in the catalog in this case:\n\n-- case 1: foreign key added to partitions recursively\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nalter table p add foreign key (a) references pk (a);\nselect conname, conrelid::regclass, conislocal, coninhcount, connoinherit\nfrom pg_constraint where conname like 'p%fkey%';\n conname │ conrelid │ conislocal │ coninhcount │ connoinherit\n──────────┼──────────┼────────────┼─────────────┼──────────────\n p_a_fkey │ p │ t │ 0 │ t\n p_a_fkey │ p1 │ t │ 0 │ t\n p_a_fkey │ p11 │ t │ 0 │ t\n(3 rows)\n\nIn this case, after detaching p1 from p, p1's foreign key's coninhcount\nturns to -1, which is not good.\n\nalter table p detach partition p1;\nselect conname, conrelid::regclass, conislocal, coninhcount, connoinherit\nfrom pg_constraint where conname like 'p%fkey%';\n conname │ conrelid │ conislocal │ coninhcount │ connoinherit\n──────────┼──────────┼────────────┼─────────────┼──────────────\n p_a_fkey │ p │ t │ 0 │ t\n p_a_fkey │ p11 │ t │ 0 │ t\n p_a_fkey │ p1 │ t │ -1 │ t\n(3 rows)\n\n-- case 2: foreign keys cloned to partitions after adding partitions\ncreate table p (a int references pk) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nselect conname, conrelid::regclass, conislocal, coninhcount, connoinherit\nfrom pg_constraint where conname like 'p%fkey%';\n conname │ conrelid │ conislocal │ coninhcount │ connoinherit\n──────────┼──────────┼────────────┼─────────────┼──────────────\n p_a_fkey │ p │ t │ 0 │ t\n p_a_fkey │ p1 │ f │ 1 │ f\n p_a_fkey │ p11 │ f │ 1 │ f\n(3 rows)\n\nAnyway, I propose we fix this by first getting rid of dependencies for\nforeign key constraints and instead rely on inheritance recursion for\ndropping 
the inherited constraints. Before we can do that, we'll need to\nconsistently set the inheritance properties of foreign key constraints\ncorrectly, that is, teach ATAddForeignKeyConstraint what\nclone_fk_constraints already does correctly. See the attached patch 0002\nfor that.\n\nI'm also attaching versions of 0001 and 0002 that can be applied to PG 11.\n\nThanks,\nAmit",
"msg_date": "Wed, 9 Jan 2019 19:21:38 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "problems with foreign keys on partitioned tables"
},
{
"msg_contents": "Hi Amit\n\nOn 2019-Jan-09, Amit Langote wrote:\n\n> I noticed a couple of problems with foreign keys on partitioned tables.\n\nOuch, thanks for reporting. I think 0001 needs a bit of a tweak in pg11\nto avoid an ABI break -- I intend to study this one and try to push\nearly next week. I'm going to see about pushing 0002 shortly,\nadding some of your tests.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 11 Jan 2019 19:39:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019-Jan-09, Amit Langote wrote:\n\n> 1. Foreign keys of partitions stop working correctly after being detached\n> from the parent table\n\n> This happens because the action triggers defined on the PK relation (pk)\n> refers to p as the referencing relation. On detaching p1 from p, p1's\n> data is no longer accessible to that trigger.\n\nOuch.\n\n> To fix this problem, we need create action triggers on PK relation\n> that refer to p1 when it's detached (unless such triggers already\n> exist which might be true in some cases). Attached patch 0001 shows\n> this approach.\n\nHmm, okay. I'm not in love with the idea that such triggers might\nalready exist -- seems unclean. We should remove redundant action\ntriggers when we attach a table as a partition, no?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 19:54:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019/01/18 7:54, Alvaro Herrera wrote:\n> On 2019-Jan-09, Amit Langote wrote:\n> \n>> 1. Foreign keys of partitions stop working correctly after being detached\n>> from the parent table\n> \n>> This happens because the action triggers defined on the PK relation (pk)\n>> refers to p as the referencing relation. On detaching p1 from p, p1's\n>> data is no longer accessible to that trigger.\n> \n> Ouch.\n> \n>> To fix this problem, we need create action triggers on PK relation\n>> that refer to p1 when it's detached (unless such triggers already\n>> exist which might be true in some cases). Attached patch 0001 shows\n>> this approach.\n> \n> Hmm, okay. I'm not in love with the idea that such triggers might\n> already exist -- seems unclean. We should remove redundant action\n> triggers when we attach a table as a partition, no?\n\nOK, I agree. I have updated the patch to make things work that way. With\nthe patch:\n\ncreate table pk (a int primary key);\ncreate table p (a int references pk) partition by list (a);\n\n-- this query shows the action triggers on the referenced rel ('pk'), name\n-- of the constraint that the trigger is part of and the foreign key rel\n-- ('p', etc.)\n\nselect tgrelid::regclass as pkrel, c.conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼───────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n(2 rows)\n\ncreate table p1 (\n a int references pk,\n foreign key (a) references pk (a) ON UPDATE CASCADE ON DELETE CASCADE\nDEFERRABLE,\n foreign key (a) references pk (a) MATCH FULL ON UPDATE CASCADE ON\nDELETE CASCADE\n) partition by list (a);\n\n-- p1_a_fkey on 'p1' is equivalent to p_a_fkey on 'p', but they're not\n-- attached yet\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 
'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n(8 rows)\n\ncreate table p11 (like p1, foreign key (a) references pk);\n\n-- again, p11_a_fkey, p1_a_fkey, and p_a_fkey are equivalent\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p11_a_fkey │ p11\n pk │ p11_a_fkey │ p11\n(10 rows)\n\n\nalter table p1 attach partition p11 for values in (1);\n\n-- p1_a_fkey and p11_a_fkey merged, so triggers for the latter dropped\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n(8 rows)\n\n-- p_a_fkey and p1_a_fkey merged, so triggers for the latter dropped\n\nalter table p attach partition p1 for values in (1);\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n(6 rows)\n\n\nalter table p detach 
partition p1;\n\n-- p1_a_fkey needs its own triggers again\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n(8 rows)\n\nalter table p1 detach partition p11;\n\n-- p11_a_fkey needs its own triggers again\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n pk │ p11_a_fkey │ p11\n pk │ p11_a_fkey │ p11\n pk │ p1_a_fkey1 │ p11\n pk │ p1_a_fkey1 │ p11\n pk │ p1_a_fkey2 │ p11\n pk │ p1_a_fkey2 │ p11\n(14 rows)\n\n-- try again\n\nalter table p1 attach partition p11 for values in (1);\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ p\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey │ p1\n pk │ p1_a_fkey │ p1\n(8 rows)\n\n\nalter table p attach partition p1 for values in (1);\n\nselect tgrelid::regclass as pkrel, conname as fkconname,\ntgconstrrelid::regclass as fkrel from pg_trigger t, pg_constraint c where\ntgrelid = 'pk'::regclass and tgconstraint = c.oid;\n pkrel │ fkconname │ fkrel\n───────┼────────────┼───────\n pk │ p_a_fkey │ p\n pk │ p_a_fkey │ 
p\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey1 │ p1\n pk │ p1_a_fkey2 │ p1\n pk │ p1_a_fkey2 │ p1\n(6 rows)\n\n\nBy the way, I also noticed that there's duplicated code in\nclone_fk_constraints() which 0001 gets rid of:\n\n datum = fastgetattr(tuple, Anum_pg_constraint_conpfeqop,\n tupdesc, &isnull);\n if (isnull)\n elog(ERROR, \"null conpfeqop\");\n arr = DatumGetArrayTypeP(datum);\n nelem = ARR_DIMS(arr)[0];\n if (ARR_NDIM(arr) != 1 ||\n nelem < 1 ||\n nelem > INDEX_MAX_KEYS ||\n ARR_HASNULL(arr) ||\n ARR_ELEMTYPE(arr) != OIDOID)\n elog(ERROR, \"conpfeqop is not a 1-D OID array\");\n memcpy(conpfeqop, ARR_DATA_PTR(arr), nelem * sizeof(Oid));\n\n- datum = fastgetattr(tuple, Anum_pg_constraint_conpfeqop,\n- tupdesc, &isnull);\n- if (isnull)\n- elog(ERROR, \"null conpfeqop\");\n- arr = DatumGetArrayTypeP(datum);\n- nelem = ARR_DIMS(arr)[0];\n- if (ARR_NDIM(arr) != 1 ||\n- nelem < 1 ||\n- nelem > INDEX_MAX_KEYS ||\n- ARR_HASNULL(arr) ||\n- ARR_ELEMTYPE(arr) != OIDOID)\n- elog(ERROR, \"conpfeqop is not a 1-D OID array\");\n- memcpy(conpfeqop, ARR_DATA_PTR(arr), nelem * sizeof(Oid));\n-\n\nI know you're working on a bug fix in the thread on pgsql-bugs which is\nrelated to the patch 0002 here, but attaching it here anyway, because it\nproposes to get rid of the needless dependencies which I didn't see\nmentioned on the other thread. Also, updated 0001 needed it to be rebased.\n\nLike the last time, I've also attached the patches that can be applied\nPG11 branch.\n\nThanks,\nAmit",
"msg_date": "Fri, 18 Jan 2019 14:27:08 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019-Jan-18, Amit Langote wrote:\n\n> OK, I agree. I have updated the patch to make things work that way.\n\nThanks, this is better. There were a few other things I didn't like, so\nI updated it. Mostly, two things:\n\n1. I didn't like a seqscan on pg_trigger, so I turned that into an\nindexed scan on the constraint OID, and then the other two conditions\nare checked in the returned tuples. Also, what's the point of\nduplicating code and checking how many you deleted? Just delete them\nall.\n\n2. I didn't like the ABI break, and it wasn't necessary: you can just\ncall createForeignKeyActionTriggers directly. That's much simpler.\n\nI also added tests. While running them, I noticed that my previous\ncommit was broken in terms of relcache invalidation. I don't really\nknow if this is a new problem with that commit, or an existing one. The\nfix is 0001.\n\nHaven't got around to your 0002 yet.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 18 Jan 2019 19:16:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 7:16 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Thanks, this is better. There were a few other things I didn't like, so\n> I updated it. Mostly, two things:\n>\n> 1. I didn't like a seqscan on pg_trigger, so I turned that into an\n> indexed scan on the constraint OID, and then the other two conditions\n> are checked in the returned tuples. Also, what's the point on\n> duplicating code and checking how many you deleted? Just delete them\n> all.\n\nYeah, I didn't quite like what that code looked like, but it didn't\noccur to me that there's an index on tgconstraint.\n\nIt looks much better now.\n\n> 2. I didn't like the ABI break, and it wasn't necessary: you can just\n> call createForeignKeyActionTriggers directly. That's much simpler.\n\nOK.\n\n> I also added tests. While running them, I noticed that my previous\n> commit was broken in terms of relcache invalidation. I don't really\n> know if this is a new problem with that commit, or an existing one. The\n> fix is 0001.\n\nLooks good.\n\nThanks,\nAmit\n\n",
"msg_date": "Sat, 19 Jan 2019 21:07:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "Pushed now, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 21 Jan 2019 20:12:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "Hi Amit,\n\nWill you please rebase 0002? Please add your proposed test cases to\nit, too.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 21 Jan 2019 20:30:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019/01/22 8:30, Alvaro Herrera wrote:\n> Hi Amit,\n> \n> Will you please rebase 0002? Please add your proposed tests cases to\n> it, too.\n\nDone. See the attached patches for HEAD and PG 11.\n\nThanks,\nAmit",
"msg_date": "Tue, 22 Jan 2019 13:29:43 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019-Jan-22, Amit Langote wrote:\n\n> On 2019/01/22 8:30, Alvaro Herrera wrote:\n> > Hi Amit,\n> > \n> > Will you please rebase 0002? Please add your proposed tests cases to\n> > it, too.\n> \n> Done. See the attached patches for HEAD and PG 11.\n\nI'm not quite sure I understand why the one in DefineIndex needs the\ndeps but nothing else does. I fear that you added that one just to\nappease the existing test that breaks otherwise, and I worry that with\nthat addition we're papering over some other, more fundamental bug.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 23 Jan 2019 18:13:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn 2019/01/24 6:13, Alvaro Herrera wrote:\n> On 2019-Jan-22, Amit Langote wrote:\n>> Done. See the attached patches for HEAD and PG 11.\n> \n> I'm not quite sure I understand why the one in DefineIndex needs the\n> deps but nothing else does. I fear that you added that one just to\n> appease the existing test that breaks otherwise, and I worry that with\n> that addition we're papering over some other, more fundamental bug.\n\nThinking more on this, my proposal to rip dependencies between parent and\nchild constraints altogether to resolve the bug I initially reported is\nstarting to sound a bit overambitious, especially considering that we'd\nneed to back-patch it (the patch didn't even consider index constraints\nproperly, creating a divergence between the behaviors of inherited foreign\nkey constraints and inherited index constraints). We can pursue it if\nonly to avoid bloating the catalog for what can be achieved with a little\nbit of additional code in tablecmds.c, but maybe we should refrain from\ndoing it in reaction to this particular bug.\n\nI've updated the patch that implements a much simpler fix for this\nparticular bug. Just to reiterate, the following illustrates the bug:\n\ncreate table pk (a int primary key);\ncreate table p (a int references pk) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (a);\ncreate table p11 partition of p1 for values in (1);\nalter table p detach partition p1;\nalter table p1 drop constraint p_a_fkey;\nERROR: constraint \"p_a_fkey\" of relation \"p11\" does not exist\n\nThe error occurs because ATExecDropConstraint, when recursively called on\np11, cannot find the constraint, as the dependency mechanism already dropped\nit. The new fix is to return from ATExecDropConstraint without recursing\nif the constraint being dropped is an index or foreign key constraint.\n\nA few hunks of the originally proposed patch are attached here as 0001,\nespecially the part which fixes ATAddForeignKeyConstraint to pass the\ncorrect value of connoninherit to CreateConstraintEntry (which should be\nfalse for partitioned tables). With that change, many tests start failing\nbecause of the above bug. That patch also adds a test case like the one\nabove, but it fails along with others due to the bug. Patch 0002 is the\naforementioned simpler fix to make the errors (existing and the newly\nadded) go away.\n\nThese patches apply unchanged to the PG 11 branch.\n\nThanks,\nAmit",
"msg_date": "Thu, 24 Jan 2019 21:43:20 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "Hello\n\nOn 2019-Jan-24, Amit Langote wrote:\n\n> Thinking more on this, my proposal to rip dependencies between parent and\n> child constraints altogether to resolve the bug I initially reported is\n> starting to sound a bit overambitious especially considering that we'd\n> need to back-patch it (the patch didn't even consider index constraints\n> properly, creating a divergence between the behaviors of inherited foreign\n> key constraints and inherited index constraints). We can pursue it if\n> only to avoid bloating the catalog for what can be achieved with little\n> bit of additional code in tablecmds.c, but maybe we should refrain from\n> doing it in reaction to this particular bug.\n\nWhile studying your fix it occurred to me that perhaps we could change\nthings so that we first collect a list of objects to drop, and only when\nwe're done recursing perform the deletion, as per the attached patch.\nHowever, this fails for the test case in your 0001 patch (but not the\none you show in your email body), because you added a stealthy extra\ningredient to it: the constraint in the grandchild has a different name,\nso when ATExecDropConstraint() tries to search for it by name, it's just\nnot there, not because it was dropped but because it has never existed\nin the first place.\n\nUnless I misunderstand, this means that your plan to remove those\ncatalog tuples won't work at all, because there is no way to find those\nconstraints other than via pg_depend if they have different names.\n\nI'm leaning towards the idea that your patch is the definitive fix and\nnot just a backpatchable band-aid.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 24 Jan 2019 12:08:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On Fri, Jan 25, 2019 at 12:08 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Jan-24, Amit Langote wrote:\n>\n> > Thinking more on this, my proposal to rip dependencies between parent and\n> > child constraints altogether to resolve the bug I initially reported is\n> > starting to sound a bit overambitious especially considering that we'd\n> > need to back-patch it (the patch didn't even consider index constraints\n> > properly, creating a divergence between the behaviors of inherited foreign\n> > key constraints and inherited index constraints). We can pursue it if\n> > only to avoid bloating the catalog for what can be achieved with little\n> > bit of additional code in tablecmds.c, but maybe we should refrain from\n> > doing it in reaction to this particular bug.\n>\n> While studying your fix it occurred to me that perhaps we could change\n> things so that we first collect a list of objects to drop, and only when\n> we're done recursing perform the deletion, as per the attached patch.\n> However, this fails for the test case in your 0001 patch (but not the\n> one you show in your email body), because you added a stealthy extra\n> ingredient to it: the constraint in the grandchild has a different name,\n> so when ATExecDropConstraint() tries to search for it by name, it's just\n> not there, not because it was dropped but because it has never existed\n> in the first place.\n\nDoesn't the following performDeletion() at the start of\nATExecDropConstraint(), through findDependentObject()'s own recursion,\ntake care of deleting *all* constraints, including those of?\n\n /*\n * Perform the actual constraint deletion\n */\n conobj.classId = ConstraintRelationId;\n conobj.objectId = con->oid;\n conobj.objectSubId = 0;\n\n performDeletion(&conobj, behavior, 0);\n\n> Unless I misunderstand, this means that your plan to remove those\n> catalog tuples won't work at all, because there is no way to find those\n> constraints other than via 
pg_depend if they have different names.\n\nYeah, that's right. Actually, I gave up on developing the patch\nfurther based on that approach (no-dependencies approach) when I\nedited the test to give the grandchild constraint its own name.\n\n> I'm leaning towards the idea that your patch is the definitive fix and\n> not just a backpatchable band-aid.\n\nYeah, I think so too.\n\nThanks,\nAmit\n\n",
"msg_date": "Fri, 25 Jan 2019 00:30:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On Fri, Jan 25, 2019 at 12:30 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Jan 25, 2019 at 12:08 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Jan-24, Amit Langote wrote:\n> >\n> > > Thinking more on this, my proposal to rip dependencies between parent and\n> > > child constraints altogether to resolve the bug I initially reported is\n> > > starting to sound a bit overambitious especially considering that we'd\n> > > need to back-patch it (the patch didn't even consider index constraints\n> > > properly, creating a divergence between the behaviors of inherited foreign\n> > > key constraints and inherited index constraints). We can pursue it if\n> > > only to avoid bloating the catalog for what can be achieved with little\n> > > bit of additional code in tablecmds.c, but maybe we should refrain from\n> > > doing it in reaction to this particular bug.\n> >\n> > While studying your fix it occurred to me that perhaps we could change\n> > things so that we first collect a list of objects to drop, and only when\n> > we're done recursing perform the deletion, as per the attached patch.\n> > However, this fails for the test case in your 0001 patch (but not the\n> > one you show in your email body), because you added a stealthy extra\n> > ingredient to it: the constraint in the grandchild has a different name,\n> > so when ATExecDropConstraint() tries to search for it by name, it's just\n> > not there, not because it was dropped but because it has never existed\n> > in the first place.\n>\n> Doesn't the following performDeletion() at the start of\n> ATExecDropConstraint(), through findDependentObject()'s own recursion,\n> take care of deleting *all* constraints, including those of?\n\nMeant to say: \"...including those of the grandchildren?\"\n\n> /*\n> * Perform the actual constraint deletion\n> */\n> conobj.classId = ConstraintRelationId;\n> conobj.objectId = con->oid;\n> conobj.objectSubId = 0;\n>\n> 
performDeletion(&conobj, behavior, 0);\n\nThanks,\nAmit\n\n",
"msg_date": "Fri, 25 Jan 2019 00:37:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019-Jan-25, Amit Langote wrote:\n\n> On Fri, Jan 25, 2019 at 12:30 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> > Doesn't the following performDeletion() at the start of\n> > ATExecDropConstraint(), through findDependentObject()'s own recursion,\n> > take care of deleting *all* constraints, including those of?\n> \n> Meant to say: \"...including those of the grandchildren?\"\n> \n> > /*\n> > * Perform the actual constraint deletion\n> > */\n> > conobj.classId = ConstraintRelationId;\n> > conobj.objectId = con->oid;\n> > conobj.objectSubId = 0;\n> >\n> > performDeletion(&conobj, behavior, 0);\n\nOf course it does when the dependencies are set up -- but in the\napproach we just gave up on, those dependencies would not exist.\nAnyway, my motivation was that performMultipleDeletions has the\nadvantage that it collects all objects to be dropped before deleting\nanyway, and so the error that a constraint was dropped in a previous\nrecursion step would not occur.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 24 Jan 2019 13:37:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019-Jan-24, Amit Langote wrote:\n\n> A few hunks of the originally proposed patch are attached here as 0001,\n> especially the part which fixes ATAddForeignKeyConstraint to pass the\n> correct value of connoninherit to CreateConstraintEntry (which should be\n> false for partitioned tables). With that change, many tests start failing\n> because of the above bug. That patch also adds a test case like the one\n> above, but it fails along with others due to the bug. Patch 0002 is the\n> aforementioned simpler fix to make the errors (existing and the newly\n> added) go away.\n\nCool, thanks. I made a bunch of fixes -- I added an elog(ERROR) check\nto ensure a constraint whose parent is being set does not already have a\nparent; that seemed in line with the new asserts that check the\nconinhcount. I also moved those asserts, changing the spirit of what\nthey checked. Also: I wasn't sure about stopping recursion for legacy\ninheritance in ATExecDropConstraint() for non-check constraints, so I\nchanged that to occur for partitioned tables only. Also, stylistic fixes.\n\nI was mildly surprised to realize that the my_fkey constraint on\nfk_part_1_1 is gone after dropping fkey on its parent, since it was\ndeclared locally when that table was created. However, it makes perfect\nsense in retrospect, since we made it dependent on its parent. I'm not\nterribly happy about this, but I don't quite see a way to make it better\nthat doesn't require much more code than is warranted.\n\n> These patches apply unchanged to the PG 11 branch.\n\nYeah, only if you tried to compile, it would have complained about\ntable_close() ;-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 24 Jan 2019 14:18:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
},
{
"msg_contents": "On 2019/01/25 2:18, Alvaro Herrera wrote:\n> On 2019-Jan-24, Amit Langote wrote:\n> \n>> A few hunks of the originally proposed patch are attached here as 0001,\n>> especially the part which fixes ATAddForeignKeyConstraint to pass the\n>> correct value of connoninherit to CreateConstraintEntry (which should be\n>> false for partitioned tables). With that change, many tests start failing\n>> because of the above bug. That patch also adds a test case like the one\n>> above, but it fails along with others due to the bug. Patch 0002 is the\n>> aforementioned simpler fix to make the errors (existing and the newly\n>> added) go away.\n> \n> Cool, thanks. I made a bunch of fixes -- I added an elog(ERROR) check\n> to ensure a constraint whose parent is being set does not already have a\n> parent; that seemed in line with the new asserts that check the\n> coninhcount. I also moved those asserts, changing the spirit of what\n> they checked. Also: I wasn't sure about stopping recursion for legacy\n> inheritance in ATExecDropConstraint() for non-check constraints, so I\n> changed that to occur in partitioned only. Also, stylistic fixes.\n\nThanks for the fixes and committing.\n\n> I was mildly surprised to realize that the my_fkey constraint on\n> fk_part_1_1 is gone after dropping fkey on its parent, since it was\n> declared locally when that table was created. However, it makes perfect\n> sense in retrospect, since we made it dependent on its parent. I'm not\n> terribly happy about this, but I don't quite see a way to make it better\n> that doesn't require much more code than is warranted.\n\nFwiw, CHECK constraints behave that way too. OTOH, detaching a partition\npreserves all the constraints, even the ones that were never locally\ndefined on the partition.\n\n>> These patches apply unchanged to the PG 11 branch.\n> \n> Yeah, only if you tried to compile, it would have complained about\n> table_close() ;-)\n\nOops, sorry. 
I was really in a hurry that day as dinnertime had passed.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 28 Jan 2019 12:02:13 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: problems with foreign keys on partitioned tables"
}
] |
[
{
"msg_contents": "Hi,\n\nthere was a report in #postgresql recently about a crash on Google Cloud\nSQL with the somewhat misleading message \"could not write to log file\"\nwhile in fact it was the xlog/wal:\n\n|PANIC: could not write to log file 000000010000019600000054 at offset\n| 13279232, length 245760: Cannot allocate memory \n|ERROR: could not write block 74666 in file \"base/18031/48587\": Cannot\n| allocate memory \n|CONTEXT: writing block 74666 of relation base/18031/48587 \n|LOG: server process (PID 5160) was terminated by signal 9: Killed \n\nThe slightly longer logfile can be found here: http://dpaste.com/2T61PS9\n\nI suggest rewording that message, e.g. \"could not write to transaction\nlog file\" or \"could not write to wal file\".\n\nAlso, the errno (ENOMEM) is curious (and the user wrote that Google\nmonitoring reported memory at 16/20GB at the time of the crash), but it\ncould be due to running on a cloud-fork? As you have no access to\nPGDATA, it sounds difficult to diagnose after the fact.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n",
"msg_date": "Wed, 09 Jan 2019 12:06:39 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 12:06 PM Michael Banck <michael.banck@credativ.de>\nwrote:\n\n> Hi,\n>\n> there was a report in #postgresql recently about a crash on Google Cloud\n> SQL with the somewhat misleading message \"could not write to log file\"\n> while in fact it was the xlog/wal:\n>\n> |PANIC: could not write to log file 000000010000019600000054 at offset\n> | 13279232, length 245760: Cannot allocate memory\n> |ERROR: could not write block 74666 in file \"base/18031/48587\": Cannot\n> | allocate memory\n> |CONTEXT: writing block 74666 of relation base/18031/48587\n> |LOG: server process (PID 5160) was terminated by signal 9: Killed\n>\n> The slightly longer logfile can be found here: http://dpaste.com/2T61PS9\n>\n> I suggest to reword that message, e.g. \"could not write to transaction\n> log file\" or \"could not write to wal file\".\n>\n\nGiven the change xlog -> wal, I would suggest \"could not write to wal file\"\nas the correct option there.\n\nAnd +1 for rewording it. I think there are also some other messages like it\nthat need to be changed, and also things like\n\n(errmsg(\"restored log file \\\"%s\\\" from archive\"\n\ncould do with an update.\n\n\nAlso, the errno (ENOMEM) is curious (and the user wrote that Google\n> monitoring reported memory at 16/20GB at the time of the crash), but it\n> could be due to running on a cloud-fork? As you have no access to\n> PGDATA, it sounds difficult to diagnose after the fact.\n>\n\nYeah, nobody knows what Google has done in their fork *or* how they\nactually measure those things, so without a repro I think that's hard..\n\n\n//Magnus",
"msg_date": "Wed, 9 Jan 2019 12:12:42 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-09 12:06:39 +0100, Michael Banck wrote:\n> there was a report in #postgresql recently about a crash on Google Cloud\n> SQL with the somewhat misleading message \"could not write to log file\"\n> while in fact it was the xlog/wal:\n> \n> |PANIC: could not write to log file 000000010000019600000054 at offset\n> | 13279232, length 245760: Cannot allocate memory\n> |ERROR: could not write block 74666 in file \"base/18031/48587\": Cannot\n> | allocate memory\n> |CONTEXT: writing block 74666 of relation base/18031/48587 \n> |LOG: server process (PID 5160) was terminated by signal 9: Killed \n> \n> The slightly longer logfile can be found here: http://dpaste.com/2T61PS9\n> \n> I suggest to reword that message, e.g. \"could not write to transaction\n> log file\" or \"could not write to wal file\".\n\nI'm quite unenthused about that. If anything, I'd remove detail and use\nthe standard error message about not being able to write to a file, and\ninclude the full path.\n\n\n> Also, the errno (ENOMEM) is curious (and the user wrote that Google\n> monitoring reported memory at 16/20GB at the time of the crash), but it\n> could be a due to running on a cloud-fork? As you have no access to\n> PGDATA, it sounds difficult to diagnose after the fact.\n\nYes, that sounds quite likely. This pretty much is a write() which isn't\ndocumented to return ENOMEM commonly, so I assume Google's doing\nsomething odd.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 9 Jan 2019 08:10:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "On Wed, Jan 09, 2019 at 08:10:43AM -0800, Andres Freund wrote:\n> I'm quite unenthused about that. If anything, I'd remove detail and use\n> the standard error message about not being able to write to a file, and\n> include the full path.\n\nPartially agreed. Those messages have been left out of 56df07b\nbecause they include some context about the offset and the length, and\nI don't think that we simply want to remove that information. What\nabout making the offset and the length part of an extra errdetail, and\nswitching the main error string to a more generic one?\n--\nMichael",
"msg_date": "Thu, 10 Jan 2019 10:01:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "\n\nOn January 9, 2019 5:01:40 PM PST, Michael Paquier <michael@paquier.xyz> wrote:\n>On Wed, Jan 09, 2019 at 08:10:43AM -0800, Andres Freund wrote:\n>> I'm quite unenthused about that. If anything, I'd remove detail and use\n>> the standard error message about not being able to write to a file, and\n>> include the full path.\n>\n>Partially agreed. Those messages have been left out of 56df07b\n>because they include some context about the offset and the length, and\n>I don't think that we simply want to remove that information. What\n>about making the offset and the length part of an extra errdetail, and\n>switch the main error string to a more generic one?\n\nIIRC we have other such errors including offset and length (and if not we'll grow some). It should be formatted as a generic write error with the file name, no reference to log file, etc, even if there's no precedent.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Wed, 09 Jan 2019 17:09:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "On Wed, Jan 09, 2019 at 05:09:19PM -0800, Andres Freund wrote:\n> IIRC we have other such errors including offset and length (and if\n> not we'll grow some). It should be formatted as a generic write\n> error with the file name, no reference to log file, etc, even if\n> there's no precedent. \n\nYeah, there are a couple of them:\naccess/transam/xlog.c:\nerrmsg(\"could not read from log segment %s, offset %u: %m\",\naccess/transam/xlog.c:\nerrmsg(\"could not read from log segment %s, offset %u: read %d of %zu\",\naccess/transam/xlogutils.c:\nerrmsg(\"could not seek in log segment %s to offset %u: %m\"\naccess/transam/xlogutils.c:\nerrmsg(\"could not read from log segment %s, offset %u, length %lu: %m\",\nreplication/walreceiver.c:\nerrmsg(\"could not seek in log segment %s to offset %u: %m\",\nreplication/walsender.c:\nerrmsg(\"could not seek in log segment %s to offset %u: %m\",\nreplication/walsender.c:\nerrmsg(\"could not read from log segment %s, offset %u, length %zu: %m\",\nreplication/walsender.c:\nerrmsg(\"could not read from log segment %s, offset %u: read %d of %zu\",\n--\nMichael",
"msg_date": "Thu, 10 Jan 2019 10:38:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 8:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 09, 2019 at 05:09:19PM -0800, Andres Freund wrote:\n> > IIRC we have other such errors including offset and length (and if\n> > not we'll grow some). It should be formatted as a generic write\n> > error with the file name, no reference to log file, etc, even if\n> > there's no precedent.\n>\n> Yeah, there are a couple of them:\n> access/transam/xlog.c:\n> errmsg(\"could not read from log segment %s, offset %u: %m\",\n\nIn smgr.c, we have:\n\n\"could not read block %u in file \\\"%s\\\": %m\"\n\nThat seems to be the closest thing we have to a generic message\ntemplate right now, but it's not entirely generic because it talks\nabout blocks. Maybe we should go with something like:\n\n\"could not read %u bytes in file \\\"%s\\\" at offset %u: %m\"\n\n...and use that for both WAL and smgr.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 11:17:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Misleading panic message in backend/access/transam/xlog.c"
}
] |
[
{
"msg_contents": "Use perfect hashing, instead of binary search, for keyword lookup.\n\nWe've been speculating for a long time that hash-based keyword lookup\nought to be faster than binary search, but up to now we hadn't found\na suitable tool for generating the hash function. Joerg Sonnenberger\nprovided the inspiration, and sample code, to show us that rolling our\nown generator wasn't a ridiculous idea. Hence, do that.\n\nThe method used here requires a lookup table of approximately 4 bytes\nper keyword, but that's less than what we saved in the predecessor commit\nafb0d0712, so it's not a big problem. The time savings is indeed\nsignificant: preliminary testing suggests that the total time for raw\nparsing (flex + bison phases) drops by ~20%.\n\nPatch by me, but it owes its existence to Joerg Sonnenberger;\nthanks also to John Naylor for review.\n\nDiscussion: https://postgr.es/m/20190103163340.GA15803@britannica.bec.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c64d0cd5ce24a344798534f1bc5827a9199b7a6e\n\nModified Files\n--------------\nsrc/common/Makefile | 9 +-\nsrc/common/kwlookup.c | 73 +++---\nsrc/include/common/kwlookup.h | 4 +\nsrc/include/parser/kwlist.h | 3 +-\nsrc/interfaces/ecpg/preproc/Makefile | 13 +-\nsrc/interfaces/ecpg/preproc/c_keywords.c | 51 ++--\nsrc/interfaces/ecpg/preproc/c_kwlist.h | 3 +-\nsrc/interfaces/ecpg/preproc/ecpg_kwlist.h | 3 +-\nsrc/pl/plpgsql/src/Makefile | 13 +-\nsrc/pl/plpgsql/src/pl_reserved_kwlist.h | 5 +-\nsrc/pl/plpgsql/src/pl_unreserved_kwlist.h | 7 +-\nsrc/tools/PerfectHash.pm | 376 ++++++++++++++++++++++++++++++\nsrc/tools/gen_keywordlist.pl | 53 ++++-\nsrc/tools/msvc/Solution.pm | 10 +-\n14 files changed, 516 insertions(+), 107 deletions(-)\n\n",
"msg_date": "Thu, 10 Jan 2019 00:48:01 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Use perfect hashing, instead of binary search,\n for keyword looku"
},
{
"msg_contents": "On Wed, Jan 9, 2019 at 7:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Use perfect hashing, instead of binary search, for keyword lookup.\n>\n> We've been speculating for a long time that hash-based keyword lookup\n> ought to be faster than binary search, but up to now we hadn't found\n> a suitable tool for generating the hash function. Joerg Sonnenberger\n> provided the inspiration, and sample code, to show us that rolling our\n> own generator wasn't a ridiculous idea. Hence, do that.\n>\n> The method used here requires a lookup table of approximately 4 bytes\n> per keyword, but that's less than what we saved in the predecessor commit\n> afb0d0712, so it's not a big problem. The time savings is indeed\n> significant: preliminary testing suggests that the total time for raw\n> parsing (flex + bison phases) drops by ~20%.\n>\n> Patch by me, but it owes its existence to Joerg Sonnenberger;\n> thanks also to John Naylor for review.\n\nWow. That is a VERY significant improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 9 Jan 2019 21:11:20 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use perfect hashing, instead of binary search,\n for keyword looku"
}
] |
[
{
"msg_contents": "I see somebody marked the CF as in-progress, but if anyone volunteered\nto be nagger-in-chief for this month, I didn't see that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 09 Jan 2019 20:43:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "BTW, have we got a commitfest manager for the January CF?"
},
{
"msg_contents": "On Wed, Jan 09, 2019 at 08:43:08PM -0500, Tom Lane wrote:\n> I see somebody marked the CF as in-progress, but if anyone volunteered\n> to be nagger-in-chief for this month, I didn't see that.\n\nNo volunteers as far as I know of...\n--\nMichael",
"msg_date": "Thu, 10 Jan 2019 10:52:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BTW, have we got a commitfest manager for the January CF?"
},
{
"msg_contents": "On Wed, Jan 09, 2019 at 08:43:08PM -0500, Tom Lane wrote:\n> I see somebody marked the CF as in-progress, but if anyone volunteered\n> to be nagger-in-chief for this month, I didn't see that.\n\nI'm happy to do it.\n\nWould love to chat with recent prior CFMs, if they're willing.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Thu, 10 Jan 2019 04:13:44 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: BTW, have we got a commitfest manager for the January CF?"
}
] |
[
{
"msg_contents": "Has the policy on cross-posting to multiple lists been hardened recently?\n\nThe \"Crash on ALTER TABLE\" thread [1] started on -bugs, but Andrew's\nmessage on 8 Jan with an initial proposed patch and my response later\nthat day both CC'ed -hackers and seem to have been rejected, and so\nare missing from the archives.\n\nIn that case, it's not a big deal because subsequent replies included\nthe text from the missing messages, so it's still possible to follow\nthe discussion, but I wanted to check whether this was an intentional\nchange of policy. If so, it seems a bit harsh to flat-out reject these\nmessages. My prior understanding was that cross-posting, while\ngenerally discouraged, does still sometimes have value.\n\n[1] https://www.postgresql.org/message-id/flat/CAEZATCVqksrnXybSaogWOzmVjE3O-NqMSpoHDuDw9_7mhNpeLQ%40mail.gmail.com#2c25e9a783d4685912dcef8b3f3edd63\n\nRegards,\nDean\n\n",
"msg_date": "Thu, 10 Jan 2019 09:56:30 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Has the policy on cross-posting to multiple lists been hardened recently?\n\nNot that I've heard of.\n\n> The \"Crash on ALTER TABLE\" thread [1] started on -bugs, but Andrew's\n> message on 8 Jan with an initial proposed patch and my response later\n> that day both CC'ed -hackers and seem to have been rejected, and so\n> are missing from the archives.\n\nI've done the same recently, without problems. I'd suggest inquiring\non the pgsql-www list; project infrastructure issues are not really\non-topic here.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 10 Jan 2019 10:58:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 12:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > The \"Crash on ALTER TABLE\" thread [1] started on -bugs, but Andrew's\n> > message on 8 Jan with an initial proposed patch and my response later\n> > that day both CC'ed -hackers and seem to have been rejected, and so\n> > are missing from the archives.\n>\n> I've done the same recently, without problems. I'd suggest inquiring\n> on the pgsql-www list; project infrastructure issues are not really\n> on-topic here.\n\nFwiw, an email I sent yesterday was rejected similarly because I'd\ntried to send it to both pgsql-hackers and pgsql-performance. I\nmentioned about that when I resent the same email successfully [1]\nafter dropping pgsql-performance from the list of recipients.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/96720c99-ffa0-01ad-c594-0504c8eda708%40lab.ntt.co.jp\n\n",
"msg_date": "Fri, 11 Jan 2019 01:11:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n(moving to -www as suggested downthread and as generally more\nappropriate)\n\n* Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> Has the policy on cross-posting to multiple lists been hardened recently?\n\nSo, the short answer is 'yes'.\n\nWe've made a few different changes in the recent weeks. The first\nchange that was made was actually to start dropping emails where the\nlist is being BCC'd. That was done a couple of weeks ago and seems to\nhave gone well and has reduced the amount of spam our moderators are\ndealing with.\n\nThis most recent change was to implement a policy where we don't allow\npublic lists to be CC'd with other public lists; when that happens we\ninstead reply with an email basically saying \"please pick the right list\nto send your email to.\"\n\nPerhaps that hasn't been getting through to people...? Though I had\nsomeone respond to -owner basically saying \"thanks, I'll pick the right\nlist\", so at least some are seeing it.\n\nAs for how this change came to be implemented without much discussion\nexternally, I'm afraid that's probably the combination of \"well, the BCC\nchange went just fine and no one complained\", confusion between folks on\ninfra as to if we had only discussed it internally or if we had already\ndiscussed it externally with people (the individual who actually made\nthe change *cough* apparently thought it had already been discussed\nexternally when we hadn't and probably should have at least announced\nit when we did make the change anyway...), and general frustration among\nsome about the increasing number of cross-post emails we're getting\nwhich really shouldn't be cross-posted.\n\nIn an ideal world, everyone would know that they really *shouldn't*\ncross-post, and we also wouldn't have extremely long many-mailing-list\ncross-posted threads, and we wouldn't need to have such a policy, but\nthat's not really where we are.\n\nOne thing which hadn't been considered and probably should 
have is the\nimpact on existing threads, but I'm not sure if we really could have\nsensibly done something about that.\n\nThen there's the big question which we really should have discussed\nahead of time, but, do people feel that such a restriction ends up doing\nmore harm than good? Are there concerns about the BCC restriction? In\nthe short period of time that it's been in place, I've seen some good\ncome from it in the form of people learning to post to the correct list\ninstead of just cross-posting to a bunch of lists, but I've also seen\n(now) the cases where existing threads were confused do to the change,\nso I suppose I'm on the fence, though I still tend towards having the\npolicy in place and hoping that it doesn't overly bother existing users\nwhile helping newcomers.\n\nWe're here now though, so, thoughts? Should I go undo it right away?\nShould we see how it goes? Try other things? We could possibly have it\nonly apply to emails from people who don't have accounts or who aren't\nsubscribed to the lists? Or have a flag on a per-account basis which\nbasically says \"let me cross-post\"? Open to suggestions (note: I've not\nrun all the above ideas by the other pglister hacker *cough*, so I can't\nsay if all of them would be possible/reasonable, just throwing out\nideas).\n\nThanks!\n\nStephen",
"msg_date": "Thu, 10 Jan 2019 12:18:35 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On 2019-Jan-10, Stephen Frost wrote:\n\n> Are there concerns about the BCC restriction?\n\nNone here.\n\n> We're here now though, so, thoughts? Should I go undo it right away?\n\nI don't like the crosspost ban, personally. Some sort of limit makes\nsense, but I think cross-posting to two lists should be allowed. I\ndon't see an use case for cross-posting to more than two lists (though\nmaybe -hackers + -bugs + -docs would make sense ...)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 10 Jan 2019 14:34:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n>> Has the policy on cross-posting to multiple lists been hardened recently?\n\n> So, the short answer is 'yes'.\n> Perhaps that hasn't been getting through to people...?\n\nIf this was publicly announced anywhere, I didn't see it.\nI would have pushed back if I had. CC'ing -hackers on a reply to\na bug report is something I do all the time, and I do not think\nit'd be a good idea to stop doing so, nor to make the thread\ndisappear from the -bugs archives.\n\nI'm quite on board with the need to reduce useless cross-posting,\nbut this is not the solution.\n\nMaybe there could be a different rule for initial submissions\n(one list only) than follow-ups (can add lists)?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 10 Jan 2019 12:47:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Thu, 10 Jan 2019 at 17:18, Stephen Frost <sfrost@snowman.net> wrote:\n> * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> > Has the policy on cross-posting to multiple lists been hardened recently?\n>\n> This most recent change was to implement a policy where we don't allow\n> public lists to be CC'd with other public lists; when that happens we\n> instead reply with an email basically saying \"please pick the right list\n> to send your email to.\"\n\nThe problem with that as a mechanism for stopping people from cross\nposting is that it doesn't (and can't) actually stop the message from\nbeing delivered to people already on the CC list for that thread.\n\nSo in this case, Andrew first cross posted it, but I was already on\nthe CC list, so I got the message as normal, not realising that it\nhadn't come via the lists. I then hit \"Reply all\" ... (rinse and\nrepeat). I didn't even immediately notice the failure to send the\nmessage because my own reply just got added to end of the conversation\nin my mail client, but presumably the intention was that both Andrew\nand I should have noticed and re-posted to a single list. But of\ncourse that would then have annoyed all the people already on the\nthread who would have got duplicates of mails they had already\nreceived.\n\nPersonally, I don't have a problem with people cross posting. I think\nthere are real cases where it's the right thing to do -- it's common\npractice for legitimate reasons. Yes, it can be abused, but there are\nworse abuses of email all the time.\n\nRegards,\nDean\n\n",
"msg_date": "Thu, 10 Jan 2019 18:19:47 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> >> Has the policy on cross-posting to multiple lists been hardened recently?\n>\n> > So, the short answer is 'yes'.\n> > Perhaps that hasn't been getting through to people...?\n>\n> If this was publicly announced anywhere, I didn't see it.\n> I would have pushed back if I had. CC'ing -hackers on a reply to\n> a bug report is something I do all the time, and I do not think\n> it'd be a good idea to stop doing so, nor to make the thread\n> disappear from the -bugs archives.\n>\n> I'm quite on board with the need to reduce useless cross-posting,\n> but this is not the solution.\n\nAgreed. Similarly, posts from pgadmin-support sometimes end up\nintentionally being cross posted to pgadmin-hackers.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 10 Jan 2019 23:53:47 +0530",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Dave Page (dpage@pgadmin.org) wrote:\n> On Thu, Jan 10, 2019 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm quite on board with the need to reduce useless cross-posting,\n> > but this is not the solution.\n> \n> Agreed. Similarly, posts from pgadmin-support sometimes end up\n> intentionally being cross posted to pgadmin-hackers.\n\nSo, in implementing this we did consider that different lists might wish\nfor different policies and given that -hackers seems to be common among\nthe discussion, what if we just dropped the restriction for posts to\n-hackers?\n\nThat is, emails to -bugs and -hackers would be allowed through to both\nlists, cross-posts to -general and -sql, for example, would get the\nbounce-back.\n\nAs there seems relatively little downside, I've gone ahead and made that\nchange, but I don't mean to forstall further discussion. Should we\napply that change to other lists? To all of them?\n\nTom's idea about allowing cross-posts on replies is an interesting one\nas well. I've also added a certain someone to the thread explicitly to\nsee what his thoughts are on that, and the rest.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 10 Jan 2019 14:18:48 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 12:18 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Are there concerns about the BCC restriction?\n\nOne thing people sometimes do when something is posted to the wrong\nlist is (1) reply, (2) explain in the reply that the message was\nposted to the wrong pace, (3) move the original list from Cc into Bcc,\nand (4) add the correct list into Cc. That has the advantage that\npeople on the original list can see that someone replied (which avoids\nduplicate replies by different people) and know where to go to find\nthe rest of the discussion if they want to see it.\n\nI think the idea of allowing 2 lists but not >2 is probably a good\none. Also, it might be good to be more permissive for, say, people\nwho have successfully posted at least 1000 emails to the lists. Such\npeople presumably are less likely to do abusive things, and more\nlikely to care about and heed any correction given to them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 10 Jan 2019 14:46:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Jan 10, 2019 at 12:18 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Are there concerns about the BCC restriction?\n> \n> One thing people sometimes do when something is posted to the wrong\n> list is (1) reply, (2) explain in the reply that the message was\n> posted to the wrong pace, (3) move the original list from Cc into Bcc,\n> and (4) add the correct list into Cc. That has the advantage that\n> people on the original list can see that someone replied (which avoids\n> duplicate replies by different people) and know where to go to find\n> the rest of the discussion if they want to see it.\n\nWhile considering that, we actively went and looked at both the\nfrequency and the success of that approach and, frankly, neither were\nvery inspiring. There were very few cases of that being tried and, as I\nrecall anyway, none of them were actually successful in 'moving' the\nthread- that is, people continued on the original list to begin with\nanyway, except that some of the thread was now on another list.\n\nWe had discussed allowing bcc's to lists when we detect that there's at\nleast *some* valid list in the To or Cc line, but it didn't seem\nworthwhile given the research that was done.\n\nFor some (private) lists, we have the policy set to moderate emails\nwhich bcc those lists (such as -security). Also, the \"don't CC multiple\nlists\" was only applied to public/archived lists to begin with,\nintentionally.\n\n> I think the idea of allowing 2 lists but not >2 is probably a good\n> one. Also, it might be good to be more permissive for, say, people\n> who have successfully posted at least 1000 emails to the lists. 
Such\n> people presumably are less likely to do abusive things, and more\n> likely to care about and heed any correction given to them.\n\nYeah, that's in-line with what I had suggested up-thread where we have\nsome kind of flag which can either be set by the user themselves (maybe\nwe have some language above the flag that cautions against cross-posts\nand whatnot), or set by the system (>1000 emails, as you say, or maybe\n\"after 2 weeks of being subscribed to a list\", similar to the community\naccount \"cooling off\" period we have), or maybe by the list admins\n(likely initially based on a heuristic of \"lots of emails sent\" or\nsomething, but then handled on an individual basis).\n\nI am a little concerned that we make the system too complicated for\npeople to understand too though. Haven't got a particularly good answer\nfor that, sadly.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 10 Jan 2019 15:03:32 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-10 12:47:03 -0500, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> >> Has the policy on cross-posting to multiple lists been hardened recently?\n> \n> > So, the short answer is 'yes'.\n> > Perhaps that hasn't been getting through to people...?\n> \n> If this was publicly announced anywhere, I didn't see it.\n> I would have pushed back if I had. CC'ing -hackers on a reply to\n> a bug report is something I do all the time, and I do not think\n> it'd be a good idea to stop doing so, nor to make the thread\n> disappear from the -bugs archives.\n\n+1\n\nThis seems quite the significant change to make without public\ndiscussion.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 10 Jan 2019 12:27:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On 2019-01-10 12:18:35 -0500, Stephen Frost wrote:\n> Should I go undo it right away?\n\nYes.\n\n",
"msg_date": "Thu, 10 Jan 2019 12:52:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": ">> I'm quite on board with the need to reduce useless cross-posting,\n>> but this is not the solution.\n> \n> Agreed. Similarly, posts from pgadmin-support sometimes end up\n> intentionally being cross posted to pgadmin-hackers.\n\nAnother use case is, pgsql-docs and pgsql-hackers. For non trivial\ndocumentation changes I would like to register a doc patch to CF, but\nCF app does not pick up any messages other than posted in\npgsql-hackers. So to discuss with pgsql-doc subscribers, while dealing\nwith CF app, I would like cross postings for pgsql-hackers and\npgsql-docs.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Fri, 11 Jan 2019 10:08:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Tatsuo Ishii (ishii@sraoss.co.jp) wrote:\n> >> I'm quite on board with the need to reduce useless cross-posting,\n> >> but this is not the solution.\n> > \n> > Agreed. Similarly, posts from pgadmin-support sometimes end up\n> > intentionally being cross posted to pgadmin-hackers.\n> \n> Another use case is, pgsql-docs and pgsql-hackers. For non trivial\n> documentation changes I would like to register a doc patch to CF, but\n> CF app does not pick up any messages other than posted in\n> pgsql-hackers. So to discuss with pgsql-doc subscribers, while dealing\n> with CF app, I would like cross postings for pgsql-hackers and\n> pgsql-docs.\n\nSo, cross-posting between -hackers and -docs should be working now,\nthanks to the change I made yesterday.\n\nAfter stealing some time from Magnus to chat quickly about this (he\nseems to be mostly unavailable at present), what we're trying to figure\nout is what the group, overall, wants, and in particular if the change\nto allow cross-posting with -hackers solves the valid use-cases while\npreventing the invalid use-cases (like cross-posting between -general,\n-performance, and -sql).\n\nOf course, it isn't perfect, but then it's unlikely that anything will\nbe. Changes which require us to write additional code into pglister\nwill, of course, take longer, but we can work towards it if there's\nagreement about what such a change would look like. In the interim, we\ncould see how things go with the current configuration, or we could add\nother lists to the 'exclude', beyond just -hackers and the private\nlists, or we could add them all (effectively going back to where things\nwere before the changes were made).\n\nThoughts? Specific votes in one of those directions would help me, at\nleast, figure out what should be done today.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 11 Jan 2019 11:38:05 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 11:38 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Thoughts? Specific votes in one of those directions would help me, at\n> least, figure out what should be done today.\n\nWell, you know, you could just undo the ban which you imposed\nunilaterally and which nobody so far has said they liked, and multiple\npeople have said they disliked. Then after having the public\ndiscussion about what the policy should be, you could implement the\nconclusions of that discussion.\n\nI mean, personally, I have no problem with SOME cross-posting\nrestrictions, but nothing you've proposed so far seems very good,\nother than maybe the >2 rule. But if you're looking to understand\nwhat people want better, you don't really need more votes. What has\nbeen said by a whole bunch of people is not in any significant way\nunclear. They don't like the restrictions, and they do like being\nconsulted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 11:43:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n> After stealing some time from Magnus to chat quickly about this (he\n> seems to be mostly unavailable at present), what we're trying to figure\n> out is what the group, overall, wants, and in particular if the change\n> to allow cross-posting with -hackers solves the valid use-cases while\n> preventing the invalid use-cases (like cross-posting between -general,\n> -performance, and -sql).\n\nThose don't really seem to be common and painful enough to really need a\ntechnical solution. -performance still seems like a useful subset of\npeople, and sometimes threads migrate to/from there. I'd personally just\nmerge -sql with -general, it doesn't seem to have a use-case left\nanymore. But that can be done later.\n\n\n> Of course, it isn't perfect, but then it's unlikely that anything will\n> be. Changes which require us to write additional code into pglister\n> will, of course, take longer, but we can work towards it if there's\n> agreement about what such a change would look like. In the interim, we\n> could see how things go with the current configuration, or we could add\n> other lists to the 'exclude', beyond just -hackers and the private\n> lists, or we could add them all (effectively going back to where things\n> were before the changes were made).\n> \n> Thoughts? Specific votes in one of those directions would help me, at\n> least, figure out what should be done today.\n\nI think you should just revert to the prior state, and then we can\ndiscuss potential solutions and the problems they're intended to\naddress. I find it baffling that after being called out for\nunilateral/not publicly discussed decisions you attempt to address that\ncriticism by continuing to make unilateral decisions.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 11 Jan 2019 10:23:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On 2019-Jan-11, Andres Freund wrote:\n\n> On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n\n> > Thoughts? Specific votes in one of those directions would help me, at\n> > least, figure out what should be done today.\n> \n> I think you should just revert to the prior state, and then we can\n> discuss potential solutions and the problems they're intended to\n> address.\n\n+1\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 11 Jan 2019 15:33:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 7:33 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Jan-11, Andres Freund wrote:\n>\n> > On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n>\n> > > Thoughts? Specific votes in one of those directions would help me, at\n> > > least, figure out what should be done today.\n> >\n> > I think you should just revert to the prior state, and then we can\n> > discuss potential solutions and the problems they're intended to\n> > address.\n>\n> +1\n>\n\n\nI've reverted this change across all lists it was enabled for.\n\nAnd for the record, I'm the one who asked Stephen to go for a second round\nof feedback and not just immediately revert it (he pinged me on chat, as I\nwas unable to keep track of the mail thread myself due to other commitments\nand airplanes and things).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jan 11, 2019 at 7:33 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Jan-11, Andres Freund wrote:\n\n> On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n\n> > Thoughts? Specific votes in one of those directions would help me, at\n> > least, figure out what should be done today.\n> \n> I think you should just revert to the prior state, and then we can\n> discuss potential solutions and the problems they're intended to\n> address.\n\n+1I've reverted this change across all lists it was enabled for.And for the record, I'm the one who asked Stephen to go for a second round of feedback and not just immediately revert it (he pinged me on chat, as I was unable to keep track of the mail thread myself due to other commitments and airplanes and things).-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 11 Jan 2019 21:18:48 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jan-11, Andres Freund wrote:\n>> On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n>>> Thoughts? Specific votes in one of those directions would help me, at\n>>> least, figure out what should be done today.\n\n>> I think you should just revert to the prior state, and then we can\n>> discuss potential solutions and the problems they're intended to\n>> address.\n\n> +1\n\nSame here. The problem you want to solve has been there for decades,\nwe don't need a solution urgently.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 16:11:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jan-11, Andres Freund wrote:\n> >> On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n> >>> Thoughts? Specific votes in one of those directions would help me, at\n> >>> least, figure out what should be done today.\n> \n> >> I think you should just revert to the prior state, and then we can\n> >> discuss potential solutions and the problems they're intended to\n> >> address.\n> \n> > +1\n> \n> Same here. The problem you want to solve has been there for decades,\n> we don't need a solution urgently.\n\nSo, this thread never got to anywhere and, unsurprisingly, we're seeing\nnot just a continuing set of cross-posts that shouldn't be, but an\nincrease in them.\n\nAt this point, I'd suggest we start moderating such cross-posts, letting\nmoderators know that they should reject ones that aren't done with any\nthought to it with a request to the submitter to please pick a list\ninstead of spamming them all.\n\nThoughts?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 22 May 2020 13:48:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Fri, May 22, 2020 at 7:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > On 2019-Jan-11, Andres Freund wrote:\n> > >> On 2019-01-11 11:38:05 -0500, Stephen Frost wrote:\n> > >>> Thoughts? Specific votes in one of those directions would help me, at\n> > >>> least, figure out what should be done today.\n> >\n> > >> I think you should just revert to the prior state, and then we can\n> > >> discuss potential solutions and the problems they're intended to\n> > >> address.\n> >\n> > > +1\n> >\n> > Same here. The problem you want to solve has been there for decades,\n> > we don't need a solution urgently.\n>\n> So, this thread never got to anywhere and, unsurprisingly, we're seeing\n> not just a continuing set of cross-posts that shouldn't be, but an\n> increase in them.\n>\n> At this point, I'd suggest we start moderating such cross-posts, letting\n> moderators know that they should reject ones that aren't done with any\n> thought to it with a request to the submitter to please pick a list\n> instead of spamming them all.\n>\n> Thoughts?\n\nHuge +1\n\n\n",
"msg_date": "Fri, 22 May 2020 19:54:34 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> At this point, I'd suggest we start moderating such cross-posts, letting\n> moderators know that they should reject ones that aren't done with any\n> thought to it with a request to the submitter to please pick a list\n> instead of spamming them all.\n\n+1 ... the problem does seem to be getting worse lately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 May 2020 15:04:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Fri, May 22, 2020 at 9:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > At this point, I'd suggest we start moderating such cross-posts, letting\n> > moderators know that they should reject ones that aren't done with any\n> > thought to it with a request to the submitter to please pick a list\n> > instead of spamming them all.\n>\n> +1 ... the problem does seem to be getting worse lately.\n>\n> regards, tom lane\n>\n\nI finally managed to get around to pushing code into pglister that allows a\n\"moderate\" policy to be configured for CC handling on lists, and not just\ndiscard (it was originally decided we would never want this, but it's\npretty clear the ideas around this has changed).\n\nWe have a few internal lists set to discard at this point. And to be clear\nof the differences:\n* Allow -- any number of CCs are allowed\n* Moderate -- if more than one list is in to or cc, email gets moderated\nand sender gets a notice (with option to withdraw)\n* Discard -- if more than one list in to or cc, email gets discarded, and\nsender gets a notice\n\nIf an email is cced between a list that's moderate and one that's discard,\nit gets discarded from the one list and moderated on the other one, and the\nsender gets two separate notices. If it's cced between two lists that are\nboth in moderate, the sender gets one moderation notice for each of them.\nIf it's only cced between lists with discard policy, sender gets a single\nnotice.\n\nI haven't (yet) reconfigured any lists. But right now all our general lists\nhave policy \"allow\". 
Should we more or less change all our public lists to\nbe \"moderate\"?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, May 22, 2020 at 9:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Stephen Frost <sfrost@snowman.net> writes:\n> At this point, I'd suggest we start moderating such cross-posts, letting\n> moderators know that they should reject ones that aren't done with any\n> thought to it with a request to the submitter to please pick a list\n> instead of spamming them all.\n\n+1 ... the problem does seem to be getting worse lately.\n\n regards, tom lane\nI finally managed to get around to pushing code into pglister that allows a \"moderate\" policy to be configured for CC handling on lists, and not just discard (it was originally decided we would never want this, but it's pretty clear the ideas around this has changed).We have a few internal lists set to discard at this point. And to be clear of the differences:* Allow -- any number of CCs are allowed* Moderate -- if more than one list is in to or cc, email gets moderated and sender gets a notice (with option to withdraw)* Discard -- if more than one list in to or cc, email gets discarded, and sender gets a noticeIf an email is cced between a list that's moderate and one that's discard, it gets discarded from the one list and moderated on the other one, and the sender gets two separate notices. If it's cced between two lists that are both in moderate, the sender gets one moderation notice for each of them. If it's only cced between lists with discard policy, sender gets a single notice.I haven't (yet) reconfigured any lists. But right now all our general lists have policy \"allow\". Should we more or less change all our public lists to be \"moderate\"?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 11 Jul 2020 17:51:59 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I haven't (yet) reconfigured any lists. But right now all our general lists\n> have policy \"allow\". Should we more or less change all our public lists to\n> be \"moderate\"?\n\nThe only case that might be a bad idea IMO is cross-posts between\npgsql-bugs and other lists. I could personally do without that case\ntoo, but we have done it often in the past (and I think there's at\nleast one such thread active right now).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 13:14:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > I haven't (yet) reconfigured any lists. But right now all our general lists\n> > have policy \"allow\". Should we more or less change all our public lists to\n> > be \"moderate\"?\n> \n> The only case that might be a bad idea IMO is cross-posts between\n> pgsql-bugs and other lists. I could personally do without that case\n> too, but we have done it often in the past (and I think there's at\n> least one such thread active right now).\n\nSuch cases wouldn't be dropped- just moderated, at least until/unless we\nimplement something to allow bypassing that moderation in some cases.\n\n+1 for enabling it across the board and then we can keep an eye on it\nand if it becomes a lot of effort for moderators or we end up with\nthings getting too delayed then we can always adjust either the lists\nthis is applied to, or have a mechanism/flag to allow certain posters to\nbypass this particular moderation, or similar.\n\nIn addition, I would add this to: https://www.postgresql.org/list/\n\nTip #3: Choose the most appropriate list\n\nChoose the most appropriate individual list for your question-\nplease do not cross-post between the mailing lists (unless there is a\nspecific reason, such as a confirmed bug reported on -bugs leading into\na discussion which is appropriate for -hackers). Cross-posted emails\n(ones where more than one list is included in the To or CC) will be\nmoderated and therefore will also take longer to reach subscribers.\n\n(or something along those lines)\n\nLastly, let's make sure to notify all the moderators explicitly of the\nchange- I'm not sure if all of them follow -www.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 13 Jul 2020 08:21:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> Tip #3: Choose the most appropriate list\n> \n> Choose the most appropriate individual list for your question-\n> please do not cross-post between the mailing lists (unless there is a\n> specific reason, such as a confirmed bug reported on -bugs leading into\n> a discussion which is appropriate for -hackers). Cross-posted emails\n> (ones where more than one list is included in the To or CC) will be\n> moderated and therefore will also take longer to reach subscribers.\n\nConcretely, I propose to push the attached later today, unless anyone\nhas an issue with it.\n\nThis restructures the page a bit to title the Tips section explicitly,\nand moves the title for Subscribing/Unsubscribing down to actually be\nover that part of the page, and adds a paragraph explicitly talking\nabout Unsubscribing, since we didn't actually have that before (even\nthough the title implied we did..).\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Jul 2020 10:23:22 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 4:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > Tip #3: Choose the most appropriate list\n> >\n> > Choose the most appropriate individual list for your question-\n> > please do not cross-post between the mailing lists (unless there is a\n> > specific reason, such as a confirmed bug reported on -bugs leading into\n> > a discussion which is appropriate for -hackers). Cross-posted emails\n> > (ones where more than one list is included in the To or CC) will be\n> > moderated and therefore will also take longer to reach subscribers.\n>\n> Concretely, I propose to push the attached later today, unless anyone\n> has an issue with it.\n>\n> This restructures the page a bit to title the Tips section explicitly,\n> and moves the title for Subscribing/Unsubscribing down to actually be\n> over that part of the page, and adds a paragraph explicitly talking\n> about Unsubscribing, since we didn't actually have that before (even\n> though the title implied we did..).\n>\n\nLGTM.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jul 14, 2020 at 4:23 PM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> Tip #3: Choose the most appropriate list\n> \n> Choose the most appropriate individual list for your question-\n> please do not cross-post between the mailing lists (unless there is a\n> specific reason, such as a confirmed bug reported on -bugs leading into\n> a discussion which is appropriate for -hackers). 
Cross-posted emails\n> (ones where more than one list is included in the To or CC) will be\n> moderated and therefore will also take longer to reach subscribers.\n\nConcretely, I propose to push the attached later today, unless anyone\nhas an issue with it.\n\nThis restructures the page a bit to title the Tips section explicitly,\nand moves the title for Subscribing/Unsubscribing down to actually be\nover that part of the page, and adds a paragraph explicitly talking\nabout Unsubscribing, since we didn't actually have that before (even\nthough the title implied we did..).LGTM. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 15 Jul 2020 12:41:02 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Tue, Jul 14, 2020 at 4:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Stephen Frost (sfrost@snowman.net) wrote:\n> > > Tip #3: Choose the most appropriate list\n> > >\n> > > Choose the most appropriate individual list for your question-\n> > > please do not cross-post between the mailing lists (unless there is a\n> > > specific reason, such as a confirmed bug reported on -bugs leading into\n> > > a discussion which is appropriate for -hackers). Cross-posted emails\n> > > (ones where more than one list is included in the To or CC) will be\n> > > moderated and therefore will also take longer to reach subscribers.\n> >\n> > Concretely, I propose to push the attached later today, unless anyone\n> > has an issue with it.\n> >\n> > This restructures the page a bit to title the Tips section explicitly,\n> > and moves the title for Subscribing/Unsubscribing down to actually be\n> > over that part of the page, and adds a paragraph explicitly talking\n> > about Unsubscribing, since we didn't actually have that before (even\n> > though the title implied we did..).\n> \n> LGTM.\n\nThanks, pushed.\n\nWith that done, I think we can go ahead and enable the moderation of\ncross-posted emails.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 15 Jul 2020 09:32:37 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On 2020-Jul-15, Stephen Frost wrote:\n\n> With that done, I think we can go ahead and enable the moderation of\n> cross-posted emails.\n\nBTW now that this is working, I think we should discuss that if person A\ncross-posts, and that post is approved, then whenever person B replies\nit should also be approved -- surely there's no need to approve the\ncross-posting (for the known subset of lists) for each reply.\n\n(We were just bitten by that in thread\nhttps://postgr.es/m/15858-9572469fd3b73263@postgresql.org )\n\nRight?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 17 Sep 2020 15:49:23 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> BTW now that this is working, I think we should discuss that if person A\n> cross-posts, and that post is approved, then whenever person B replies\n> it should also be approved -- surely there's no need to approve the\n> cross-posting (for the known subset of lists) for each reply.\n\nIf that can be automated it'd surely make things noticeably less painful.\nAs is, once somebody's started a multi-list thread, the only way to get\nout of trouble is for someone to remember to remove other lists from a\nreply ... and even then, if anyone replies to an earlier post, it's a mess\nall over again. But I didn't realize we had the ability to pre-approve\nwhole threads for this filter?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Sep 2020 15:03:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
},
{
"msg_contents": "On Thu, Sep 17, 2020 at 9:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > BTW now that this is working, I think we should discuss that if person A\n> > cross-posts, and that post is approved, then whenever person B replies\n> > it should also be approved -- surely there's no need to approve the\n> > cross-posting (for the known subset of lists) for each reply.\n>\n> If that can be automated it'd surely make things noticeably less painful.\n> As is, once somebody's started a multi-list thread, the only way to get\n> out of trouble is for someone to remember to remove other lists from a\n> reply ... and even then, if anyone replies to an earlier post, it's a mess\n> all over again. But I didn't realize we had the ability to pre-approve\n> whole threads for this filter?\n>\n\nWe don't. And I don't see an obvious way to do it either.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Sep 17, 2020 at 9:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> BTW now that this is working, I think we should discuss that if person A\n> cross-posts, and that post is approved, then whenever person B replies\n> it should also be approved -- surely there's no need to approve the\n> cross-posting (for the known subset of lists) for each reply.\n\nIf that can be automated it'd surely make things noticeably less painful.\nAs is, once somebody's started a multi-list thread, the only way to get\nout of trouble is for someone to remember to remove other lists from a\nreply ... and even then, if anyone replies to an earlier post, it's a mess\nall over again. But I didn't realize we had the ability to pre-approve\nwhole threads for this filter?We don't. And I don't see an obvious way to do it either. 
-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 17 Sep 2020 21:17:07 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Policy on cross-posting to multiple lists"
}
] |
[
{
"msg_contents": "The file header in the advanced tutorial has what seems like incorrect (or at\nleast odd) wording: \"Tutorial on advanced more PostgreSQL features”. Attached\npatch changes to “more advanced” which I think is what was the intention.\n\nI can willingly admit that I had never even noticed the tutorial directory\nuntil I yesterday stumbled across it. The commit introducing the above wording\nis by now old enough to drive.\n\ncheers ./daniel",
"msg_date": "Thu, 10 Jan 2019 13:33:43 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Strange wording in advanced tutorial"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 01:33:43PM +0100, Daniel Gustafsson wrote:\n> The file header in the advanced tutorial has what seems like incorrect (or at\n> least odd) wording: \"Tutorial on advanced more PostgreSQL features”. Attached\n> patch changes to “more advanced” which I think is what was the intention.\n> \n> I can willingly admit that I had never even noticed the tutorial directory\n> until I yesterday stumbled across it. The commit introducing the above wording\n> is by now old enough to drive.\n> \n\nAgreed, thanks more. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 25 Jan 2019 18:57:38 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Strange wording in advanced tutorial"
}
] |
[
{
"msg_contents": "Folks,\n\nWe're 10 days into the Commitfest, the first few having been the new\nyear, with people maybe paying attention to other things.\n\nI'd like to propose extending this CF by some period, maybe as long\nas ten days, so people get all the opportunities they might have had\nif it had started on time.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Thu, 10 Jan 2019 21:26:19 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Commitfest delayed: extend it?"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> We're 10 days into the Commitfest, the first few having been the new\n> year, with people maybe paying attention to other things.\n> I'd like to propose extending this CF by some period, maybe as long\n> as ten days, so people get all the opportunities they might have had\n> if it had started on time.\n\nI think it *did* start on time, at least people have been acting like\nit was on. It just wasn't very official.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 10 Jan 2019 15:28:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest delayed: extend it?"
},
{
"msg_contents": "On 2019-Jan-10, Tom Lane wrote:\n\n> David Fetter <david@fetter.org> writes:\n> > We're 10 days into the Commitfest, the first few having been the new\n> > year, with people maybe paying attention to other things.\n> > I'd like to propose extending this CF by some period, maybe as long\n> > as ten days, so people get all the opportunities they might have had\n> > if it had started on time.\n> \n> I think it *did* start on time, at least people have been acting like\n> it was on. It just wasn't very official.\n\nIt has definitely started, at least for me :-)\n\nWe're going to have a bit of a triage session in the FOSDEM dev's\nmeeting, on Jan 31st, right at the end. I think that will be a good\nopportunity to give some final cleanup, and we should close it then or\nshortly thereafter.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 10 Jan 2019 17:44:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest delayed: extend it?"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 05:44:34PM -0300, Alvaro Herrera wrote:\n> It has definitely started, at least for me :-)\n\nYes, there is no point in extending or delaying it.\n\n> We're going to have a bit of a triage session in the FOSDEM dev's\n> meeting, on Jan 31st, right at the end. I think that will be a good\n> opportunity to give some final cleanup, and we should close it then or\n> shortly thereafter.\n\nA lot of folks are going to be there then (not me)? If it is possible\nto get the CF closed more or less on time using this method that would\nbe nice.\n--\nMichael",
"msg_date": "Fri, 11 Jan 2019 09:53:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest delayed: extend it?"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 09:53:16AM +0900, Michael Paquier wrote:\n> On Thu, Jan 10, 2019 at 05:44:34PM -0300, Alvaro Herrera wrote:\n> > It has definitely started, at least for me :-)\n> \n> Yes, there is no point in extending or delaying it.\n> \n> > We're going to have a bit of a triage session in the FOSDEM dev's\n> > meeting, on Jan 31st, right at the end. I think that will be a\n> > good opportunity to give some final cleanup, and we should close\n> > it then or shortly thereafter.\n> \n> A lot of folks are going to be there then (not me)? If it is\n> possible to get the CF closed more or less on time using this method\n> that would be nice.\n\nConsensus having been reached, I'll aim for 1/31 or 2/1 at the latest.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 11 Jan 2019 03:06:59 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest delayed: extend it?"
}
] |
[
{
"msg_contents": "Hi,\n\nA number of postgres files have sections like heapam's\n\n * INTERFACE ROUTINES\n * relation_open - open any relation by relation OID\n * relation_openrv - open any relation specified by a RangeVar\n * relation_close - close any relation\n * heap_open - open a heap relation by relation OID\n * heap_openrv - open a heap relation specified by a RangeVar\n * heap_close - (now just a macro for relation_close)\n * heap_beginscan - begin relation scan\n * heap_rescan - restart a relation scan\n * heap_endscan - end relation scan\n * heap_getnext - retrieve next tuple in scan\n * heap_fetch - retrieve tuple with given tid\n * heap_insert - insert tuple into a relation\n * heap_multi_insert - insert multiple tuples into a relation\n * heap_delete - delete a tuple from a relation\n * heap_update - replace a tuple in a relation with another tuple\n * heap_sync - sync heap, for when no WAL has been written\n\nThey're often out-of-date, and I personally never found them to be\nuseful. A few people, including yours truly, have been removing a few\nhere and there when overhauling a subsystem to avoid having to update\nand then adjust them.\n\nI think it might be a good idea to just do that for all at once. Having\nto consider separately committing a removal, updating them without\nfixing preexisting issues, or just leaving them outdated on a regular\nbasis imo is a usless distraction.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 10 Jan 2019 15:58:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 12:58 PM Andres Freund <andres@anarazel.de> wrote:\n> A number of postgres files have sections like heapam's\n>\n> * INTERFACE ROUTINES\n> * relation_open - open any relation by relation OID\n> * relation_openrv - open any relation specified by a RangeVar\n> * relation_close - close any relation\n> * heap_open - open a heap relation by relation OID\n> * heap_openrv - open a heap relation specified by a RangeVar\n> * heap_close - (now just a macro for relation_close)\n> * heap_beginscan - begin relation scan\n> * heap_rescan - restart a relation scan\n> * heap_endscan - end relation scan\n> * heap_getnext - retrieve next tuple in scan\n> * heap_fetch - retrieve tuple with given tid\n> * heap_insert - insert tuple into a relation\n> * heap_multi_insert - insert multiple tuples into a relation\n> * heap_delete - delete a tuple from a relation\n> * heap_update - replace a tuple in a relation with another tuple\n> * heap_sync - sync heap, for when no WAL has been written\n>\n> They're often out-of-date, and I personally never found them to be\n> useful. A few people, including yours truly, have been removing a few\n> here and there when overhauling a subsystem to avoid having to update\n> and then adjust them.\n>\n> I think it might be a good idea to just do that for all at once. Having\n> to consider separately committing a removal, updating them without\n> fixing preexisting issues, or just leaving them outdated on a regular\n> basis imo is a usless distraction.\n>\n> Comments?\n\n+1\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Fri, 11 Jan 2019 13:05:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> On Fri, Jan 11, 2019 at 12:58 PM Andres Freund <andres@anarazel.de> wrote:\n>> A number of postgres files have sections like heapam's\n>> * INTERFACE ROUTINES\n>> \n>> They're often out-of-date, and I personally never found them to be\n>> useful. A few people, including yours truly, have been removing a few\n>> here and there when overhauling a subsystem to avoid having to update\n>> and then adjust them.\n>> I think it might be a good idea to just do that for all at once.\n\n> +1\n\nI agree we don't maintain them well, so it'd be better to remove them,\nas long as we make sure any useful info gets transferred someplace else\n(like per-function header comments).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 10 Jan 2019 22:42:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 7:05 PM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> +1\n\n+1\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 12:02:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-10 15:58:41 -0800, Andres Freund wrote:\n> A number of postgres files have sections like heapam's\n> \n> * INTERFACE ROUTINES\n> ...\n> They're often out-of-date, and I personally never found them to be\n> useful. A few people, including yours truly, have been removing a few\n> here and there when overhauling a subsystem to avoid having to update\n> and then adjust them.\n> \n> I think it might be a good idea to just do that for all at once. Having\n> to consider separately committing a removal, updating them without\n> fixing preexisting issues, or just leaving them outdated on a regular\n> basis imo is a usless distraction.\n\nAs the reaction was positive, here's a first draft of a commit removing\nthem. A few comments:\n\n- I left two INTERFACE ROUTINES blocks intact, because they actually add\n somewhat useful information. Namely fd.c's, which actually seems\n useful, and predicate.c's about which I'm less sure.\n- I tried to move all comments about the routines in the INTERFACE\n section to the functions if they didn't have a roughly equivalent\n comment. Even if the comment wasn't that useful. Particularly just\n about all the function comments in executor/node*.c files are useless,\n but I thought that's something best to be cleaned up separately.\n- After removing the INTERFACE ROUTINES blocks a number of executor\n files had a separate comment block with just a NOTES section. I merged\n these with the file header comment blocks, and indented them to\n match. I think this is better, but I'm only like 60% convinced of\n that.\n\nComments? I'll revisit this patch on Monday or so, make another pass\nthrough it, and push it then.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 11 Jan 2019 17:12:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 12:02:22PM -0500, Robert Haas wrote:\n> On Thu, Jan 10, 2019 at 7:05 PM Thomas Munro\n> <thomas.munro@enterprisedb.com> wrote:\n> > +1\n> \n> +1\n\n+1.\n--\nMichael",
"msg_date": "Sat, 12 Jan 2019 10:17:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
},
{
"msg_contents": "On 12/01/2019 03:12, Andres Freund wrote:\n> On 2019-01-10 15:58:41 -0800, Andres Freund wrote:\n>> A number of postgres files have sections like heapam's\n>>\n>> * INTERFACE ROUTINES\n>> ...\n>> They're often out-of-date, and I personally never found them to be\n>> useful. A few people, including yours truly, have been removing a few\n>> here and there when overhauling a subsystem to avoid having to update\n>> and then adjust them.\n>>\n>> I think it might be a good idea to just do that for all at once. Having\n>> to consider separately committing a removal, updating them without\n>> fixing preexisting issues, or just leaving them outdated on a regular\n>> basis imo is a usless distraction.\n> \n> As the reaction was positive, here's a first draft of a commit removing\n> them. A few comments:\n> \n> - I left two INTERFACE ROUTINES blocks intact, because they actually add\n> somewhat useful information. Namely fd.c's, which actually seems\n> useful, and predicate.c's about which I'm less sure.\n> - I tried to move all comments about the routines in the INTERFACE\n> section to the functions if they didn't have a roughly equivalent\n> comment. Even if the comment wasn't that useful. Particularly just\n> about all the function comments in executor/node*.c files are useless,\n> but I thought that's something best to be cleaned up separately.\n> - After removing the INTERFACE ROUTINES blocks a number of executor\n> files had a separate comment block with just a NOTES section. I merged\n> these with the file header comment blocks, and indented them to\n> match. I think this is better, but I'm only like 60% convinced of\n> that.\n> \n> Comments? I'll revisit this patch on Monday or so, make another pass\n> through it, and push it then.\n\nI agree that just listing all the public functions in a source file is \nnot useful. 
But listing the most important ones, perhaps with examples \non how to use them together, or grouping functions when there are a lot \nof them, is useful. A high-level view of the public interface is \nespecially useful for people who are browsing the code for the first time.\n\nThe comments in execMain.c seemed like a nice overview of the interface. \nI'm not sure the comments on each function do quite the same thing. The \ngrouping of the functions in pqcomm.c's seemed useful. Especially when \nsome of the routines listed there are actually macros defined in \nlibpq.h, so if someone just looks at the contents of pqcomm.c, he might \nnot realize that there's more in libpq.h. The grouping in pqformat.c \nalso seemed useful.\n\nIn that vein, the comments in heapam.c could be re-structured, something \nlike this:\n\n * Opening/closing relations\n * -------------------------\n *\n * The relation_* functions work on any relation, not only heap\n * relations:\n *\n * relation_open - open any relation by relation OID\n * relation_openrv - open any relation specified by a RangeVar\n * relation_close - close any relation\n *\n * These are similar, but check that the relation is a Heap\n * relation:\n *\n * heap_open - open a heap relation by relation OID\n * heap_openrv - open a heap relation specified by a RangeVar\n * heap_close - (now just a macro for relation_close)\n *\n * Heap scans\n * ----------\n *\n * Functions for doing a Sequential Scan on a heap table:\n *\n * heap_beginscan - begin relation scan\n * heap_rescan\t - restart a relation scan\n * heap_endscan - end relation scan\n * heap_getnext - retrieve next tuple in scan\n *\n * To retrieve a single heap tuple, by tid:\n * heap_fetch - retrieve tuple with given tid\n *\n * Updating a heap\n * ---------------\n *\n * heap_insert - insert tuple into a relation\n * heap_multi_insert - insert multiple tuples into a relation\n * heap_delete - delete a tuple from a relation\n * heap_update - replace a tuple in a 
relation with another tuple\n * heap_sync - sync heap, for when no WAL has been written\n\n- Heikki\n\n",
"msg_date": "Mon, 14 Jan 2019 13:46:58 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove all \"INTERFACE ROUTINES\" style comments"
}
] |
[
{
"msg_contents": "Hi,\n\nI created some empty tables and run ` EXPLAIN ANALYZE` on `SELECT * `. I found\nthe results have different row numbers, but the tables are all empty.\n\n=# CREATE TABLE t1(id INT, data INT);\n=# EXPLAIN ANALYZE SELECT * FROM t1;\n Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8) (actual\n time=0.003..0.003 rows=0 loops=1)\n\n=# CREATE TABLE t2(data VARCHAR);\n=# EXPLAIN ANALYZE SELECT * FROM t2;\n Seq Scan on t2 (cost=0.00..23.60 rows=1360 width=32) (actual\n time=0.002..0.002 rows=0 loops=1)\n\n=# CREATE TABLE t3(id INT, data VARCHAR);\n=# EXPLAIN ANALYZE SELECT * FROM t3;\n Seq Scan on t3 (cost=0.00..22.70 rows=1270 width=36) (actual\n time=0.001..0.001 rows=0 loops=1)\n\nI found this behavior unexpected. I'm still trying to find out how/where the planner\ndetermines the plan_rows. Any help will be appreciated!\n\nThank you,\nDonald Dong\n",
"msg_date": "Thu, 10 Jan 2019 18:41:46 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "How does the planner determine plan_rows ?"
},
{
"msg_contents": ">>>>> \"Donald\" == Donald Dong <xdong@csumb.edu> writes:\n\n Donald> Hi,\n Donald> I created some empty tables and run ` EXPLAIN ANALYZE` on\n Donald> `SELECT * `. I found the results have different row numbers,\n Donald> but the tables are all empty.\n\nEmpty tables are something of a special case, because the planner\ndoesn't assume that they will _stay_ empty, and using an estimate of 0\nor 1 rows would tend to create a distorted plan that would likely blow\nup in runtime as soon as you insert a second row.\n\nThe place to look for info would be estimate_rel_size in\noptimizer/util/plancat.c, from which you can see that empty tables get\na default size estimate of 10 pages. Thus:\n\n Donald> =# CREATE TABLE t1(id INT, data INT);\n Donald> =# EXPLAIN ANALYZE SELECT * FROM t1;\n Donald> Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8) (actual\n Donald> time=0.003..0.003 rows=0 loops=1)\n\nAn (int,int) tuple takes about 36 bytes, so you can get about 226 of\nthem on a page, so 10 pages is 2260 rows.\n\n Donald> =# CREATE TABLE t2(data VARCHAR);\n Donald> =# EXPLAIN ANALYZE SELECT * FROM t2;\n Donald> Seq Scan on t2 (cost=0.00..23.60 rows=1360 width=32) (actual\n Donald> time=0.002..0.002 rows=0 loops=1)\n\nSize of a varchar with no specified length isn't known, so the planner\ndetermines an average length of 32 by the time-honoured method of rectal\nextraction (see get_typavgwidth in lsyscache.c), making 136 rows per\npage.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Fri, 11 Jan 2019 03:48:32 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: How does the planner determine plan_rows ?"
},
{
"msg_contents": "Thank you for the great explanation!\n\n> On Jan 10, 2019, at 7:48 PM, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n> \n>>>>>> \"Donald\" == Donald Dong <xdong@csumb.edu> writes:\n> \n> Donald> Hi,\n> Donald> I created some empty tables and run ` EXPLAIN ANALYZE` on\n> Donald> `SELECT * `. I found the results have different row numbers,\n> Donald> but the tables are all empty.\n> \n> Empty tables are something of a special case, because the planner\n> doesn't assume that they will _stay_ empty, and using an estimate of 0\n> or 1 rows would tend to create a distorted plan that would likely blow\n> up in runtime as soon as you insert a second row.\n> \n> The place to look for info would be estimate_rel_size in\n> optimizer/util/plancat.c, from which you can see that empty tables get\n> a default size estimate of 10 pages. Thus:\n> \n> Donald> =# CREATE TABLE t1(id INT, data INT);\n> Donald> =# EXPLAIN ANALYZE SELECT * FROM t1;\n> Donald> Seq Scan on t1 (cost=0.00..32.60 rows=2260 width=8) (actual\n> Donald> time=0.003..0.003 rows=0 loops=1)\n> \n> An (int,int) tuple takes about 36 bytes, so you can get about 226 of\n> them on a page, so 10 pages is 2260 rows.\n> \n> Donald> =# CREATE TABLE t2(data VARCHAR);\n> Donald> =# EXPLAIN ANALYZE SELECT * FROM t2;\n> Donald> Seq Scan on t2 (cost=0.00..23.60 rows=1360 width=32) (actual\n> Donald> time=0.002..0.002 rows=0 loops=1)\n> \n> Size of a varchar with no specified length isn't known, so the planner\n> determines an average length of 32 by the time-honoured method of rectal\n> extraction (see get_typavgwidth in lsyscache.c), making 136 rows per\n> page.\n> \n> -- \n> Andrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 10 Jan 2019 19:56:15 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: How does the planner determine plan_rows ?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> I created some empty tables and run ` EXPLAIN ANALYZE` on `SELECT * `. I found\n> the results have different row numbers, but the tables are all empty.\n\nThis isn't a terribly interesting case, since you've neither loaded\nany data nor vacuumed/analyzed the table, but ...\n\n> I found this behavior unexpected. I'm still trying to find out how/where the planner\n> determines the plan_rows.\n\n... estimate_rel_size() in plancat.c is where to look to find out\nabout the planner's default estimates when it's lacking hard data.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 10 Jan 2019 23:01:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How does the planner determine plan_rows ?"
},
{
"msg_contents": "\n> On Jan 10, 2019, at 8:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> ... estimate_rel_size() in plancat.c is where to look to find out\n> about the planner's default estimates when it's lacking hard data.\n\nThank you! Now I see how the planner uses the rows to estimate the cost and\ngenerates the best_plan.\n\nTo me, tracing the function calls is not a simple task. I'm using cscope, and I\nuse printf when I'm not entirely sure. I was considering to use gbd, but I'm\nhaving issues referencing the source code in gdb.\n\nI'm very interested to learn how the professionals explore the codebase!\n",
"msg_date": "Thu, 10 Jan 2019 23:41:51 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: How does the planner determine plan_rows ?"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 11:41:51PM -0800, Donald Dong wrote:\n> \n> > On Jan 10, 2019, at 8:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > ... estimate_rel_size() in plancat.c is where to look to find out\n> > about the planner's default estimates when it's lacking hard data.\n> \n> Thank you! Now I see how the planner uses the rows to estimate the cost and\n> generates the best_plan.\n> \n> To me, tracing the function calls is not a simple task. I'm using cscope, and I\n> use printf when I'm not entirely sure. I was considering to use gbd, but I'm\n> having issues referencing the source code in gdb.\n> \n> I'm very interested to learn how the professionals explore the codebase!\n\nUh, the developer FAQ has some info on this:\n\n\thttps://wiki.postgresql.org/wiki/Developer_FAQ\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 25 Jan 2019 19:01:53 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: How does the planner determine plan_rows ?"
}
] |
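An aside on the arithmetic Andrew Gierth walks through in the thread above: for a never-vacuumed empty table, the planner assumes 10 pages and divides the usable page space by an estimated tuple size. The sketch below reproduces the three row counts from the thread using hand-rolled constants (8 kB block, 24-byte page header, 24-byte heap tuple header, 4-byte line pointer); it is a back-of-the-envelope illustration of the numbers, not the actual code path in `estimate_rel_size()` in optimizer/util/plancat.c.

```python
# Rough model of the planner's default row estimate for an empty table.
# All constants are approximations of PostgreSQL's on-page layout, assumed
# for illustration; the real logic lives in optimizer/util/plancat.c.

BLCKSZ = 8192           # default block size
PAGE_HEADER = 24        # page header, maxaligned
TUPLE_HEADER = 24       # MAXALIGN'd heap tuple header
LINE_POINTER = 4        # per-tuple item pointer in the page

def default_row_estimate(data_width, pages=10):
    """Rows the planner guesses for a never-analyzed empty table of
    the given average data width, assuming a 10-page default size."""
    tuple_size = TUPLE_HEADER + data_width + LINE_POINTER
    tuples_per_page = (BLCKSZ - PAGE_HEADER) // tuple_size
    return pages * tuples_per_page

print(default_row_estimate(8))    # t1: (int, int), width 8   -> 2260
print(default_row_estimate(32))   # t2: (varchar), width 32   -> 1360
print(default_row_estimate(36))   # t3: (int, varchar), 36    -> 1270
```

The three results match the `rows=2260`, `rows=1360`, and `rows=1270` estimates in the EXPLAIN output quoted above, including the 32-byte default width guessed for an unconstrained varchar.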
[
{
"msg_contents": "Hi,\n\nThe pluggable storage patchset has a large struct full of callbacks, and\na bunch of wrapper functions for calling those callbacks. While\nstarting to polish the patchset, I tried to make the formatting nice.\n\nBy default pgindent yields formatting like:\n\n/*\n * API struct for a table AM. Note this must be allocated in a\n * server-lifetime manner, typically as a static const struct, which then gets\n * returned by FormData_pg_am.amhandler.\n */\ntypedef struct TableAmRoutine\n{\n NodeTag type;\n\n...\n void (*relation_set_new_filenode) (Relation relation,\n char persistence,\n TransactionId *freezeXid,\n MultiXactId *minmulti);\n\n...\n\n\nstatic inline void\ntable_set_new_filenode(Relation rel, char persistence,\n TransactionId *freezeXid, MultiXactId *minmulti)\n{\n rel->rd_tableam->relation_set_new_filenode(rel, persistence,\n freezeXid, minmulti);\n}\n\nwhich isn't particularly pretty, especially because there's callbacks\nwith longer names than the example above.\n\n\nUnfortunately pgindent prevents formatting the callbacks like:\n void (*relation_set_new_filenode) (\n Relation relation,\n char persistence,\n TransactionId *freezeXid,\n MultiXactId *minmulti);\n\nor something in that vein. What however does work, is:\n\n void (*relation_set_new_filenode)\n (Relation relation,\n char persistence,\n TransactionId *freezeXid,\n MultiXactId *minmulti);\n\nI.e. putting the opening ( of the parameter list into a separate line\nyields somewhat usable formatting. 
This also has the advantage that the\narguments of all callbacks line up, making it a bit easier to scan.\n\nSimilarly, to reduce the indentation, especially for callbacks with long\nnames and/or with longer parameter names, we can do:\n\nstatic inline void\ntable_set_new_filenode(Relation rel, char persistence,\n TransactionId *freezeXid, MultiXactId *minmulti)\n{\n rel->rd_tableam->relation_set_new_filenode\n (rel, persistence, freezeXid, minmulti);\n}\n\n\nSo, putting the parameter list, both in use and declaration, entirely\ninto a new line yields decent formatting with pgindent. But it's kinda\nweird. I can't really come up with a better alternative, and after a\nfew minutes it looks pretty reasonable.\n\nComments? Better alternatives?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 10 Jan 2019 20:45:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The pluggable storage patchset has a large struct full of callbacks, and\n> a bunch of wrapper functions for calling those callbacks. While\n> starting to polish the patchset, I tried to make the formatting nice.\n> ...\n> So, putting the parameter list, both in use and declaration, entirely\n> into a new line yields decent formatting with pgindent. But it's kinda\n> weird. I can't really come up with a better alternative, and after a\n> few minutes it looks pretty reasonable.\n\n> Comments? Better alternatives?\n\nUse shorter method names? This sounds like an ugly workaround for\na carpal-tunnel-syndrome-inducing design.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 09:42:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "On Thu, Jan 10, 2019 at 11:45 PM Andres Freund <andres@anarazel.de> wrote:\n> void (*relation_set_new_filenode) (Relation relation,\n> char persistence,\n> TransactionId *freezeXid,\n> MultiXactId *minmulti);\n\nHonestly, I don't see the problem with that, really. But you could\nalso use the style that is used in fdwapi.h, where we have a typedef\nfor each callback first, and then the actual structure just declares a\nfunction pointer of each time. That saves a bit of horizontal space\nand might look a little nicer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 10:32:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 9:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Use shorter method names? This sounds like an ugly workaround for\n> a carpal-tunnel-syndrome-inducing design.\n\nWhat would you suggest instead of something like\n\"relation_set_new_filenode\"? I agree that shorter names have some\nmerit, but it's not always easy to figure out how to shorten them\nwithout making the result unclear.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 10:33:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-11 10:32:03 -0500, Robert Haas wrote:\n> On Thu, Jan 10, 2019 at 11:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > void (*relation_set_new_filenode) (Relation relation,\n> > char persistence,\n> > TransactionId *freezeXid,\n> > MultiXactId *minmulti);\n> \n> Honestly, I don't see the problem with that, really.\n\nIt's just hard to read if there's a lot of callbacks defined, the more\naccurate the name, the more deeply indented. Obviously that's always a\nconcern and thing to balance, but the added indentation due to the\nwhitespace, and the parens, * and whitespace between ) ( make it worse.\n\n\n> But you could\n> also use the style that is used in fdwapi.h, where we have a typedef\n> for each callback first, and then the actual structure just declares a\n> function pointer of each time. That saves a bit of horizontal space\n> and might look a little nicer.\n\nIt's what the patch did at first. It doesn't save much space, because\nthe indentation due to the typedef at the start of the line is about as\nmuch as defining in the struct adds, and we often add a _function\nsuffix. It additionally adds a fair bit of mental overhead - there's\nanother set of names that one needs to keep track of, figuring out what\na callback means requires looking in an additional place. 
I found that\nremoving that indirection made for a significantly more pleasant\nexperience working on the patchset.\n\n\nJust as an example of why I think this isn't great:\n\ntypedef Size (*EstimateDSMForeignScan_function) (ForeignScanState *node,\n ParallelContext *pcxt);\ntypedef void (*InitializeDSMForeignScan_function) (ForeignScanState *node,\n ParallelContext *pcxt,\n void *coordinate);\ntypedef void (*ReInitializeDSMForeignScan_function) (ForeignScanState *node,\n ParallelContext *pcxt,\n void *coordinate);\ntypedef void (*InitializeWorkerForeignScan_function) (ForeignScanState *node,\n shm_toc *toc,\n void *coordinate);\ntypedef void (*ShutdownForeignScan_function) (ForeignScanState *node);\ntypedef bool (*IsForeignScanParallelSafe_function) (PlannerInfo *root,\n RelOptInfo *rel,\n RangeTblEntry *rte);\ntypedef List *(*ReparameterizeForeignPathByChild_function) (PlannerInfo *root,\n List *fdw_private,\n RelOptInfo *child_rel);\n\nthat's a lot of indentation variability in a small amount of space - I\nfind it noticeably slower to mentally parse due to that. 
Compare that\nwith:\n\ntypedef Size (*EstimateDSMForeignScan_function)\n (ForeignScanState *node,\n ParallelContext *pcxt);\n\ntypedef void (*InitializeDSMForeignScan_function)\n (ParallelContext *pcxt,\n void *coordinate);\n\ntypedef void (*ReInitializeDSMForeignScan_function)\n (ForeignScanState *node,\n ParallelContext *pcxt,\n void *coordinate);\n\ntypedef void (*InitializeWorkerForeignScan_function)\n (ForeignScanState *node,\n shm_toc *toc,\n void *coordinate);\n\ntypedef void (*ShutdownForeignScan_function)\n (ForeignScanState *node);\n\ntypedef bool (*IsForeignScanParallelSafe_function)\n (PlannerInfo *root,\n RelOptInfo *rel,\n RangeTblEntry *rte);\n\ntypedef List *(*ReparameterizeForeignPathByChild_function)\n (PlannerInfo *root,\n List *fdw_private,\n RelOptInfo *child_rel);\n\nI find the second formatting considerably easier to read, albeit not for\nthe first few seconds.\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 11 Jan 2019 09:56:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-11 09:42:19 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The pluggable storage patchset has a large struct full of callbacks, and\n> > a bunch of wrapper functions for calling those callbacks. While\n> > starting to polish the patchset, I tried to make the formatting nice.\n> > ...\n> > So, putting the parameter list, both in use and declaration, entirely\n> > into a new line yields decent formatting with pgindent. But it's kinda\n> > weird. I can't really come up with a better alternative, and after a\n> > few minutes it looks pretty reasonable.\n> \n> > Comments? Better alternatives?\n> \n> Use shorter method names? This sounds like an ugly workaround for\n> a carpal-tunnel-syndrome-inducing design.\n\nI'm confused. What did I write about that has unreasonably long names?\nAnd if you're referring to the wider design, that all seems fairly\nfundamental to something needing callbacks - not exactly a first in\npostgres.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 11 Jan 2019 09:58:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
},
{
"msg_contents": "On 2019-Jan-11, Andres Freund wrote:\n\n> Just as an example of why I think this isn't great:\n\nHmm, to me, the first example is much better because of *vertical* space\n-- I can have the whole API in one screenful. In the other example, the\nsame number of functions take many more lines. The fact that the\narguments are indented differently doesn't bother me.\n\n\n> typedef Size (*EstimateDSMForeignScan_function) (ForeignScanState *node,\n> ParallelContext *pcxt);\n> typedef void (*InitializeDSMForeignScan_function) (ForeignScanState *node,\n> ParallelContext *pcxt,\n> void *coordinate);\n> typedef void (*ReInitializeDSMForeignScan_function) (ForeignScanState *node,\n> ParallelContext *pcxt,\n> void *coordinate);\n> typedef void (*InitializeWorkerForeignScan_function) (ForeignScanState *node,\n> shm_toc *toc,\n> void *coordinate);\n> typedef void (*ShutdownForeignScan_function) (ForeignScanState *node);\n> typedef bool (*IsForeignScanParallelSafe_function) (PlannerInfo *root,\n> RelOptInfo *rel,\n> RangeTblEntry *rte);\n> typedef List *(*ReparameterizeForeignPathByChild_function) (PlannerInfo *root,\n> List *fdw_private,\n> RelOptInfo *child_rel);\n> \n> that's a lot of indentation variability in a small amount of space - I\n> find it noticably slower to mentally parse due to that. 
Compare that\n> with:\n> \n> typedef Size (*EstimateDSMForeignScan_function)\n> (ForeignScanState *node,\n> ParallelContext *pcxt);\n> \n> typedef void (*InitializeDSMForeignScan_function)\n> (ParallelContext *pcxt,\n> void *coordinate);\n> \n> typedef void (*ReInitializeDSMForeignScan_function)\n> (ForeignScanState *node,\n> ParallelContext *pcxt,\n> void *coordinate);\n> \n> typedef void (*InitializeWorkerForeignScan_function)\n> (ForeignScanState *node,\n> shm_toc *toc,\n> void *coordinate);\n> \n> typedef void (*ShutdownForeignScan_function)\n> (ForeignScanState *node);\n> \n> typedef bool (*IsForeignScanParallelSafe_function)\n> (PlannerInfo *root,\n> RelOptInfo *rel,\n> RangeTblEntry *rte);\n> \n> typedef List *(*ReparameterizeForeignPathByChild_function)\n> (PlannerInfo *root,\n> List *fdw_private,\n> RelOptInfo *child_rel);\n> \n> I find the second formatting considerably easier to read, albeit not for\n> the first few seconds.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 12:05:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Acceptable/Best formatting of callbacks (for pluggable storage)"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15589\nLogged by: Leif Gunnar Erlandsen\nEmail address: leif@lako.no\nPostgreSQL version: 11.1\nOperating system: Red Hat Enterprise Linux Server release 7.6\nDescription: \n\nrecovery.conf was specified as\r\nrestore_command = 'cp /db/disk1/restore/archivelogs/archivelogs/%f %p'\r\nrecovery_target_time = '2019-01-03 13:00:00'\r\n\r\nDue to some missing wal-files restore ended.\r\n\r\n2019-01-10 12:05:50 CET [68417]: [67-1] user=,db=,app=,client= LOG: \nrestored log file \"0000000500000158000000FF\" from archive\r\ncp: cannot stat\n'/db/disk1/restore/archivelogs/archivelogs/000000050000015900000000': No\nsuch file or directory\r\n2019-01-10 12:05:50 CET [68417]: [68-1] user=,db=,app=,client= LOG: redo\ndone at 158/FFFFFFB8\r\n2019-01-10 12:05:50 CET [68417]: [69-1] user=,db=,app=,client= LOG: last\ncompleted transaction was at log time 2019-01-03 06:34:45.935752+01\r\n2019-01-10 12:05:50 CET [68417]: [70-1] user=,db=,app=,client= LOG: \nrestored log file \"0000000500000158000000FF\" from archive\r\ncp: cannot stat\n'/db/disk1/restore/archivelogs/archivelogs/00000006.history': No such file\nor directory\r\n2019-01-10 12:05:50 CET [68417]: [71-1] user=,db=,app=,client= LOG: \nselected new timeline ID: 6\r\n2019-01-10 12:05:50 CET [68417]: [72-1] user=,db=,app=,client= LOG: archive\nrecovery complete\r\ncp: cannot stat\n'/db/disk1/restore/archivelogs/archivelogs/00000005.history': No such file\nor directory\r\n2019-01-10 12:05:51 CET [68420]: [2-1] user=,db=,app=,client= LOG: \nrestartpoint complete: wrote 44395 buffers (5.4%); 1 WAL file(s) added, 0\nremoved, 0 recycled; write=6.310 s, sync=0.268 s, total=6.631 s; sync\nfiles=178, longest=0.072 s, average=0.001 s; distance=64019 kB,\nestimate=64019 kB\r\n2019-01-10 12:05:51 CET [68420]: [3-1] user=,db=,app=,client= LOG: recovery\nrestart point at 158/C4E84F98\r\n2019-01-10 12:05:51 CET [68420]: [4-1] user=,db=,app=,client= DETAIL: 
Last\ncompleted transaction was at log time 2019-01-03 06:34:45.935752+01.\r\n2019-01-10 12:05:51 CET [68420]: [5-1] user=,db=,app=,client= LOG: \ncheckpoints are occurring too frequently (7 seconds apart)\r\n2019-01-10 12:05:51 CET [68420]: [6-1] user=,db=,app=,client= HINT: \nConsider increasing the configuration parameter \"max_wal_size\".\r\n2019-01-10 12:05:51 CET [68420]: [7-1] user=,db=,app=,client= LOG: \ncheckpoint starting: end-of-recovery immediate wait xlog\r\n2019-01-10 12:05:51 CET [68420]: [8-1] user=,db=,app=,client= LOG: \ncheckpoint complete: wrote 18678 buffers (2.3%); 0 WAL file(s) added, 0\nremoved, 0 recycled; write=0.251 s, sync=0.006 s, total=0.312 s; sync\nfiles=149, longest=0.006 s, average=0.000 s; distance=968172 kB,\nestimate=968172 kB\r\n2019-01-10 12:05:51 CET [68415]: [8-1] user=,db=,app=,client= LOG: database\nsystem is ready to accept connections\r\n\r\n\r\nI found the missing wal-files and performed restore again from the start.\r\nThen recovery paused when it was at correct time.\r\n\r\n2019-01-10 13:46:28 CET [77004]: [3334-1] user=,db=,app=,client= LOG: \nrestored log file \"0000000500000165000000C2\" from archive\r\n2019-01-10 13:46:28 CET [77007]: [318-1] user=,db=,app=,client= LOG: \nrestartpoint complete: wrote 87591 buffers (10.7%); 0 WAL file(s) added, 22\nremoved, 20 recycled; write=3.049 s, sync=0.001 s, total=3.192 s; sync\nfiles=143, longest=0.001 s, average=0.000 s; distance=688531 kB,\nestimate=689818 kB\r\n2019-01-10 13:46:28 CET [77007]: [319-1] user=,db=,app=,client= LOG: \nrecovery restart point at 165/9706C358\r\n2019-01-10 13:46:28 CET [77007]: [320-1] user=,db=,app=,client= DETAIL: \nLast completed transaction was at log time 2019-01-03 12:13:22.014815+01.\r\n2019-01-10 13:46:28 CET [77007]: [321-1] user=,db=,app=,client= LOG: \nrestartpoint starting: xlog\r\n2019-01-10 13:46:29 CET [77004]: [3335-1] user=,db=,app=,client= LOG: \nrestored log file \"0000000500000165000000C3\" from archive\r\n2019-01-10 
13:46:29 CET [77004]: [3336-1] user=,db=,app=,client= LOG: \nrecovery stopping before commit of transaction 3316604, time 2019-01-03\n13:00:01.563953+01\r\n2019-01-10 13:46:29 CET [77004]: [3337-1] user=,db=,app=,client= LOG: \nrecovery has paused\r\n2019-01-10 13:46:29 CET [77004]: [3338-1] user=,db=,app=,client= HINT: \nExecute pg_wal_replay_resume() to continue.\r\n\r\n\r\nPostgreSQL should have paused recovery also on the first scenario. Then I\ncould have added missing wal and continued with restore.",
"msg_date": "Fri, 11 Jan 2019 08:36:35 +0000",
"msg_from": "=?utf-8?q?PG_Bug_reporting_form?= <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15589: Due to missing wal,\n restore ends prematurely and opens database for read/write"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 4:08 AM PG Bug reporting form <\nnoreply@postgresql.org> wrote:\n\n>\n> PostgreSQL should have paused recovery also on the first scenario. Then I\n> could have added missing wal and continued with restore.\n>\n\nI agree with you that something here is not very user friendly. But the\ncounter argument is that you should not be hiding WAL files from the system\nin the first place. Once it is told that the file doesn't exist, it\ndoesn't make sense to pause because non-existence is usually permanent, so\nthere is nothing to be done after a pause. Or in other words, the pause\nfeature is to let you change your mind about what time point you want to\nrecover to (moving it only forward), not to let you change your mind about\nwhat WAL files exist in the first place. So I don't think this is a bug,\nbut perhaps there is room for improvement.\n\nAt the least, I think we should log an explicit WARNING if the WAL stream\nends before the specified PIT is reached.\n\nCheers,\n\nJeff\n\nOn Fri, Jan 11, 2019 at 4:08 AM PG Bug reporting form <noreply@postgresql.org> wrote:\nPostgreSQL should have paused recovery also on the first scenario. Then I\ncould have added missing wal and continued with restore.I agree with you that something here is not very user friendly. But the counter argument is that you should not be hiding WAL files from the system in the first place. Once it is told that the file doesn't exist, it doesn't make sense to pause because non-existence is usually permanent, so there is nothing to be done after a pause. Or in other words, the pause feature is to let you change your mind about what time point you want to recover to (moving it only forward), not to let you change your mind about what WAL files exist in the first place. 
So I don't think this is a bug, but perhaps there is room for improvement.At the least, I think we should log an explicit WARNING if the WAL stream ends before the specified PIT is reached.Cheers,Jeff",
"msg_date": "Fri, 11 Jan 2019 09:03:04 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "\"Jeff Janes\" <jeff.janes@gmail.com (mailto:jeff.janes@gmail.com?to=%22Jeff%20Janes%22%20<jeff.janes@gmail.com>)> wrote 11 January 2019 at 15:03:\n On Fri, Jan 11, 2019 at 4:08 AM PG Bug reporting form <noreply@postgresql.org (mailto:noreply@postgresql.org)> wrote: \nPostgreSQL should have paused recovery also on the first scenario. Then I\ncould have added missing wal and continued with restore. \nI agree with you that something here is not very user friendly. But the counter argument is that you should not be hiding WAL files from the system in the first place. Once it is told that the file doesn't exist, it doesn't make sense to pause because non-existence is usually permanent, so there is nothing to be done after a pause. Or in other words, the pause feature is to let you change your mind about what time point you want to recover to (moving it only forward), not to let you change your mind about what WAL files exist in the first place. So I don't think this is a bug, but perhaps there is room for improvement. \nAt the least, I think we should log an explicit WARNING if the WAL stream ends before the specified PIT is reached. \nNo one hides WAL files from the system deliberately. A few days of WAL files could take up a lot of space. And they could come from different backup sets.\nIf you have a gap in your restored WAL-files and have specified a date and time further ahead, a warning and a pause should be issued.\nWithout the pause in recovery the warning is of little use as the database is already opened for read and write.\n\n--\nLeif",
"msg_date": "Sat, 12 Jan 2019 05:59:54 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "\"Jeff Janes\" <jeff.janes@gmail.com> wrote on 11 January 2019 at 15:03:\n On Fri, Jan 11, 2019 at 4:08 AM PG Bug reporting form <noreply@postgresql.org> wrote: \nPostgreSQL should have paused recovery also on the first scenario. Then I\ncould have added missing wal and continued with restore. \nAt the least, I think we should log an explicit WARNING if the WAL stream ends before the specified PIT is reached. \nThe documentation for recovery.conf states that with recovery_target_time set, recovery_target_action defaults to pause.\nEven if the stop point is not reached, pause should be activated.\nAfter checking the source, this might be solved with a small addition to StartupXLOG() in xlog.c.\nSomeone with more experience with the source code should check this out.\n\n    if (reachedStopPoint)\n    {\n        if (!reachedConsistency)\n            ereport(FATAL,\n                    (errmsg(\"requested recovery stop point is before consistent recovery point\")));\n\n        /*\n         * This is the last point where we can restart recovery with a\n         * new recovery target, if we shutdown and begin again. After\n         * this, Resource Managers may choose to do permanent\n         * corrective actions at end of recovery.\n         */\n        switch (recoveryTargetAction)\n        {\n            case RECOVERY_TARGET_ACTION_SHUTDOWN:\n                /*\n                 * exit with special return code to request shutdown\n                 * of postmaster. Log messages issued from\n                 * postmaster.\n                 */\n                proc_exit(3);\n\n            case RECOVERY_TARGET_ACTION_PAUSE:\n                SetRecoveryPause(true);\n                recoveryPausesHere();\n                /* drop into promote */\n\n            case RECOVERY_TARGET_ACTION_PROMOTE:\n                break;\n        }\n    }\n    /* Add these lines .... */\n    else if (recoveryTarget == RECOVERY_TARGET_TIME)\n    {\n        /*\n         * Stop point not reached but next WAL could not be read.\n         * Some explanation and warning should be logged.\n         */\n        switch (recoveryTargetAction)\n        {\n            case RECOVERY_TARGET_ACTION_PAUSE:\n                SetRecoveryPause(true);\n                recoveryPausesHere();\n                break;\n        }\n    }\n    /* .... until here */\n\n--\nLeif",
"msg_date": "Sat, 12 Jan 2019 07:40:07 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "Hi\nI have reported a bug via the PostgreSQL bug report form, but haven't got any response so far.\nThis might not be a bug, but a feature not implemented yet.\nI have created a suggestion to make a small addition to StartupXLOG() in xlog.c to solve the issue.\n\nAny suggestions?\n\n--\nLeif Gunnar Erlandsen\n\n-------- Forwarded message -------\nFrom: leif@lako.no\nTo: \"Jeff Janes\" <jeff.janes@gmail.com>, pgsql-bugs@lists.postgresql.org\nSent: 12 January 2019 at 08:40\nSubject: Re: BUG #15589: Due to missing wal, restore ends prematurely and opens database for read/write\n\"Jeff Janes\" <jeff.janes@gmail.com> wrote on 11 January 2019 at 15:03:\n On Fri, Jan 11, 2019 at 4:08 AM PG Bug reporting form <noreply@postgresql.org> wrote: \nPostgreSQL should have paused recovery also on the first scenario. Then I\ncould have added missing wal and continued with restore. \nAt the least, I think we should log an explicit WARNING if the WAL stream ends before the specified PIT is reached. \nThe documentation for recovery.conf states that with recovery_target_time set, recovery_target_action defaults to pause.\nEven if the stop point is not reached, pause should be activated.\nAfter checking the source, this might be solved with a small addition to StartupXLOG() in xlog.c.\nSomeone with more experience with the source code should check this out.\n\n    if (reachedStopPoint)\n    {\n        if (!reachedConsistency)\n            ereport(FATAL,\n                    (errmsg(\"requested recovery stop point is before consistent recovery point\")));\n\n        /*\n         * This is the last point where we can restart recovery with a\n         * new recovery target, if we shutdown and begin again. After\n         * this, Resource Managers may choose to do permanent\n         * corrective actions at end of recovery.\n         */\n        switch (recoveryTargetAction)\n        {\n            case RECOVERY_TARGET_ACTION_SHUTDOWN:\n                /*\n                 * exit with special return code to request shutdown\n                 * of postmaster. Log messages issued from\n                 * postmaster.\n                 */\n                proc_exit(3);\n\n            case RECOVERY_TARGET_ACTION_PAUSE:\n                SetRecoveryPause(true);\n                recoveryPausesHere();\n                /* drop into promote */\n\n            case RECOVERY_TARGET_ACTION_PROMOTE:\n                break;\n        }\n    }\n    /* Add these lines .... */\n    else if (recoveryTarget == RECOVERY_TARGET_TIME)\n    {\n        /*\n         * Stop point not reached but next WAL could not be read.\n         * Some explanation and warning should be logged.\n         */\n        switch (recoveryTargetAction)\n        {\n            case RECOVERY_TARGET_ACTION_PAUSE:\n                SetRecoveryPause(true);\n                recoveryPausesHere();\n                break;\n        }\n    }\n    /* .... until here */\n\n--\nLeif",
"msg_date": "Wed, 23 Jan 2019 06:57:27 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Fwd: Re: BUG #15589: Due to missing wal, restore ends prematurely\n and opens database for read/write"
},
{
"msg_contents": "Hi\nI have reported a bug via the PostgreSQL bug report form, but haven't got any response so far.\nThis might not be a bug, but a feature not implemented yet.\nI suggest making a small addition to StartupXLOG to solve the issue.\n\n\n\ngit diff src/backend/access/transam/xlog.c\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 2ab7d804f0..d0e5bb3f84 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -7277,6 +7277,19 @@ StartupXLOG(void)\n \n case RECOVERY_TARGET_ACTION_PROMOTE:\n break;\n+ } \n+ } else if (recoveryTarget == RECOVERY_TARGET_TIME)\n+ {\n+ /*\n+ * Stop point not reached but next WAL could not be read\n+ * Some explanation and warning should be logged\n+ */\n+ switch (recoveryTargetAction)\n+ {\n+ case RECOVERY_TARGET_ACTION_PAUSE:\n+ SetRecoveryPause(true);\n+ recoveryPausesHere();\n+ break;\n }\n }\n\n\n\n\n\nThe scenario I want to solve is:\nNeed to restore a backup to another server.\n Restores pgbasebackup files\n Restores some WAL files\n Extracts pgbasebackup files\n Creates recovery.conf with a PIT\n Starts PostgreSQL\n Recovery ends before the PIT due to missing WAL files\n Database opens read/write\n\nI think the database should have paused recovery; then I could restore \nadditional WAL files and restart PostgreSQL to continue with recovery.\n\nWith large databases and a lot of WAL files it is time consuming to repeat parts of the process.\n\nBest regards\nLeif Gunnar Erlandsen\n\n",
"msg_date": "Wed, 30 Jan 2019 15:53:51 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Fwd: Re: BUG #15589: Due to missing wal, restore ends prematurely\n and opens database for read/write"
},
{
"msg_contents": "At Wed, 30 Jan 2019 15:53:51 +0000, leif@lako.no wrote in <a3bf3b8910cd5adb8a5fbc8113eac0ab@lako.no>\n> Hi\n> I have reported a bug via the PostgreSQL bug report form, but haven't got any response so far.\n> This might not be a bug, but a feature not implemented yet.\n> I suggest making a small addition to StartupXLOG to solve the issue.\n\nI can understand what you want, but it doesn't seem acceptable\nsince it introduces inconsistency among target kinds.\n\n> The scenario I want to solve is:\n> Need to restore a backup to another server.\n> Restores pgbasebackup files\n> Restores some WAL files\n> Extracts pgbasebackup files\n> Creates recovery.conf with a PIT\n> Starts PostgreSQL\n> Recovery ends before the PIT due to missing WAL files\n> Database opens read/write\n> \n> I think the database should have paused recovery; then I could restore \n> additional WAL files and restart PostgreSQL to continue with recovery.\n\nI don't think anyone expected that the server follows\nrecovery_target_action without setting a target, so we can change\nthe behavior when any kind of target is specified. So I propose\nto follow recovery_target_action even if the target is not reached,\nwhen any recovery target is specified.\n\nWith the attached PoC (for master), recovery stops as follows:\n\nLOG: consistent recovery state reached at 0/2F000000\nLOG: database system is ready to accept read only connections\nrc_work/00000001000000000000002F': No such file or directory\nWARNING: not reached specfied recovery target, take specified action anyway\nDETAIL: This means a wrong target or missing of expected WAL files.\nLOG: recovery has paused\nHINT: Execute pg_wal_replay_resume() to continue.\n\nIf no target is specified, it promotes immediately, ignoring r_t_action.\n\nIf this is acceptable I'll post complete version (including\ndocumentation). 
I don't think this back-patcheable.\n\n> With large databases and a lot of wal-files it is time consuming to repeat parts of the process.\n\nI understand your concern.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 31 Jan 2019 21:26:48 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "\"Kyotaro HORIGUCHI\" <horiguchi.kyotaro@lab.ntt.co.jp> wrote on 31 January 2019 at 13:28:\n\n> If this is acceptable I'll post complete version (including\n> documentation). I don't think this back-patcheable.\n> \n\nIf you are asking me, then I think this is exactly what I wanted, thank you for your effort.\n\n\n>> With large databases and a lot of wal-files it is time consuming to repeat parts of the process.\n> \n> I understand your concern.\n> \n> regards.\n> \n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\nregards\nLeif Gunnar Erlandsen\n\n",
"msg_date": "Fri, 22 Feb 2019 08:49:38 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "On Thu, Jan 31, 2019 at 09:26:48PM +0900, Kyotaro HORIGUCHI wrote:\n> I don't think anyone expected that the server follows\n> recovery_target_action without setting a target, so we can change\n> the behavior when any kind of target is specified. So I propose\n> to follow recovery_target_action even if the target is not reached,\n> when any recovery target is specified.\n\nQuoting the docs:\nhttps://www.postgresql.org/docs/current/recovery-target-settings.html\nrecovery_target_action (enum)\n\"Specifies what action the server should take once the recovery target\nis *reached*.\"\n\nSo what we have now is that an action would be taken iff a stop point\nis defined and reached. What this patch changes is that the action\nwould be taken even if the stop point has *not* been reached once the\nend of a WAL stream is found.\n\n+ * to be taken regardless whether the target is reached or not .\nNit 1: Dot at the end has an extra space.\n\nNit 2: s/specfied/specified/\n\nPlease do not take me wrong, I can see that there could be use cases\nwhere it is possible to take an action at the end of a WAL stream if\nthere is less WAL than what was planned, perhaps if the OP has set\nan incorrect stop position too far in the future, still too much WAL\nwould have been replayed so it would make the base backup unusable for\nfuture uses. Also, it looks incorrect to me to change an existing\nbehavior and to use the same semantics for triggering an action if a\nstop point is defined and reached.\n--\nMichael",
"msg_date": "Tue, 26 Feb 2019 17:12:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
},
{
"msg_contents": "\"Michael Paquier\" <michael@paquier.xyz> wrote on 26 February 2019 at 09:13:\n\n> On Thu, Jan 31, 2019 at 09:26:48PM +0900, Kyotaro HORIGUCHI wrote:\n> \n>> I don't think anyone expected that the server follows\n>> recovery_target_action without setting a target, so we can change\n>> the behavior when any kind of target is specified. So I propose\n>> to follow recovery_target_action even if the target is not reached,\n>> when any recovery target is specified.\n> \n> Quoting the docs:\n> https://www.postgresql.org/docs/current/recovery-target-settings.html\n> recovery_target_action (enum)\n> \"Specifies what action the server should take once the recovery target\n> is *reached*.\"\n\nI know this, and recovery_target_action in my case was \"pause\".\nThe recovery target was specified with a date and time.\n\n> So what we have now is that an action would be taken iff a stop point\n> is defined and reached. What this patch changes is that the action\n> would be taken even if the stop point has *not* been reached once the\n> end of a WAL stream is found.\n\nYes, and this is expected behaviour in my use case. This was a PITR scenario, to a new server, and not crash recovery.\nI restored a backup and placed the WAL files in a separate directory, then I created a recovery.conf with the correct recovery_target_time.\nAfter PostgreSQL started, it stopped after a short while and opened the database in read/write.\nChecks showed the target was not reached. 
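For context, the recovery.conf in this kind of setup looks roughly like the following (the restore path and timestamp here are placeholders, not the actual values from my restore):

```
# recovery.conf (PostgreSQL 11 and earlier)
restore_command = 'cp /restore/wal/%f %p'
recovery_target_time = '2019-02-20 12:00:00'
recovery_target_action = 'pause'   # documented default when a recovery target is set
```

The point of contention is what happens when restore_command starts failing (a missing WAL segment) before recovery_target_time is reached.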
The log showed that no more WAL could be found.\nIf PostgreSQL had followed recovery_target_action, then I could have restored the missing WAL files and continued replay of WAL.\nAs this was not the case, I had to restart the process from the beginning, which took many hours.\nAnother thing to consider is that in cases such as this one, where a lot of WAL is needed for replay, there is not always enough disk space available to store it all at the same time.\n\n \n> Please do not take me wrong, I can see that there could be use cases\n> where it is possible to take an action at the end of a WAL stream if\n> there is less WAL than what was planned, perhaps if the OP has set\n> an incorrect stop position too far in the future, still too much WAL\n> would have been replayed so it would make the base backup unusable for\n> future uses. Also, it looks incorrect to me to change an existing\n> behavior and to use the same semantics for triggering an action if a\n> stop point is defined and reached.\n\nI did not set an incorrect stop position. I see this change as something most people in a similar situation would expect from their database system.\n\nAFAIK the doc does not specify what happens if recovery_target_time is specified but not reached. But as recovery_target_action defaults to \"pause\", I would have assumed \"pause\" to be the action.\n\nregards\nLeif Gunnar Erlandsen\n\n",
"msg_date": "Wed, 27 Feb 2019 09:14:40 +0000",
"msg_from": "leif@lako.no",
"msg_from_op": false,
"msg_subject": "Re: BUG #15589: Due to missing wal, restore ends prematurely and\n opens database for read/write"
}
] |
[
{
"msg_contents": "Would it make sense to add a column to pg_stat_database showing the total\nnumber of checksum errors that have occurred in a database?\n\nIt's really a \">1 means it's bad\", but it's a lot easier to monitor that in\nthe statistics views, and given how much a lot of people set their systems\nout to log, it's far too easy to miss individual checksum mismatches in the\nlogs.\n\nIf we track it at the database level, I don't think the overhead of adding\none more counter would be very high either.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 11 Jan 2019 11:20:35 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 5:21 AM Magnus Hagander <magnus@hagander.net> wrote:\n> Would it make sense to add a column to pg_stat_database showing the total number of checksum errors that have occurred in a database?\n>\n> It's really a \">1 means it's bad\", but it's a lot easier to monitor that in the statistics views, and given how much a lot of people set their systems out to log, it's far too easy to miss individual checksum mismatches in the logs.\n>\n> If we track it at the database level, I don't think the overhead of adding one more counter would be very high either.\n\nIt's probably not the ideal way to track it. If you have a terabyte or\nfifty of data, and you see that you have some checksum failures, good\nluck finding the offending blocks.\n\nBut I'm tentatively in favor of your proposal anyway, because it's\npretty simple and cheap and might help people, and doing something\nnoticeably better is probably annoyingly complicated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 11 Jan 2019 13:40:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "\n\n\nOn 1/11/19 7:40 PM, Robert Haas wrote:\n> On Fri, Jan 11, 2019 at 5:21 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> Would it make sense to add a column to pg_stat_database showing\n>> the total number of checksum errors that have occurred in a database?\n>> \n>> It's really a \">1 means it's bad\", but it's a lot easier to monitor\n>> that in the statistics views, and given how much a lot of people\n>> set their systems out to log, it's far too easy to miss individual\n>> checksum mismatches in the logs.\n>>\n>> If we track it at the database level, I don't think the overhead \n>> of adding one more counter would be very high either.\n> \n> It's probably not the ideal way to track it. If you have a terabyte or\n> fifty of data, and you see that you have some checksum failures, good\n> luck finding the offending blocks.\n> \n\nIsn't that somewhat similar to deadlocks, which we also track in\npg_stat_database? The number of deadlocks is rather useless on its own,\nyou need to dive into the server log to find the details. Same for\nchecksum errors.\n\n> But I'm tentatively in favor of your proposal anyway, because it's\n> pretty simple and cheap and might help people, and doing something\n> noticeably better is probably annoyingly complicated.\n> \n\n+1\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 11 Jan 2019 21:20:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Jan 11, 2019 at 9:20 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n>\n>\n> On 1/11/19 7:40 PM, Robert Haas wrote:\n> > On Fri, Jan 11, 2019 at 5:21 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> Would it make sense to add a column to pg_stat_database showing\n> >> the total number of checksum errors that have occurred in a database?\n> >>\n> >> It's really a \">1 means it's bad\", but it's a lot easier to monitor\n> >> that in the statistics views, and given how much a lot of people\n> >> set their systems out to log, it's far too easy to miss individual\n> >> checksum mismatches in the logs.\n> >>\n> >> If we track it at the database level, I don't think the overhead\n> >> of adding one more counter would be very high either.\n> >\n> > It's probably not the ideal way to track it. If you have a terabyte or\n> > fifty of data, and you see that you have some checksum failures, good\n> > luck finding the offending blocks.\n> >\n>\n> Isn't that somewhat similar to deadlocks, which we also track in\n> pg_stat_database? The number of deadlocks is rather useless on its own,\n> you need to dive into the server log to find the details. Same for\n> checksum errors.\n>\n\nIt is a bit similar, yeah. Though a checksum counter is really a \"you need\nto look at fixing this right away\" in a bit more sense than deadlocks. But\nyes, the fact that we already track deadlocks there is a good example. 
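For monitoring, the idea is that you'd consume such a counter with a simple query along these lines (the column name here is just what the proposal sketches, not final):

```sql
-- Flag databases that have recorded checksum failures since the last stats reset
SELECT datname, checksum_failures
  FROM pg_stat_database
 WHERE checksum_failures > 0;
```

Anything returned by that means corrupted pages were read somewhere in the database, and the server log has the block-level details.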
(Of\ncourse, I believe I added that one at some point as well, so I'm clearly\nbiased there)\n\n\n> But I'm tentatively in favor of your proposal anyway, because it's\n> pretty simple and cheap and might help people, and doing something\n> noticeably better is probably annoyingly complicated.\n> \n\n+1\n>\n\nYeah, that's the idea behind it -- it's cheap, and an\nearly-warning-indicator.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 11 Jan 2019 21:25:56 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On 1/11/19 10:25 PM, Magnus Hagander wrote:\n> On Fri, Jan 11, 2019 at 9:20 PM Tomas Vondra \n> On 1/11/19 7:40 PM, Robert Haas wrote:\n> > But I'm tentatively in favor of your proposal anyway, because it's\n> > pretty simple and cheap and might help people, and doing something\n> > noticeably better is probably annoyingly complicated.\n> >\n> \n> +1\n> \n> Yeah, that's the idea behind it -- it's cheap, and an \n> early-warning-indicator.\n\n+1\n\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Sat, 12 Jan 2019 06:15:54 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Jan 12, 2019 at 5:16 AM David Steele <david@pgmasters.net> wrote:\n\n> On 1/11/19 10:25 PM, Magnus Hagander wrote:\n> > On Fri, Jan 11, 2019 at 9:20 PM Tomas Vondra\n> > On 1/11/19 7:40 PM, Robert Haas wrote:\n> > > But I'm tentatively in favor of your proposal anyway, because it's\n> > > pretty simple and cheap and might help people, and doing something\n> > > noticeably better is probably annoyingly complicated.\n> > >\n> >\n> > +1\n> >\n> > Yeah, that's the idea behind it -- it's cheap, and an\n> > early-warning-indicator.\n>\n> +1\n>\n\nPFA is a patch to do this.\n\nIt tracks things that happen in the general backends. Possibly we should\nalso consider counting the errors actually found when running base backups?\nOTOH, that part of the code doesn't really track things like databases (as\nit operates just on the raw data directory underneath), so that\nimplementation would definitely not be as clean...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 15:00:47 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> PFA is a patch to do this.\n\n+void\n+pgstat_report_checksum_failure(void)\n+{\n+ PgStat_MsgDeadlock msg;\n\nI think that you meant PgStat_MsgChecksumFailure :)\n\n+/* ----------\n+ * pgstat_recv_checksum_failure() -\n+ *\n+ * Process a DEADLOCK message.\n+ * ----------\n\nsame here\n\nOtherwise LGTM.\n\n",
"msg_date": "Fri, 22 Feb 2019 15:16:09 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > PFA is a patch to do this.\n>\n> +void\n> +pgstat_report_checksum_failure(void)\n> +{\n> + PgStat_MsgDeadlock msg;\n>\n> I think that you meant PgStat_MsgChecksumFailure :)\n>\n> +/* ----------\n> + * pgstat_recv_checksum_failure() -\n> + *\n> + * Process a DEADLOCK message.\n> + * ----------\n>\n> same here\n>\n> Otherwise LGTM.\n>\n\nHaha, damit, that's embarassing. You can probably guess where I copy/pasted\nfrom :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 15:23:54 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:23 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n>\n> On Fri, Feb 22, 2019 at 3:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n>> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net>\n>> wrote:\n>> >\n>> > PFA is a patch to do this.\n>>\n>> +void\n>> +pgstat_report_checksum_failure(void)\n>> +{\n>> + PgStat_MsgDeadlock msg;\n>>\n>> I think that you meant PgStat_MsgChecksumFailure :)\n>>\n>> +/* ----------\n>> + * pgstat_recv_checksum_failure() -\n>> + *\n>> + * Process a DEADLOCK message.\n>> + * ----------\n>>\n>> same here\n>>\n>> Otherwise LGTM.\n>>\n>\n> Haha, damit, that's embarassing. You can probably guess where I\n> copy/pasted from :)\n>\n>\nAnd of course, then I forgot to attach the new file.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 15:24:52 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:25 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Fri, Feb 22, 2019 at 3:23 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>>\n>>\n>> On Fri, Feb 22, 2019 at 3:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>>\n>>> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>> >\n>>> > PFA is a patch to do this.\n>>>\n>>> +void\n>>> +pgstat_report_checksum_failure(void)\n>>> +{\n>>> + PgStat_MsgDeadlock msg;\n>>>\n>>> I think that you meant PgStat_MsgChecksumFailure :)\n>>>\n>>> +/* ----------\n>>> + * pgstat_recv_checksum_failure() -\n>>> + *\n>>> + * Process a DEADLOCK message.\n>>> + * ----------\n>>>\n>>> same here\n>>>\n>>> Otherwise LGTM.\n>>\n>>\n>> Haha, damit, that's embarassing. You can probably guess where I copy/pasted from :)\n\nheh :)\n\n>>\n>\n> And of course, then I forgot to attach the new file.\n\nIt all looks fine. One minor nitpicking issue I just noticed, there's\nan extra space there:\n\n+ dbentry->n_checksum_failures ++;\n\nI'm marking it as ready for committer!\n\n",
"msg_date": "Fri, 22 Feb 2019 15:54:17 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n\nSorry I just realized that I totally forgot this part of the thread.\n\nWhile it's true that we operate on raw directory, I see that sendDir()\nalready setup a isDbDir var, and if this is true lastDir should\ncontain the oid of the underlying database. Wouldn't it be enough to\ncall sendFile() using this, something like (untested):\n\nif (!sizeonly)\n- sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n+ sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\nisDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n\nand accordingly report any checksum error from sendFile()?\n\n",
"msg_date": "Mon, 4 Mar 2019 20:31:09 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 8:31 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n>\n> Sorry I just realized that I totally forgot this part of the thread.\n>\n> While it's true that we operate on raw directory, I see that sendDir()\n> already setup a isDbDir var, and if this is true lastDir should\n> contain the oid of the underlying database. Wouldn't it be enough to\n> call sendFile() using this, something like (untested):\n>\n> if (!sizeonly)\n> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n>\n> and accordingly report any checksum error from sendFile()?\n\nSo this seem to work just fine without adding much code. PFA v3 of\nMagnus' patch including error reporting for BASE_BACKUP.",
"msg_date": "Fri, 8 Mar 2019 21:54:54 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > It tracks things that happen in the general backends. Possibly we should\n> also consider counting the errors actually found when running base backups?\n> OTOH, that part of the code doesn't really track things like databases (as\n> it operates just on the raw data directory underneath), so that\n> implementation would definitely not be as clean...\n>\n> Sorry I just realized that I totally forgot this part of the thread.\n>\n> While it's true that we operate on raw directory, I see that sendDir()\n> already setup a isDbDir var, and if this is true lastDir should\n> contain the oid of the underlying database. Wouldn't it be enough to\n> call sendFile() using this, something like (untested):\n>\n> if (!sizeonly)\n> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n>\n> and accordingly report any checksum error from sendFile()?\n>\n\nThat seems it was easy enough. PFA an updated patch that does this, and\nalso rebased so it doesn't conflict on oid.\n\n(yes, while moving this from draft to publish after lunch, I realized that\nyou put a patch togerher for about the same. So let's merge it)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 8 Mar 2019 15:35:22 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> >\n>> > It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n>>\n>> Sorry I just realized that I totally forgot this part of the thread.\n>>\n>> While it's true that we operate on raw directory, I see that sendDir()\n>> already setup a isDbDir var, and if this is true lastDir should\n>> contain the oid of the underlying database. Wouldn't it be enough to\n>> call sendFile() using this, something like (untested):\n>>\n>> if (!sizeonly)\n>> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n>> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n>> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n>>\n>> and accordingly report any checksum error from sendFile()?\n>\n>\n> That seems it was easy enough. PFA an updated patch that does this, and also rebased so it doesn't conflict on oid.\n>\n> (yes, while moving this from draft to publish after lunch, I realized that you put a patch togerher for about the same. So let's merge it)\n\nThanks! Our implementations are quite similar, so I'm fine with most\nof the changes :) I'm just not sure about having two distinct\nfunctions for reporting failures, given that there's only one caller\nfor each. On the other hand it avoids to include miscadmin.h in\nbufpage.c.\n\nThat's just a detail, so I'm marking it (again) as ready for committer!\n\n",
"msg_date": "Sat, 9 Mar 2019 09:34:32 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 9:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >> >\n> >> > It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n> >>\n> >> Sorry I just realized that I totally forgot this part of the thread.\n> >>\n> >> While it's true that we operate on raw directory, I see that sendDir()\n> >> already setup a isDbDir var, and if this is true lastDir should\n> >> contain the oid of the underlying database. Wouldn't it be enough to\n> >> call sendFile() using this, something like (untested):\n> >>\n> >> if (!sizeonly)\n> >> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> >> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> >> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n> >>\n> >> and accordingly report any checksum error from sendFile()?\n> >\n> > That seems it was easy enough. PFA an updated patch that does this, and also rebased so it doesn't conflict on oid.\n> >\n\nSorry, I have again new comments after a little bit more thinking.\nI'm wondering if we can do something about shared objects while we're\nat it. They don't belong to any database, so it's a little bit\northogonal to this proposal, but it seems quite important to track\nerror on those too!\n\nWhat about adding a new field in PgStat_GlobalStats for that? 
We can\nuse the same lastDir to easily detect such objects and slightly adapt\nsendFile again, which seems quite straightforward.\n\n",
"msg_date": "Sat, 9 Mar 2019 19:41:45 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 12:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> >\n> >> > It tracks things that happen in the general backends. Possibly we\n> should also consider counting the errors actually found when running base\n> backups? OTOH, that part of the code doesn't really track things like\n> databases (as it operates just on the raw data directory underneath), so\n> that implementation would definitely not be as clean...\n> >>\n> >> Sorry I just realized that I totally forgot this part of the thread.\n> >>\n> >> While it's true that we operate on raw directory, I see that sendDir()\n> >> already setup a isDbDir var, and if this is true lastDir should\n> >> contain the oid of the underlying database. Wouldn't it be enough to\n> >> call sendFile() using this, something like (untested):\n> >>\n> >> if (!sizeonly)\n> >> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> >> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> >> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n> >>\n> >> and accordingly report any checksum error from sendFile()?\n> >\n> >\n> > That seems it was easy enough. PFA an updated patch that does this, and\n> also rebased so it doesn't conflict on oid.\n> >\n> > (yes, while moving this from draft to publish after lunch, I realized\n> that you put a patch togerher for about the same. So let's merge it)\n>\n> Thanks! Our implementations are quite similar, so I'm fine with most\n> of the changes :) I'm just not sure about having two distinct\n> functions for reporting failures, given that there's only one caller\n> for each. 
On the other hand it avoids to include miscadmin.h in\n> bufpage.c.\n>\n\nYeah, and it brings \"cosistence\" to at least the calling point(s) within\nregular backends.\n\n\n\n> That's just a detail, so I'm marking it (again) as ready for committer!\n>\n\nThanks, and pushed :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Mar 9, 2019 at 12:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> >\n>> > It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n>>\n>> Sorry I just realized that I totally forgot this part of the thread.\n>>\n>> While it's true that we operate on raw directory, I see that sendDir()\n>> already setup a isDbDir var, and if this is true lastDir should\n>> contain the oid of the underlying database. Wouldn't it be enough to\n>> call sendFile() using this, something like (untested):\n>>\n>> if (!sizeonly)\n>> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n>> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n>> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n>>\n>> and accordingly report any checksum error from sendFile()?\n>\n>\n> That seems it was easy enough. 
PFA an updated patch that does this, and also rebased so it doesn't conflict on oid.\n>\n> (yes, while moving this from draft to publish after lunch, I realized that you put a patch togerher for about the same. So let's merge it)\n\nThanks! Our implementations are quite similar, so I'm fine with most\nof the changes :) I'm just not sure about having two distinct\nfunctions for reporting failures, given that there's only one caller\nfor each. On the other hand it avoids to include miscadmin.h in\nbufpage.c.Yeah, and it brings \"cosistence\" to at least the calling point(s) within regular backends. \nThat's just a detail, so I'm marking it (again) as ready for committer!\nThanks, and pushed :)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 9 Mar 2019 10:48:07 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Mar 9, 2019 at 9:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > >\n> > > On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > >>\n> > >> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > >> >\n> > >> > It tracks things that happen in the general backends. Possibly we\n> should also consider counting the errors actually found when running base\n> backups? OTOH, that part of the code doesn't really track things like\n> databases (as it operates just on the raw data directory underneath), so\n> that implementation would definitely not be as clean...\n> > >>\n> > >> Sorry I just realized that I totally forgot this part of the thread.\n> > >>\n> > >> While it's true that we operate on raw directory, I see that sendDir()\n> > >> already setup a isDbDir var, and if this is true lastDir should\n> > >> contain the oid of the underlying database. Wouldn't it be enough to\n> > >> call sendFile() using this, something like (untested):\n> > >>\n> > >> if (!sizeonly)\n> > >> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> > >> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> > >> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n> > >>\n> > >> and accordingly report any checksum error from sendFile()?\n> > >\n> > > That seems it was easy enough. PFA an updated patch that does this,\n> and also rebased so it doesn't conflict on oid.\n> > >\n>\n> Sorry, I have again new comments after a little bit more thinking.\n> I'm wondering if we can do something about shared objects while we're\n> at it. 
They don't belong to any database, so it's a little bit\n> orthogonal to this proposal, but it seems quite important to track\n> error on those too!\n>\n> What about adding a new field in PgStat_GlobalStats for that? We can\n> use the same lastDir to easily detect such objects and slightly adapt\n> sendFile again, which seems quite straightforward.\n>\n\nAh, didn't spot that one until after I pushed :/ Sorry about that.\n\nHmm. That's an interesting thought. And then add a column to\npg_stat_bgwriter, I assume? (Which is an ever increasingly bad name for the\nview, but that's unrelated to this)\n\nQuestion is then what number that should show -- only the checksum counter\nin non-database-fields, or the total number across the cluster?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sat, Mar 9, 2019 at 9:34 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 9, 2019 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Mon, Mar 4, 2019 at 11:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> On Fri, Feb 22, 2019 at 3:01 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >> >\n> >> > It tracks things that happen in the general backends. Possibly we should also consider counting the errors actually found when running base backups? OTOH, that part of the code doesn't really track things like databases (as it operates just on the raw data directory underneath), so that implementation would definitely not be as clean...\n> >>\n> >> Sorry I just realized that I totally forgot this part of the thread.\n> >>\n> >> While it's true that we operate on raw directory, I see that sendDir()\n> >> already setup a isDbDir var, and if this is true lastDir should\n> >> contain the oid of the underlying database. 
Wouldn't it be enough to\n> >> call sendFile() using this, something like (untested):\n> >>\n> >> if (!sizeonly)\n> >> - sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true);\n> >> + sent = sendFile(pathbuf, pathbuf + basepathlen + 1, &statbuf, true,\n> >> isDbDir ? pg_atoi(lastDir+1, 4) : InvalidOid);\n> >>\n> >> and accordingly report any checksum error from sendFile()?\n> >\n> > That seems it was easy enough. PFA an updated patch that does this, and also rebased so it doesn't conflict on oid.\n> >\n\nSorry, I have again new comments after a little bit more thinking.\nI'm wondering if we can do something about shared objects while we're\nat it. They don't belong to any database, so it's a little bit\northogonal to this proposal, but it seems quite important to track\nerror on those too!\n\nWhat about adding a new field in PgStat_GlobalStats for that? We can\nuse the same lastDir to easily detect such objects and slightly adapt\nsendFile again, which seems quite straightforward.\nAh, didn't spot that one until after I pushed :/ Sorry about that.Hmm. That's an interesting thought. And then add a column to pg_stat_bgwriter, I assume? (Which is an ever increasingly bad name for the view, but that's unrelated to this)Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 9 Mar 2019 10:49:50 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 7:48 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Mar 9, 2019 at 12:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> Thanks! Our implementations are quite similar, so I'm fine with most\n>> of the changes :) I'm just not sure about having two distinct\n>> functions for reporting failures, given that there's only one caller\n>> for each. On the other hand it avoids to include miscadmin.h in\n>> bufpage.c.\n>\n>\n> Yeah, and it brings \"cosistence\" to at least the calling point(s) within regular backends.\n>\n>\n>>\n>> That's just a detail, so I'm marking it (again) as ready for committer!\n>\n>\n> Thanks, and pushed :)\n\nThanks :)\n\n",
"msg_date": "Sat, 9 Mar 2019 19:53:41 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> Sorry, I have again new comments after a little bit more thinking.\n>> I'm wondering if we can do something about shared objects while we're\n>> at it. They don't belong to any database, so it's a little bit\n>> orthogonal to this proposal, but it seems quite important to track\n>> error on those too!\n>>\n>> What about adding a new field in PgStat_GlobalStats for that? We can\n>> use the same lastDir to easily detect such objects and slightly adapt\n>> sendFile again, which seems quite straightforward.\n>\n>\n> Ah, didn't spot that one until after I pushed :/ Sorry about that.\n\nNo problem, I should have thought about it sooner anyway.\n\n> Hmm. That's an interesting thought. And then add a column to pg_stat_bgwriter, I assume?\n\nYes, and a new entry for PgStat_Shared_Reset_Target I guess.\n\n (Which is an ever increasingly bad name for the view, but that's\nunrelated to this)\n\nyeah :/\n\n> Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?\n\nI'd say only for non-database-fields errors, especially if we can\nreset each counters separately. If necessary, we can add a new view\nto give a global overview of checksum errors for DBA convenience.\n\n",
"msg_date": "Sat, 9 Mar 2019 19:58:19 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 7:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> Sorry, I have again new comments after a little bit more thinking.\n> >> I'm wondering if we can do something about shared objects while we're\n> >> at it. They don't belong to any database, so it's a little bit\n> >> orthogonal to this proposal, but it seems quite important to track\n> >> error on those too!\n> >>\n> >> What about adding a new field in PgStat_GlobalStats for that? We can\n> >> use the same lastDir to easily detect such objects and slightly adapt\n> >> sendFile again, which seems quite straightforward.\n>\n> > Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?\n>\n> I'd say only for non-database-fields errors, especially if we can\n> reset each counters separately. If necessary, we can add a new view\n> to give a global overview of checksum errors for DBA convenience.\n\nSo, after reading current implementation, I don't think that\nPgStat_GlobalStats is the right place. It's already enough of a mess,\nand clearly pg_stat_reset_shared('bgwriter') would not make any sense\nif it did reset the shared relations checksum errors (though arguably\nthe fact that's it's resetting checkpointer stats right now is hardly\nbetter), and handling a different target to reset part of\nPgStat_GlobalStats counters would be an ugly kludge.\n\nI'm considering adding a new PgStat_ChecksumStats for that purpose\ninstead, but I don't know if that's acceptable to do so in the last\ncommitfest. It seems worthwhile to add it eventually, since we'll\nprobably end up having more things to report to users related to\nchecksum. 
Online enabling of checksum could be the most immediate\npotential target.\n\nIf that's acceptable, I was thinking this new stat could have those\nfields with the first drop:\n\n- number of non-db-related checksum checks done\n- number of non-db-related checksum checks failed\n- last stats reset\n\n(and adding the number of checks for db-related blocks done with the\ncurrent checksum errors counter). Maybe also adding a\npg_checksum_stats view that would summarise all the counters in one\nplace.\n\nIt'll obviously add some traffic to the stats collector, but I'd hope\nnot too much, since BufferAlloc shouldn't be that frequent, and\nBASE_BACKUP reports stats only once per file.\n\n",
"msg_date": "Sun, 10 Mar 2019 13:13:50 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sun, Mar 10, 2019 at 1:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 9, 2019 at 7:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > > On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >>\n> > >> Sorry, I have again new comments after a little bit more thinking.\n> > >> I'm wondering if we can do something about shared objects while we're\n> > >> at it. They don't belong to any database, so it's a little bit\n> > >> orthogonal to this proposal, but it seems quite important to track\n> > >> error on those too!\n> > >>\n> > >> What about adding a new field in PgStat_GlobalStats for that? We can\n> > >> use the same lastDir to easily detect such objects and slightly adapt\n> > >> sendFile again, which seems quite straightforward.\n> >\n> > > Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?\n> >\n> > I'd say only for non-database-fields errors, especially if we can\n> > reset each counters separately. If necessary, we can add a new view\n> > to give a global overview of checksum errors for DBA convenience.\n>\n> I'm considering adding a new PgStat_ChecksumStats for that purpose\n> instead, but I don't know if that's acceptable to do so in the last\n> commitfest. It seems worthwhile to add it eventually, since we'll\n> probably end up having more things to report to users related to\n> checksum. 
Online enabling of checksum could be the most immediate\n> potential target.\n\nI wasn't aware that we were already storing informations about shared\nobjects in PgStat_StatDBEntry, with an InvalidOid as databaseid\n(though we don't have any system view that are actually showing\ninformation for such objects).\n\nAs a result I ended up simply adding counters for the number of total\nchecks and the timestamp of the last failure in PgStat_StatDBEntry,\nmaking attached patch very lightweight. I moved all the checksum\nrelated counters out of pg_stat_database in a new pg_stat_checksum\nview. It avoids to make pg_stat_database too wide, and also allows to\ndisplay information about shared object in this new view (some of the\nother counters don't really make sense for shared objects or could\nbreak existing monitoring query). While at it, I tried to add a\nlittle bit of documentation wrt. checksum monitoring.\n\n",
"msg_date": "Wed, 13 Mar 2019 16:53:26 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Mar 13, 2019 at 4:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Mar 10, 2019 at 1:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 9, 2019 at 7:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > > >\n> > > > On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >>\n> > > >> Sorry, I have again new comments after a little bit more thinking.\n> > > >> I'm wondering if we can do something about shared objects while we're\n> > > >> at it. They don't belong to any database, so it's a little bit\n> > > >> orthogonal to this proposal, but it seems quite important to track\n> > > >> error on those too!\n> > > >>\n> > > >> What about adding a new field in PgStat_GlobalStats for that? We can\n> > > >> use the same lastDir to easily detect such objects and slightly adapt\n> > > >> sendFile again, which seems quite straightforward.\n> > >\n> > > > Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?\n> > >\n> > > I'd say only for non-database-fields errors, especially if we can\n> > > reset each counters separately. If necessary, we can add a new view\n> > > to give a global overview of checksum errors for DBA convenience.\n> >\n> > I'm considering adding a new PgStat_ChecksumStats for that purpose\n> > instead, but I don't know if that's acceptable to do so in the last\n> > commitfest. It seems worthwhile to add it eventually, since we'll\n> > probably end up having more things to report to users related to\n> > checksum. 
Online enabling of checksum could be the most immediate\n> > potential target.\n>\n> I wasn't aware that we were already storing informations about shared\n> objects in PgStat_StatDBEntry, with an InvalidOid as databaseid\n> (though we don't have any system view that are actually showing\n> information for such objects).\n>\n> As a result I ended up simply adding counters for the number of total\n> checks and the timestamp of the last failure in PgStat_StatDBEntry,\n> making attached patch very lightweight. I moved all the checksum\n> related counters out of pg_stat_database in a new pg_stat_checksum\n> view. It avoids to make pg_stat_database too wide, and also allows to\n> display information about shared object in this new view (some of the\n> other counters don't really make sense for shared objects or could\n> break existing monitoring query). While at it, I tried to add a\n> little bit of documentation wrt. checksum monitoring.\n\nand of course I forgot to attach the patch.",
"msg_date": "Wed, 13 Mar 2019 16:54:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Mar 13, 2019 at 4:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Mar 13, 2019 at 4:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sun, Mar 10, 2019 at 1:13 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > >\n> > > On Sat, Mar 9, 2019 at 7:58 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > > >\n> > > > On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > > > >\n> > > > > On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > > > >>\n> > > > >> Sorry, I have again new comments after a little bit more thinking.\n> > > > >> I'm wondering if we can do something about shared objects while\n> we're\n> > > > >> at it. They don't belong to any database, so it's a little bit\n> > > > >> orthogonal to this proposal, but it seems quite important to track\n> > > > >> error on those too!\n> > > > >>\n> > > > >> What about adding a new field in PgStat_GlobalStats for that? We\n> can\n> > > > >> use the same lastDir to easily detect such objects and slightly\n> adapt\n> > > > >> sendFile again, which seems quite straightforward.\n> > > >\n> > > > > Question is then what number that should show -- only the checksum\n> counter in non-database-fields, or the total number across the cluster?\n> > > >\n> > > > I'd say only for non-database-fields errors, especially if we can\n> > > > reset each counters separately. If necessary, we can add a new view\n> > > > to give a global overview of checksum errors for DBA convenience.\n> > >\n> > > I'm considering adding a new PgStat_ChecksumStats for that purpose\n> > > instead, but I don't know if that's acceptable to do so in the last\n> > > commitfest. It seems worthwhile to add it eventually, since we'll\n> > > probably end up having more things to report to users related to\n> > > checksum. 
Online enabling of checksum could be the most immediate\n> > > potential target.\n> >\n> > I wasn't aware that we were already storing informations about shared\n> > objects in PgStat_StatDBEntry, with an InvalidOid as databaseid\n> > (though we don't have any system view that are actually showing\n> > information for such objects).\n> >\n> > As a result I ended up simply adding counters for the number of total\n> > checks and the timestamp of the last failure in PgStat_StatDBEntry,\n> > making attached patch very lightweight. I moved all the checksum\n> > related counters out of pg_stat_database in a new pg_stat_checksum\n> > view. It avoids to make pg_stat_database too wide, and also allows to\n> > display information about shared object in this new view (some of the\n> > other counters don't really make sense for shared objects or could\n> > break existing monitoring query). While at it, I tried to add a\n> > little bit of documentation wrt. checksum monitoring.\n>\n> and of course I forgot to attach the patch.\n>\n\nDoes it really make any sense to track \"number of checksum checks\"? In any\nsort of interesting database that's just going to be an insanely high\nnumber, isn't it? (And also, to stay consistent with checksum failures, we\nshould of course also count the checks done in base backups, which is not\nin the patch. But I'm more thinking we should drop it)\n\nI do like the addition of the \"last failure\" column, that's really useful.\n\nHaving thought some more about this, I wonder if the right thing to do is\nto actually add a row to pg_stat_database for the global stats, rather than\ninvent a separate view. I can see the argument going both ways, but\nparticularly with the name pg_stat_checksums we are setting a pattern that\nwill create one view for each counter. That's not very good, I think.\n\nIn the end I'm somewhat split on the idea of pg_stat_database with a NULL\nrow or pg_stat_checkpoints. 
What do others think?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Mar 13, 2019 at 4:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Wed, Mar 13, 2019 at 4:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Mar 10, 2019 at 1:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 9, 2019 at 7:58 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 9, 2019 at 7:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > > >\n> > > > On Sat, Mar 9, 2019 at 10:41 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >>\n> > > >> Sorry, I have again new comments after a little bit more thinking.\n> > > >> I'm wondering if we can do something about shared objects while we're\n> > > >> at it. They don't belong to any database, so it's a little bit\n> > > >> orthogonal to this proposal, but it seems quite important to track\n> > > >> error on those too!\n> > > >>\n> > > >> What about adding a new field in PgStat_GlobalStats for that? We can\n> > > >> use the same lastDir to easily detect such objects and slightly adapt\n> > > >> sendFile again, which seems quite straightforward.\n> > >\n> > > > Question is then what number that should show -- only the checksum counter in non-database-fields, or the total number across the cluster?\n> > >\n> > > I'd say only for non-database-fields errors, especially if we can\n> > > reset each counters separately. If necessary, we can add a new view\n> > > to give a global overview of checksum errors for DBA convenience.\n> >\n> > I'm considering adding a new PgStat_ChecksumStats for that purpose\n> > instead, but I don't know if that's acceptable to do so in the last\n> > commitfest. It seems worthwhile to add it eventually, since we'll\n> > probably end up having more things to report to users related to\n> > checksum. 
Online enabling of checksum could be the most immediate\n> > potential target.\n>\n> I wasn't aware that we were already storing informations about shared\n> objects in PgStat_StatDBEntry, with an InvalidOid as databaseid\n> (though we don't have any system view that are actually showing\n> information for such objects).\n>\n> As a result I ended up simply adding counters for the number of total\n> checks and the timestamp of the last failure in PgStat_StatDBEntry,\n> making attached patch very lightweight. I moved all the checksum\n> related counters out of pg_stat_database in a new pg_stat_checksum\n> view. It avoids to make pg_stat_database too wide, and also allows to\n> display information about shared object in this new view (some of the\n> other counters don't really make sense for shared objects or could\n> break existing monitoring query). While at it, I tried to add a\n> little bit of documentation wrt. checksum monitoring.\n\nand of course I forgot to attach the patch.\nDoes it really make any sense to track \"number of checksum checks\"? In any sort of interesting database that's just going to be an insanely high number, isn't it? (And also, to stay consistent with checksum failures, we should of course also count the checks done in base backups, which is not in the patch. But I'm more thinking we should drop it)I do like the addition of the \"last failure\" column, that's really useful.Having thought some more about this, I wonder if the right thing to do is to actually add a row to pg_stat_database for the global stats, rather than invent a separate view. I can see the argument going both ways, but particularly with the name pg_stat_checksums we are setting a pattern that will create one view for each counter. That's not very good, I think.In the end I'm somewhat split on the idea of pg_stat_database with a NULL row or pg_stat_checkpoints. What do others think?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 30 Mar 2019 14:33:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 2:33 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Mar 13, 2019 at 4:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, Mar 13, 2019 at 4:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > As a result I ended up simply adding counters for the number of total\n>> > checks and the timestamp of the last failure in PgStat_StatDBEntry,\n>> > making attached patch very lightweight. I moved all the checksum\n>> > related counters out of pg_stat_database in a new pg_stat_checksum\n>> > view. It avoids to make pg_stat_database too wide, and also allows to\n>> > display information about shared object in this new view (some of the\n>> > other counters don't really make sense for shared objects or could\n>> > break existing monitoring query). While at it, I tried to add a\n>> > little bit of documentation wrt. checksum monitoring.\n>>\n>> and of course I forgot to attach the patch.\n>\n>\n> Does it really make any sense to track \"number of checksum checks\"? In any sort of interesting database that's just going to be an insanely high number, isn't it? (And also, to stay consistent with checksum failures, we should of course also count the checks done in base backups, which is not in the patch. But I'm more thinking we should drop it)\n\nThanks for looking at it!\n\nIt's surely going to be a huge number on databases with a large number\nof buffer eviction and/or frequent pg_basebackup. The idea was to be\nable to know if the possible lack of failure was due to lack of check\nat all or because the server appears to be healthy, without spamming\ngettimeofday calls. If having a last_check() is better, I'm fine with\nit. 
If it's useless, let's drop it.\n\nThe number of checks was supposed to also be tracked in base_backups, with\n\n@@ -1527,6 +1527,8 @@ sendFile(const char *readfilename, const char\n*tarfilename, struct stat *statbuf\n \"failures in file \\\"%s\\\" will not \"\n \"be reported\", readfilename)));\n }\n+ else if (block_retry == false)\n+ checksum_checks++;\n\n> Having thought some more about this, I wonder if the right thing to do is to actually add a row to pg_stat_database for the global stats, rather than invent a separate view. I can see the argument going both ways, but particularly with the name pg_stat_checksums we are setting a pattern that will create one view for each counter. That's not very good, I think.\n>\n> In the end I'm somewhat split on the idea of pg_stat_database with a NULL row or pg_stat_checkpoints. What do others think?\n\nI agree that having a separate view for each counter is a bad idea.\nBut what I was thinking is that we'll probably end up with a view to\ntrack per-db online checksum activation progress/activity/status at\nsome point (similar to pg_stat_progress_vacuum), so why not start\nwith this dedicated view right now and add new counters later, either\nin pgstat and/or some shmem, as long as we keep the view name as the\nSQL interface.\n\nAnyway, I don't have a strong preference for any implementation, so\nI'll be happy to send an updated patch with what ends up being\npreferred.\n\n\n",
"msg_date": "Sat, 30 Mar 2019 15:57:31 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 3:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Mar 30, 2019 at 2:33 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Wed, Mar 13, 2019 at 4:54 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >>\n> >> On Wed, Mar 13, 2019 at 4:53 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >> >\n> >> > As a result I ended up simply adding counters for the number of total\n> >> > checks and the timestamp of the last failure in PgStat_StatDBEntry,\n> >> > making attached patch very lightweight. I moved all the checksum\n> >> > related counters out of pg_stat_database in a new pg_stat_checksum\n> >> > view. It avoids to make pg_stat_database too wide, and also allows to\n> >> > display information about shared object in this new view (some of the\n> >> > other counters don't really make sense for shared objects or could\n> >> > break existing monitoring query). While at it, I tried to add a\n> >> > little bit of documentation wrt. checksum monitoring.\n> >>\n> >> and of course I forgot to attach the patch.\n> >\n> >\n> > Does it really make any sense to track \"number of checksum checks\"? In\n> any sort of interesting database that's just going to be an insanely high\n> number, isn't it? (And also, to stay consistent with checksum failures, we\n> should of course also count the checks done in base backups, which is not\n> in the patch. But I'm more thinking we should drop it)\n>\n> Thanks for looking at it!\n>\n> It's surely going to be a huge number on databases with a large number\n> of buffer eviction and/or frequent pg_basebackup. The idea was to be\n> able to know if the possible lack of failure was due to lack of check\n> at all or because the server appears to be healthy, without spamming\n> gettimeofday calls. If having a last_check() is better, I'm fine with\n> it. 
If it's useless, let's drop it.\n>\n\nI'm not sure either of them are really useful, but would be happy to take\ninput from others :)\n\n\nThe number of checks was supposed to also be tracked in base_backups, with\n>\n\nOh, that's a sloppy review. I see it's there. However, it doesn't appear to\ncount up in the *normal* backend path...\n\nMy vote is still to drop it completely, but if we're keeping it, it has to\ngo in both paths.\n\n\n> Having thought some more about this, I wonder if the right thing to do is\n> to actually add a row to pg_stat_database for the global stats, rather than\n> invent a separate view. I can see the argument going both ways, but\n> particularly with the name pg_stat_checksums we are setting a pattern that\n> will create one view for each counter. That's not very good, I think.\n> >\n> > In the end I'm somewhat split on the idea of pg_stat_database with a\n> NULL row or pg_stat_checkpoints. What do others think?\n>\n> I agree that having a separate view for each counter if a bad idea.\n> But what I was thinking is that we'll probably end up with a view to\n> track per-db online checksum activation progress/activity/status at\n> some point (similar to pg_stat_progress_vacuum), so why not starting\n> with this dedicated view right now and add new counters later, either\n> in pgstat and/or some shmem, as long as we keep the view name as SQL\n> interface.\n>\n\nTechnically, that should be in pg_stat_progress_checksums to be consistent\n:) So whichever way we turn, it's going to be inconsistent with something.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Mar 30, 2019 at 3:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sat, Mar 30, 2019 at 2:33 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Mar 13, 2019 at 4:54 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, Mar 13, 2019 at 4:53 PM Julien 
Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > As a result I ended up simply adding counters for the number of total\n>> > checks and the timestamp of the last failure in PgStat_StatDBEntry,\n>> > making attached patch very lightweight. I moved all the checksum\n>> > related counters out of pg_stat_database in a new pg_stat_checksum\n>> > view. It avoids to make pg_stat_database too wide, and also allows to\n>> > display information about shared object in this new view (some of the\n>> > other counters don't really make sense for shared objects or could\n>> > break existing monitoring query). While at it, I tried to add a\n>> > little bit of documentation wrt. checksum monitoring.\n>>\n>> and of course I forgot to attach the patch.\n>\n>\n> Does it really make any sense to track \"number of checksum checks\"? In any sort of interesting database that's just going to be an insanely high number, isn't it? (And also, to stay consistent with checksum failures, we should of course also count the checks done in base backups, which is not in the patch. But I'm more thinking we should drop it)\n\nThanks for looking at it!\n\nIt's surely going to be a huge number on databases with a large number\nof buffer eviction and/or frequent pg_basebackup. The idea was to be\nable to know if the possible lack of failure was due to lack of check\nat all or because the server appears to be healthy, without spamming\ngettimeofday calls. If having a last_check() is better, I'm fine with\nit. If it's useless, let's drop it.I'm not sure either of them are really useful, but would be happy to take input from others :)\nThe number of checks was supposed to also be tracked in base_backups, withOh, that's a sloppy review. I see it's there. 
However, it doesn't appear to count up in the *normal* backend path...My vote is still to drop it completely, but if we're keeping it, it has to go in both paths.\n> Having thought some more about this, I wonder if the right thing to do is to actually add a row to pg_stat_database for the global stats, rather than invent a separate view. I can see the argument going both ways, but particularly with the name pg_stat_checksums we are setting a pattern that will create one view for each counter. That's not very good, I think.\n>\n> In the end I'm somewhat split on the idea of pg_stat_database with a NULL row or pg_stat_checkpoints. What do others think?\n\nI agree that having a separate view for each counter if a bad idea.\nBut what I was thinking is that we'll probably end up with a view to\ntrack per-db online checksum activation progress/activity/status at\nsome point (similar to pg_stat_progress_vacuum), so why not starting\nwith this dedicated view right now and add new counters later, either\nin pgstat and/or some shmem, as long as we keep the view name as SQL\ninterface.Technically, that should be in pg_stat_progress_checksums to be consistent :) So whichever way we turn, it's going to be inconsistent with something.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 30 Mar 2019 16:01:50 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Sorry for the delay, I had to catch a train.\n\nOn Sat, Mar 30, 2019 at 4:02 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> My vote is still to drop it completely, but if we're keeping it, it has to go in both paths.\n\nOk. For now I'm attaching v2, which drops this field, renames the view\nto pg_stat_checksums (terminal s), and uses the policy for choosing\na random oid in the 8000..9999 range for new functions.\n\nI'd also have to get more feedback on this. For now, I'll add this\nthread to the pg12 open items, as a follow up of the initial code\ndrop.\n\n> Technically, that should be in pg_stat_progress_checksums to be consistent :) So whichever way we turn, it's going to be inconsistent with something.\n\nIndeed :)",
"msg_date": "Sat, 30 Mar 2019 18:15:11 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 06:15:11PM +0100, Julien Rouhaud wrote:\n> I'd also have to get more feedback on this. For now, I'll add this\n> thread to the pg12 open items, as a follow up of the initial code\n> drop.\n\nCatching up here... I think that having a completely separate view\nwith one row for each database and one row for shared objects makes\nthe most sense based on what has been proposed on this thread. Being\nable to track checksum failures for shared catalogs is really\nsomething I'd like to be able to see easily, and I have seen\ncorruption involving such objects from time to time. I think that we\nshould have a design which is extensible. One thing which is not\nproposed on this patch, and I am fine with it as a first draft, is\nthat we don't have any information about the broken block number and\nthe file involved. My gut tells me that we'd want a separate view,\nlike pg_stat_checksums_details with one tuple per (dboid, rel, fork,\nblck) to be complete. But that's just for future work.\n\nFor the progress part, we would most likely have a separate view for\nthat as well, as the view should show no rows if there is no operation\nin progress.\n\nThe patch looks rather clean to me, I have some comments.\n\n- <application>pg_checksums</application>. The exit status is zero if there\n- are no checksum errors when checking them, and nonzero if at least one\n- checksum failure is detected. If enabling or disabling checksums, the\n- exit status is nonzero if the operation failed.\n+ <application>pg_checksums</application>. As a consequence, the\n+ <structname>pg_stat_checksums</structname> view won't reflect this activity.\n+ The exit status is zero if there are no checksum errors when checking them,\n+ and nonzero if at least one checksum failure is detected. If enabling or\n+ disabling checksums, the exit status is nonzero if the operation failed.\n\nThe docs of pg_checksums already clearly state that the cluster needs\nto be offline, so I am not sure that this addition is necessary.\n\n@@ -1539,6 +1539,8 @@ pgstat_report_checksum_failures_in_db(Oid dboid,\nint failurecount)\n\nPlease note that there is no need to have the list of arguments in the\ncomment block at the top of pgstat_report_checksum_failures_in_db().\n \n+\tif ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n+\t\tresult = 0;\n+\telse\n+\t\tresult = dbentry->last_checksum_failure;\n+\n+\tif (result == 0)\n+\t\tPG_RETURN_NULL();\n+\telse\n+\t\tPG_RETURN_TIMESTAMPTZ(result);\n+}\n\nNo need for two ifs here. What about just that?\nif (NULL)\n PG_RETURN_NULL();\nelse\n PG_RETURN_TIMESTAMPTZ(last_checksum_failure);\n--\nMichael",
"msg_date": "Tue, 2 Apr 2019 13:56:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Mar 30, 2019 at 06:15:11PM +0100, Julien Rouhaud wrote:\n> > I'd also have to get more feedback on this. For now, I'll add this\n> > thread to the pg12 open items, as a follow up of the initial code\n> > drop.\n>\n> Catching up here... I think that having a completely separate view\n> with one row for each database and one row for shared objects makes\n> the most sense based on what has been proposed on this thread. Being\n> able to track checksum failures for shared catalogs is really\n> something I'd like to be able to see easily, and I have seen\n> corruption involving such objects from time to time. I think that we\n> should have a design which is extensible.\n\nOk!\n\n> One thing which is not\n> proposed on this patch, and I am fine with it as a first draft, is\n> that we don't have any information about the broken block number and\n> the file involved. My gut tells me that we'd want a separate view,\n> like pg_stat_checksums_details with one tuple per (dboid, rel, fork,\n> blck) to be complete. But that's just for future work.\n\nThat could indeed be nice.\n\n> For the progress part, we would most likely have a separate view for\n> that as well, as the view should show no rows if there is no operation\n> in progress.\n\nOk.\n\n> The patch looks rather clean to me, I have some comments.\n>\n> - <application>pg_checksums</application>. The exit status is zero if there\n> - are no checksum errors when checking them, and nonzero if at least one\n> - checksum failure is detected. If enabling or disabling checksums, the\n> - exit status is nonzero if the operation failed.\n> + <application>pg_checksums</application>. As a consequence, the\n> + <structname>pg_stat_checksums</structnameview won't reflect this activity.\n> + The exit status is zero if there are no checksum errors when checking them,\n> + and nonzero if at least one checksum failure is detected. 
If enabling or\n> + disabling checksums, the exit status is nonzero if the operation failed.\n>\n> The docs of pg_checksums already clearly state that the cluster needs\n> to be offline, so I am not sure that this addition is necessary.\n\nAgreed, removed.\n\n> @@ -1539,6 +1539,8 @@ pgstat_report_checksum_failures_in_db(Oid dboid,\n> int failurecount)\n>\n> Please note that there is no need to have the list of arguments in the\n> comment block at the top of pgstat_report_checksum_failures_in_db().\n\nIndeed, fixed.\n\n> + if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> + result = 0;\n> + else\n> + result = dbentry->last_checksum_failure;\n> +\n> + if (result == 0)\n> + PG_RETURN_NULL();\n> + else\n> + PG_RETURN_TIMESTAMPTZ(result);\n> +}\n>\n> No need for two ifs here. What about just that?\n> if (NULL)\n> PG_RETURN_NULL();\n> else\n> PG_RETURN_TIMESTAMPTZ(last_checksum_failure);\n\nI do agree, but this is done like this everywhere in pgstatfuncs.c, so\nI think it's better to keep it as-is for consistency.",
"msg_date": "Tue, 2 Apr 2019 07:43:12 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Tue, Apr 02, 2019 at 07:43:12AM +0200, Julien Rouhaud wrote:\n> On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> One thing which is not\n>> proposed on this patch, and I am fine with it as a first draft, is\n>> that we don't have any information about the broken block number and\n>> the file involved. My gut tells me that we'd want a separate view,\n>> like pg_stat_checksums_details with one tuple per (dboid, rel, fork,\n>> blck) to be complete. But that's just for future work.\n> \n> That could indeed be nice.\n\nActually, backpedaling on this one... pg_stat_checksums_details may\nbe a bad idea as we could end up with one row per broken block. If\na corruption is spreading quickly, pgstat would not be able to sustain\nthat number of objects. Having pg_stat_checksums would allow us to\nplug in more data easily based on the last failure state:\n- last relid of failure\n- last fork type of failure\n- last block number of failure.\nNot saying to do that now, but having that in pg_stat_database does\nnot seem very natural to me. And on top of that we would have an\nextra row full of NULLs for shared objects in pg_stat_database if we\nadopt the single-view approach... I find that rather ugly.\n\n>> No need for two ifs here. What about just that?\n>> if (NULL)\n>> PG_RETURN_NULL();\n>> else\n>> PG_RETURN_TIMESTAMPTZ(last_checksum_failure);\n> \n> I do agree, but this is done like this everywhere in pgstatfuncs.c, so\n> I think it's better to keep it as-is for consistency.\n\nOkay, this is not an issue for me.\n\nThe patch looks fine to me as-is. Let's see what Magnus or others have to\nsay about it. I can take care of this open item if necessary but\nthat's not my commit so I'd rather not step on Magnus' toes.\n--\nMichael",
"msg_date": "Tue, 2 Apr 2019 15:47:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Tue, Apr 2, 2019 at 8:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Apr 02, 2019 at 07:43:12AM +0200, Julien Rouhaud wrote:\n> > On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> One thing which is not\n> >> proposed on this patch, and I am fine with it as a first draft, is\n> >> that we don't have any information about the broken block number and\n> >> the file involved. My gut tells me that we'd want a separate view,\n> >> like pg_stat_checksums_details with one tuple per (dboid, rel, fork,\n> >> blck) to be complete. But that's just for future work.\n> >\n> > That could indeed be nice.\n>\n> Actually, backpedaling on this one... pg_stat_checksums_details may\n> be a bad idea as we could finish with one row per broken block. If\n> a corruption is spreading quickly, pgstat would not be able to sustain\n> that amount of objects. Having pg_stat_checksums would allow us to\n> plugin more data easily based on the last failure state:\n> - last relid of failure\n> - last fork type of failure\n> - last block number of failure.\n> Not saying to do that now, but having that in pg_stat_database does\n> not seem very natural to me. And on top of that we would have an\n> extra row full of NULLs for shared objects in pg_stat_database if we\n> adopt the unique view approach... I find that rather ugly.\n>\n\nI think that tracking each and every block is of course a non-starter, as\nyou've noticed.\n\nI'm really not sure how much those three extra fields help, TBH. As I see\nit the real usecase for this is automated monitoring and quick-checks of\nthe kind of \"is my db currently broken somewhere\", in combination with \"did\nthis occur recently\" (for people who have never looked at their stats).\n\nThis gives people enough information to know where to go look in the logs.\n\nI mean, what's the actual usecase for tracking relid/fork/block of the\n*last* failure only? To monitor and see if it changes? 
What do I do when I\nhave 10 failures, and I only know about the last one? (I have to go to the\nlogs anyway)\n\nI think having the count and hte last time make sense, but I'm very\nsceptical about the rest.\n\nI can somewhat agree that splitting it on a per database level might even\nat that be overdoing it. What might actually be more interesting from a\nfailure-location perspective would be tablespace, rather than any of the\nothers. Or we could reduce it down to just putting it in pg_stat_bgwriter\nand only count global values perhaps, if in the end we don't think the\nsplit-per-database is reasonable?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Apr 2, 2019 at 8:47 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Apr 02, 2019 at 07:43:12AM +0200, Julien Rouhaud wrote:\n> On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> One thing which is not\n>> proposed on this patch, and I am fine with it as a first draft, is\n>> that we don't have any information about the broken block number and\n>> the file involved. My gut tells me that we'd want a separate view,\n>> like pg_stat_checksums_details with one tuple per (dboid, rel, fork,\n>> blck) to be complete. But that's just for future work.\n> \n> That could indeed be nice.\n\nActually, backpedaling on this one... pg_stat_checksums_details may\nbe a bad idea as we could finish with one row per broken block. If\na corruption is spreading quickly, pgstat would not be able to sustain\nthat amount of objects. Having pg_stat_checksums would allow us to\nplugin more data easily based on the last failure state:\n- last relid of failure\n- last fork type of failure\n- last block number of failure.\nNot saying to do that now, but having that in pg_stat_database does\nnot seem very natural to me. 
And on top of that we would have an\nextra row full of NULLs for shared objects in pg_stat_database if we\nadopt the unique view approach... I find that rather ugly.I think that tracking each and every block is of course a non-starter, as you've noticed.I'm really not sure how much those three extra fields help, TBH. As I see it the real usecase for this is automated monitoring and quick-checks of the kind of \"is my db currently broken somewhere\", in combination with \"did this occur recently\" (for people who have never looked at their stats).This gives people enough information to know where to go look in the logs.I mean, what's the actual usecase for tracking relid/fork/block of the *last* failure only? To monitor and see if it changes? What do I do when I have 10 failures, and I only know about the last one? (I have to go to the logs anyway)I think having the count and hte last time make sense, but I'm very sceptical about the rest.I can somewhat agree that splitting it on a per database level might even at that be overdoing it. What might actually be more interesting from a failure-location perspective would be tablespace, rather than any of the others. Or we could reduce it down to just putting it in pg_stat_bgwriter and only count global values perhaps, if in the end we don't think the split-per-database is reasonable?-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 2 Apr 2019 19:06:35 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Tue, Apr 02, 2019 at 07:06:35PM +0200, Magnus Hagander wrote:\n> I think having the count and the last time makes sense, but I'm very\n> sceptical about the rest.\n\nThere may be other things which we are not considering on this\nthread. I don't know.\n\n> I can somewhat agree that splitting it on a per database level might even\n> at that be overdoing it. What might actually be more interesting from a\n> failure-location perspective would be tablespace, rather than any of the\n> others. Or we could reduce it down to just putting it in pg_stat_bgwriter\n> and only count global values perhaps, if in the end we don't think the\n> split-per-database is reasonable?\n\nA split per database or per tablespace is I think a very good thing.\nThis helps in tracking down which partitions have gone crazy, and a\nsingle global counter does not allow that.\n--\nMichael",
"msg_date": "Wed, 3 Apr 2019 10:43:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I can somewhat agree that splitting it on a per database level might even\n> > at that be overdoing it. What might actually be more interesting from a\n> > failure-location perspective would be tablespace, rather than any of the\n> > others. Or we could reduce it down to just putting it in pg_stat_bgwriter\n> > and only count global values perhaps, if in the end we don't think the\n> > split-per-database is reasonable?\n>\n> A split per database or per tablespace is I think a very good thing.\n> This helps in tracking down which partitions have gone crazy, and a\n> single global counter does not allow that.\n\nIndeed, a per-tablespace would be much more convenient to track the\nproblem down at the physical level, but we don't have the required\ninfrastructure for that yet, and it seems quite late to add it now.\nIMHO, a per-database has also some value, as it can help to track down\nissues at the application level.\n\nMaybe we could add a new column to the view (for instance \"source\")\nwhich would always be 'database', and we could later add\nper-tablespace counters, keeping the view compatibility.\n\n\n",
"msg_date": "Wed, 3 Apr 2019 10:44:32 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 10:44 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Apr 3, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > > I can somewhat agree that splitting it on a per database level might\n> even\n> > > at that be overdoing it. What might actually be more interesting from a\n> > > failure-location perspective would be tablespace, rather than any of\n> the\n> > > others. Or we could reduce it down to just putting it in\n> pg_stat_bgwriter\n> > > and only count global values perhaps, if in the end we don't think the\n> > > split-per-database is reasonable?\n> >\n> > A split per database or per tablespace is I think a very good thing.\n> > This helps in tracking down which partitions have gone crazy, and a\n> > single global counter does not allow that.\n>\n> Indeed, a per-tablespace would be much more convenient to track the\n> problem down at the physical level, but we don't have the required\n> infrastructure for that yet, and it seems quite late to add it now.\n> IMHO, a per-database has also some value, as it can help to track down\n> issues at the application level.\n>\n> Maybe we could add a new column to the view (for instance \"source\")\n> which would always be 'database', and we could later add\n> per-tablespace counters, keeping the view compatibility.\n>\n\nUgh.\n\nIf we wanted per tablespace counters, shouldn't we have a\npg_stat_tablespace instead? So we'd have a checksum failures counter in\npg_state_database separated by database, and one in pg_stat_tablespace\nseparated by tablespace? 
(Along with probably a bunch of other entries for\ntablespaces)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Apr 3, 2019 at 10:44 AM Julien Rouhaud <rjuju123@gmail.com> wrote:On Wed, Apr 3, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I can somewhat agree that splitting it on a per database level might even\n> > at that be overdoing it. What might actually be more interesting from a\n> > failure-location perspective would be tablespace, rather than any of the\n> > others. Or we could reduce it down to just putting it in pg_stat_bgwriter\n> > and only count global values perhaps, if in the end we don't think the\n> > split-per-database is reasonable?\n>\n> A split per database or per tablespace is I think a very good thing.\n> This helps in tracking down which partitions have gone crazy, and a\n> single global counter does not allow that.\n\nIndeed, a per-tablespace would be much more convenient to track the\nproblem down at the physical level, but we don't have the required\ninfrastructure for that yet, and it seems quite late to add it now.\nIMHO, a per-database has also some value, as it can help to track down\nissues at the application level.\n\nMaybe we could add a new column to the view (for instance \"source\")\nwhich would always be 'database', and we could later add\nper-tablespace counters, keeping the view compatibility.\nUgh.If we wanted per tablespace counters, shouldn't we have a pg_stat_tablespace instead? So we'd have a checksum failures counter in pg_state_database separated by database, and one in pg_stat_tablespace separated by tablespace? (Along with probably a bunch of other entries for tablespaces)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 3 Apr 2019 11:31:24 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 11:31 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Apr 3, 2019 at 10:44 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, Apr 3, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> >\n>> > > I can somewhat agree that splitting it on a per database level might even\n>> > > at that be overdoing it. What might actually be more interesting from a\n>> > > failure-location perspective would be tablespace, rather than any of the\n>> > > others. Or we could reduce it down to just putting it in pg_stat_bgwriter\n>> > > and only count global values perhaps, if in the end we don't think the\n>> > > split-per-database is reasonable?\n>> >\n>> > A split per database or per tablespace is I think a very good thing.\n>> > This helps in tracking down which partitions have gone crazy, and a\n>> > single global counter does not allow that.\n>>\n>> Indeed, a per-tablespace would be much more convenient to track the\n>> problem down at the physical level, but we don't have the required\n>> infrastructure for that yet, and it seems quite late to add it now.\n>> IMHO, a per-database has also some value, as it can help to track down\n>> issues at the application level.\n>>\n>> Maybe we could add a new column to the view (for instance \"source\")\n>> which would always be 'database', and we could later add\n>> per-tablespace counters, keeping the view compatibility.\n>\n>\n> Ugh.\n>\n> If we wanted per tablespace counters, shouldn't we have a pg_stat_tablespace instead? So we'd have a checksum failures counter in pg_state_database separated by database, and one in pg_stat_tablespace separated by tablespace? (Along with probably a bunch of other entries for tablespaces)\n\nBut there's still the problem of reporting errors on shared relation,\nso pg_stat_database doesn't really fit for that. If we go with a\nchecksum centric view, it'd be strange to have some of the counters in\nanother view.\n\n\n",
"msg_date": "Wed, 3 Apr 2019 11:56:14 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n> But there's still the problem of reporting errors on shared relation,\n> so pg_stat_database doesn't really fit for that. If we go with a\n> checksum centric view, it'd be strange to have some of the counters in\n> another view.\n\nHaving pg_stat_database filled with a phantom row full of NULLs to\ntrack checksum failures of shared objects would be confusing I think.\nI personally quite like the separate view approach, with one row per\ndatabase, but one opinion does not stand as an agreement.\n\nAnyway, even if we have no agreement on the shape of what we'd like to\ndo, I don't think that HEAD is in a proper shape now because we just\ndon't track a portion of the objects which could have checksum\nfailures. So we should either revert the patch currently committed,\nor add tracking for shared objects, but definitely not keep the code\nin a state in-between.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 13:22:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 6:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n> > But there's still the problem of reporting errors on shared relation,\n> > so pg_stat_database doesn't really fit for that. If we go with a\n> > checksum centric view, it'd be strange to have some of the counters in\n> > another view.\n>\n> Having pg_stat_database filled with a phantom row full of NULLs to\n> track checksum failures of shared objects would be confusing I think.\n> I personally quite like the separate view approach, with one row per\n> database, but one opinion does not stand as an agreement.\n>\n\nIt wouldn't be just that, but it would make sense to include things like\nblks_read/blks_hit there as well, wouldn't it? As well as read/write time.\nThings we don't track today, but it could be useful to do so.\n\nBut yeah, I'm not strongly in either direction, so if others feel strongly\na separate view is better, then we should do a separate view.\n\n\nAnyway, even if we have no agreement on the shape of what we'd like to\n> do, I don't think that HEAD is in a proper shape now because we just\n> don't track a portion of the objects which could have checksum\n> failures. So we should either revert the patch currently committed,\n> or add tracking for shared objects, but definitely not keep the code\n> in a state in-between.\n>\n\nDefinitely. 
That's why we're discussing it now :) Maybe we should put it on\nthe open items list, because we definitely don't want to ship it one way\nand then change our mind in the next version.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2019 at 6:22 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n> But there's still the problem of reporting errors on shared relation,\n> so pg_stat_database doesn't really fit for that. If we go with a\n> checksum centric view, it'd be strange to have some of the counters in\n> another view.\n\nHaving pg_stat_database filled with a phantom row full of NULLs to\ntrack checksum failures of shared objects would be confusing I think.\nI personally quite like the separate view approach, with one row per\ndatabase, but one opinion does not stand as an agreement.It wouldn't be just that, but it would make sense to include things like blks_read/blks_hit there as well, wouldn't it? As well as read/write time. Things we don't track today, but it could be useful to do so.But yeah, I'm not strongly in either direction, so if others feel strongly a separate view is better, then we should do a separate view.\nAnyway, even if we have no agreement on the shape of what we'd like to\ndo, I don't think that HEAD is in a proper shape now because we just\ndon't track a portion of the objects which could have checksum\nfailures. So we should either revert the patch currently committed,\nor add tracking for shared objects, but definitely not keep the code\nin a state in-between.Definitely. That's why we're discussing it now :) Maybe we should put it on the open items list, because we definitely don't want to ship it one way and then change our mind in the next version.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 4 Apr 2019 10:25:16 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 10:25 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Thu, Apr 4, 2019 at 6:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n>> > But there's still the problem of reporting errors on shared relation,\n>> > so pg_stat_database doesn't really fit for that. If we go with a\n>> > checksum centric view, it'd be strange to have some of the counters in\n>> > another view.\n>>\n>> Having pg_stat_database filled with a phantom row full of NULLs to\n>> track checksum failures of shared objects would be confusing I think.\n>> I personally quite like the separate view approach, with one row per\n>> database, but one opinion does not stand as an agreement.\n>\n> It wouldn't be just that, but it would make sense to include things like blks_read/blks_hit there as well, wouldn't it? As well as read/write time. Things we don't track today, but it could be useful to do so.\n\nActually we do track counters for shared relations (see\npgstat_report_stat), we just don't expose them in any view. But it's\nstill possible to get the counters manually:\n\n# select pg_stat_get_db_blocks_hit(0);\n pg_stat_get_db_blocks_hit\n---------------------------\n 2710329\n(1 row)\n\nMy main concern is that pg_stat_get_db_numbackends(0) report something\nlike the total number of backend (though it seems that there's an\nextra connection accounted for, I don't know which process it's), so\nif we expose it in pg_stat_database, sum(numbackends) won't make sense\nanymore.\n\n>> Anyway, even if we have no agreement on the shape of what we'd like to\n>> do, I don't think that HEAD is in a proper shape now because we just\n>> don't track a portion of the objects which could have checksum\n>> failures. So we should either revert the patch currently committed,\n>> or add tracking for shared objects, but definitely not keep the code\n>> in a state in-between.\n>\n>\n> Definitely. 
That's why we're discussing it now :) Maybe we should put it on the open items list, because we definitely don't want to ship it one way and then change our mind in the next version.\n\nI already added an open item for that.\n\n\n",
"msg_date": "Thu, 4 Apr 2019 10:47:37 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 10:47 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Apr 4, 2019 at 10:25 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Thu, Apr 4, 2019 at 6:22 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >>\n> >> On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n> >> > But there's still the problem of reporting errors on shared relation,\n> >> > so pg_stat_database doesn't really fit for that. If we go with a\n> >> > checksum centric view, it'd be strange to have some of the counters in\n> >> > another view.\n> >>\n> >> Having pg_stat_database filled with a phantom row full of NULLs to\n> >> track checksum failures of shared objects would be confusing I think.\n> >> I personally quite like the separate view approach, with one row per\n> >> database, but one opinion does not stand as an agreement.\n> >\n> > It wouldn't be just that, but it would make sense to include things like\n> blks_read/blks_hit there as well, wouldn't it? As well as read/write time.\n> Things we don't track today, but it could be useful to do so.\n>\n> Actually we do track counters for shared relations (see\n> pgstat_report_stat), we just don't expose them in any view. But it's\n> still possible to get the counters manually:\n>\n> # select pg_stat_get_db_blocks_hit(0);\n> pg_stat_get_db_blocks_hit\n> ---------------------------\n> 2710329\n> (1 row)\n>\n\nOh, right, we do actually collect it, we just don't show is. So that's\nanother argument *for* having it in pg_stat_database. 
Or at least not for\nhaving it in a checksum specific view, because then we should really make a\nseparate view for this as well.\n\n\n\nMy main concern is that pg_stat_get_db_numbackends(0) report something\n> like the total number of backend (though it seems that there's an\n> extra connection accounted for, I don't know which process it's), so\n> if we expose it in pg_stat_database, sum(numbackends) won't make sense\n> anymore.\n>\n\nWe could also just hardcoded it so that one always shows 0?\n\n\n>> Anyway, even if we have no agreement on the shape of what we'd like to\n> >> do, I don't think that HEAD is in a proper shape now because we just\n> >> don't track a portion of the objects which could have checksum\n> >> failures. So we should either revert the patch currently committed,\n> >> or add tracking for shared objects, but definitely not keep the code\n> >> in a state in-between.\n> >\n> >\n> > Definitely. That's why we're discussing it now :) Maybe we should put it\n> on the open items list, because we definitely don't want to ship it one way\n> and then change our mind in the next version.\n>\n> I already added an open item for that.\n>\n\nGood.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2019 at 10:47 AM Julien Rouhaud <rjuju123@gmail.com> wrote:On Thu, Apr 4, 2019 at 10:25 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Thu, Apr 4, 2019 at 6:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Wed, Apr 03, 2019 at 11:56:14AM +0200, Julien Rouhaud wrote:\n>> > But there's still the problem of reporting errors on shared relation,\n>> > so pg_stat_database doesn't really fit for that. 
If we go with a\n>> > checksum centric view, it'd be strange to have some of the counters in\n>> > another view.\n>>\n>> Having pg_stat_database filled with a phantom row full of NULLs to\n>> track checksum failures of shared objects would be confusing I think.\n>> I personally quite like the separate view approach, with one row per\n>> database, but one opinion does not stand as an agreement.\n>\n> It wouldn't be just that, but it would make sense to include things like blks_read/blks_hit there as well, wouldn't it? As well as read/write time. Things we don't track today, but it could be useful to do so.\n\nActually we do track counters for shared relations (see\npgstat_report_stat), we just don't expose them in any view. But it's\nstill possible to get the counters manually:\n\n# select pg_stat_get_db_blocks_hit(0);\n pg_stat_get_db_blocks_hit\n---------------------------\n 2710329\n(1 row)Oh, right, we do actually collect it, we just don't show is. So that's another argument *for* having it in pg_stat_database. Or at least not for having it in a checksum specific view, because then we should really make a separate view for this as well.\nMy main concern is that pg_stat_get_db_numbackends(0) report something\nlike the total number of backend (though it seems that there's an\nextra connection accounted for, I don't know which process it's), so\nif we expose it in pg_stat_database, sum(numbackends) won't make sense\nanymore.We could also just hardcoded it so that one always shows 0?\n>> Anyway, even if we have no agreement on the shape of what we'd like to\n>> do, I don't think that HEAD is in a proper shape now because we just\n>> don't track a portion of the objects which could have checksum\n>> failures. So we should either revert the patch currently committed,\n>> or add tracking for shared objects, but definitely not keep the code\n>> in a state in-between.\n>\n>\n> Definitely. 
That's why we're discussing it now :) Maybe we should put it on the open items list, because we definitely don't want to ship it one way and then change our mind in the next version.\n\nI already added an open item for that.\nGood.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 4 Apr 2019 13:24:56 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 1:25 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Thu, Apr 4, 2019 at 10:47 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> Actually we do track counters for shared relations (see\n>> pgstat_report_stat), we just don't expose them in any view. But it's\n>> still possible to get the counters manually:\n>>\n>> # select pg_stat_get_db_blocks_hit(0);\n>> pg_stat_get_db_blocks_hit\n>> ---------------------------\n>> 2710329\n>> (1 row)\n>\n>\n> Oh, right, we do actually collect it, we just don't show is. So that's another argument *for* having it in pg_stat_database. Or at least not for having it in a checksum specific view, because then we should really make a separate view for this as well.\n\nOk, so let's expose all the shared counters in pg_stat_database and\nremove the pg_stat_checksum view.\n\n>> My main concern is that pg_stat_get_db_numbackends(0) report something\n>> like the total number of backend (though it seems that there's an\n>> extra connection accounted for, I don't know which process it's), so\n>> if we expose it in pg_stat_database, sum(numbackends) won't make sense\n>> anymore.\n>\n> We could also just hardcoded it so that one always shows 0?\n\nThat's a bit hacky, but that's probably the best compromise. Attached\nv4 with all those changes.",
"msg_date": "Thu, 4 Apr 2019 14:53:55 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 2:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Apr 4, 2019 at 1:25 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > On Thu, Apr 4, 2019 at 10:47 AM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >>\n> >> Actually we do track counters for shared relations (see\n> >> pgstat_report_stat), we just don't expose them in any view. But it's\n> >> still possible to get the counters manually:\n> >>\n> >> # select pg_stat_get_db_blocks_hit(0);\n> >> pg_stat_get_db_blocks_hit\n> >> ---------------------------\n> >> 2710329\n> >> (1 row)\n> >\n> >\n> > Oh, right, we do actually collect it, we just don't show is. So that's\n> another argument *for* having it in pg_stat_database. Or at least not for\n> having it in a checksum specific view, because then we should really make a\n> separate view for this as well.\n>\n> Ok, so let's expose all the shared counters in pg_stat_database and\n> remove the pg_stat_checksum view.\n>\n> >> My main concern is that pg_stat_get_db_numbackends(0) report something\n> >> like the total number of backend (though it seems that there's an\n> >> extra connection accounted for, I don't know which process it's), so\n> >> if we expose it in pg_stat_database, sum(numbackends) won't make sense\n> >> anymore.\n> >\n> > We could also just hardcoded it so that one always shows 0?\n>\n> That's a bit hacky, but that's probably the best compromise. Attached\n> v4 with all those changes.\n>\n\nI'm not sure I like the idea of using \"<shared_objects>\" as the database\nname. It's not very likely that somebody would be using that as a name for\ntheir database, but i's not impossible. 
But it also just looks strrange.\nWouldn't NULL be a more appropriate choice?\n\nLikewise, shouldn't we return NULL as the number of backends for the shared\ncounters, rather than 0?\n\nMicro-nit:\n+ <entry>Time at which the last data page checksum failures was\ndetected in\ns/failures/failure/\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2019 at 2:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Thu, Apr 4, 2019 at 1:25 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Thu, Apr 4, 2019 at 10:47 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> Actually we do track counters for shared relations (see\n>> pgstat_report_stat), we just don't expose them in any view. But it's\n>> still possible to get the counters manually:\n>>\n>> # select pg_stat_get_db_blocks_hit(0);\n>> pg_stat_get_db_blocks_hit\n>> ---------------------------\n>> 2710329\n>> (1 row)\n>\n>\n> Oh, right, we do actually collect it, we just don't show is. So that's another argument *for* having it in pg_stat_database. Or at least not for having it in a checksum specific view, because then we should really make a separate view for this as well.\n\nOk, so let's expose all the shared counters in pg_stat_database and\nremove the pg_stat_checksum view.\n\n>> My main concern is that pg_stat_get_db_numbackends(0) report something\n>> like the total number of backend (though it seems that there's an\n>> extra connection accounted for, I don't know which process it's), so\n>> if we expose it in pg_stat_database, sum(numbackends) won't make sense\n>> anymore.\n>\n> We could also just hardcoded it so that one always shows 0?\n\nThat's a bit hacky, but that's probably the best compromise. Attached\nv4 with all those changes.\nI'm not sure I like the idea of using \"<shared_objects>\" as the database name. 
It's not very likely that somebody would be using that as a name for their database, but i's not impossible. But it also just looks strrange. Wouldn't NULL be a more appropriate choice?Likewise, shouldn't we return NULL as the number of backends for the shared counters, rather than 0?Micro-nit:+ <entry>Time at which the last data page checksum failures was detected ins/failures/failure/-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 7 Apr 2019 16:36:14 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Thanks for looking it it!\n\nOn Sun, Apr 7, 2019 at 4:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> I'm not sure I like the idea of using \"<shared_objects>\" as the database name. It's not very likely that somebody would be using that as a name for their database, but i's not impossible. But it also just looks strrange. Wouldn't NULL be a more appropriate choice?\n>\n> Likewise, shouldn't we return NULL as the number of backends for the shared counters, rather than 0?\nI wanted to make things more POLA-compliant, but maybe it was a bad\nidea. I changed it for NULL here and for numbackends.\n\n> Micro-nit:\n> + <entry>Time at which the last data page checksum failures was detected in\n> s/failures/failure/\n\nOops.\n\nv5 attached.",
"msg_date": "Sun, 7 Apr 2019 18:29:50 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sun, Apr 7, 2019 at 6:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Thanks for looking it it!\n>\n> On Sun, Apr 7, 2019 at 4:36 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> > I'm not sure I like the idea of using \"<shared_objects>\" as the database\n> name. It's not very likely that somebody would be using that as a name for\n> their database, but i's not impossible. But it also just looks strrange.\n> Wouldn't NULL be a more appropriate choice?\n> >\n> > Likewise, shouldn't we return NULL as the number of backends for the\n> shared counters, rather than 0?\n> I wanted to make things more POLA-compliant, but maybe it was a bad\n> idea. I changed it for NULL here and for numbackends.\n>\n> > Micro-nit:\n> > + <entry>Time at which the last data page checksum failures was\n> detected in\n> > s/failures/failure/\n>\n> Oops.\n>\n> v5 attached.\n>\n\nThanks. Pushed!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, Apr 7, 2019 at 6:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Thanks for looking it it!\n\nOn Sun, Apr 7, 2019 at 4:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> I'm not sure I like the idea of using \"<shared_objects>\" as the database name. It's not very likely that somebody would be using that as a name for their database, but i's not impossible. But it also just looks strrange. Wouldn't NULL be a more appropriate choice?\n>\n> Likewise, shouldn't we return NULL as the number of backends for the shared counters, rather than 0?\nI wanted to make things more POLA-compliant, but maybe it was a bad\nidea. I changed it for NULL here and for numbackends.\n\n> Micro-nit:\n> + <entry>Time at which the last data page checksum failures was detected in\n> s/failures/failure/\n\nOops.\n\nv5 attached.\nThanks. 
Pushed!-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 12 Apr 2019 14:18:11 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 2:18 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sun, Apr 7, 2019 at 6:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> v5 attached.\n>\n> Thanks. Pushed!\n\nThanks!\n\n\n",
"msg_date": "Fri, 12 Apr 2019 15:54:23 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "I started looking at this the other night but I see Magnus beat me in\ncommitting it...\n\nOn Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander <magnus@hagander.net> wrote:\n> On Sun, Apr 7, 2019 at 6:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> Thanks for looking it it!\n>> On Sun, Apr 7, 2019 at 4:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> >\n>> > I'm not sure I like the idea of using \"<shared_objects>\" as the database name. It's not very likely that somebody would be using that as a name for their database, but i's not impossible. But it also just looks strrange. Wouldn't NULL be a more appropriate choice?\n>> >\n>> > Likewise, shouldn't we return NULL as the number of backends for the shared counters, rather than 0?\n>> I wanted to make things more POLA-compliant, but maybe it was a bad\n>> idea. I changed it for NULL here and for numbackends.\n>>\n\nISTM the argument here is go with zero since you have zero connections\nvs go with null since you can't actually connect, so it doesn't make\nsense. (There is a third argument about making it -1 since you can't\nconnect, but that breaks sum(numbackends) so it's easily dismissed.) I\nthink I would have gone for 0 personally, but what ended up surprising\nme was that a bunch of other stuff like xact_commit show zero when\nAFAICT the above reasoning would apply the same to those columns.\n(unless there is a way to commit a transaction in the global objects\nthat I don't know about).\n\n>> > Micro-nit:\n>> > + <entry>Time at which the last data page checksum failures was detected in\n>> > s/failures/failure/\n>>\n>> Oops.\n>>\n>> v5 attached.\n>\n\nWhat originally got me looking at this was the idea of returning -1\n(or maybe null) for checksum failures for cases when checksums are not\nenabled. 
This seems a little more complicated to set up, but seems\nlike it might ward off people thinking they are safe due to no\nchecksum error reports when they actually aren't.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sat, 13 Apr 2019 14:46:27 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 8:46 PM Robert Treat <rob@xzilla.net> wrote:\n\n>\n> On Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > On Sun, Apr 7, 2019 at 6:28 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >> Thanks for looking it it!\n> >> On Sun, Apr 7, 2019 at 4:36 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> >\n> >> > I'm not sure I like the idea of using \"<shared_objects>\" as the\n> database name. It's not very likely that somebody would be using that as a\n> name for their database, but i's not impossible. But it also just looks\n> strrange. Wouldn't NULL be a more appropriate choice?\n> >> >\n> >> > Likewise, shouldn't we return NULL as the number of backends for the\n> shared counters, rather than 0?\n> >> I wanted to make things more POLA-compliant, but maybe it was a bad\n> >> idea. I changed it for NULL here and for numbackends.\n> >>\n>\n> ISTM the argument here is go with zero since you have zero connections\n> vs go with null since you can't actually connect, so it doesn't make\n> sense. (There is a third argument about making it -1 since you can't\n> connect, but that breaks sum(numbackends) so it's easily dismissed.) I\n> think I would have gone for 0 personally, but what ended up surprising\n> me was that a bunch of other stuff like xact_commit show zero when\n> AFAICT the above reasoning would apply the same to those columns.\n> (unless there is a way to commit a transaction in the global objects\n> that I don't know about).\n>\n\nThat's a good point. I mean, you can commit a transaction that involves\nchanges of global objects, but it counts in the database that you were\nconneced to.\n\nWe should probably at least make it consistent and make it NULL in all or 0\nin all.\n\nI'm -1 for using -1 (!), for the very reason that you mention. But either\nchanging the numbackends to 0, or the others to NULL would work for\nconsistency. 
I'm leaning towards the 0 as well.\n\n\n>> > Micro-nit:\n> >> > + <entry>Time at which the last data page checksum failures was\n> detected in\n> >> > s/failures/failure/\n> >>\n> >> Oops.\n> >>\n> >> v5 attached.\n> >\n>\n> What originally got me looking at this was the idea of returning -1\n> (or maybe null) for checksum failures for cases when checksums are not\n> enabled. This seems a little more complicated to set up, but seems\n> like it might ward off people thinking they are safe due to no\n> checksum error reports when they actually aren't.\n>\n\nNULL seems like the reasonable thing to return there. I'm not sure what\nyou're referring to with a little more complicated to set up, thought? Do\nyou mean somehow for the end user?\n\nCode-wise it seems it should be simple -- just do an \"if checksums disabled\nthen return null\" in the two functions.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sun, 14 Apr 2019 19:12:10 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Sorry for late reply,\n\nOn Sun, Apr 14, 2019 at 7:12 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Apr 13, 2019 at 8:46 PM Robert Treat <rob@xzilla.net> wrote:\n>>\n>> On Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> ISTM the argument here is go with zero since you have zero connections\n>> vs go with null since you can't actually connect, so it doesn't make\n>> sense. (There is a third argument about making it -1 since you can't\n>> connect, but that breaks sum(numbackends) so it's easily dismissed.) I\n>> think I would have gone for 0 personally, but what ended up surprising\n>> me was that a bunch of other stuff like xact_commit show zero when\n>> AFAICT the above reasoning would apply the same to those columns.\n>> (unless there is a way to commit a transaction in the global objects\n>> that I don't know about).\n>\n>\n> That's a good point. I mean, you can commit a transaction that involves changes of global objects, but it counts in the database that you were conneced to.\n>\n> We should probably at least make it consistent and make it NULL in all or 0 in all.\n>\n> I'm -1 for using -1 (!), for the very reason that you mention. But either changing the numbackends to 0, or the others to NULL would work for consistency. I'm leaning towards the 0 as well.\n\n+1 for 0 :) Especially since it's less code in the view.\n\n>> What originally got me looking at this was the idea of returning -1\n>> (or maybe null) for checksum failures for cases when checksums are not\n>> enabled. This seems a little more complicated to set up, but seems\n>> like it might ward off people thinking they are safe due to no\n>> checksum error reports when they actually aren't.\n>\n>\n> NULL seems like the reasonable thing to return there. I'm not sure what you're referring to with a little more complicated to set up, thought? 
Do you mean somehow for the end user?\n>\n> Code-wise it seems it should be simple -- just do an \"if checksums disabled then return null\" in the two functions.\n\nThat's indeed a good point! Lack of checksum error is distinct from\nchecksums not activated and we should make it obvious.\n\nI don't know if that counts as an open item, but I attach a patch for\nall points discussed here.",
"msg_date": "Mon, 15 Apr 2019 21:31:54 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 3:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Sorry for late reply,\n>\n> On Sun, Apr 14, 2019 at 7:12 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sat, Apr 13, 2019 at 8:46 PM Robert Treat <rob@xzilla.net> wrote:\n> >>\n> >> On Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >> ISTM the argument here is go with zero since you have zero connections\n> >> vs go with null since you can't actually connect, so it doesn't make\n> >> sense. (There is a third argument about making it -1 since you can't\n> >> connect, but that breaks sum(numbackends) so it's easily dismissed.) I\n> >> think I would have gone for 0 personally, but what ended up surprising\n> >> me was that a bunch of other stuff like xact_commit show zero when\n> >> AFAICT the above reasoning would apply the same to those columns.\n> >> (unless there is a way to commit a transaction in the global objects\n> >> that I don't know about).\n> >\n> >\n> > That's a good point. I mean, you can commit a transaction that involves changes of global objects, but it counts in the database that you were conneced to.\n> >\n> > We should probably at least make it consistent and make it NULL in all or 0 in all.\n> >\n> > I'm -1 for using -1 (!), for the very reason that you mention. But either changing the numbackends to 0, or the others to NULL would work for consistency. I'm leaning towards the 0 as well.\n>\n> +1 for 0 :) Especially since it's less code in the view.\n>\n\n+1 for 0\n\n> >> What originally got me looking at this was the idea of returning -1\n> >> (or maybe null) for checksum failures for cases when checksums are not\n> >> enabled. This seems a little more complicated to set up, but seems\n> >> like it might ward off people thinking they are safe due to no\n> >> checksum error reports when they actually aren't.\n> >\n> >\n> > NULL seems like the reasonable thing to return there. 
I'm not sure what you're referring to with a little more complicated to set up, thought? Do you mean somehow for the end user?\n> >\n> > Code-wise it seems it should be simple -- just do an \"if checksums disabled then return null\" in the two functions.\n>\n> That's indeed a good point! Lack of checksum error is distinct from\n> checksums not activated and we should make it obvious.\n>\n> I don't know if that counts as an open item, but I attach a patch for\n> all points discussed here.\n\nISTM we should mention shared objects in both places in the docs, and\nwant \"NULL if data checksums\" rather than \"NULL is data checksums\".\nAttaching slightly modified patch with those changes, but otherwise\nLGTM.\n\nRobert Treat\nhttps://xzilla.net",
"msg_date": "Tue, 16 Apr 2019 11:38:49 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 5:39 PM Robert Treat <rob@xzilla.net> wrote:\n\n> On Mon, Apr 15, 2019 at 3:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Sorry for late reply,\n> >\n> > On Sun, Apr 14, 2019 at 7:12 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > >\n> > > On Sat, Apr 13, 2019 at 8:46 PM Robert Treat <rob@xzilla.net> wrote:\n> > >>\n> > >> On Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > >> ISTM the argument here is go with zero since you have zero connections\n> > >> vs go with null since you can't actually connect, so it doesn't make\n> > >> sense. (There is a third argument about making it -1 since you can't\n> > >> connect, but that breaks sum(numbackends) so it's easily dismissed.) I\n> > >> think I would have gone for 0 personally, but what ended up surprising\n> > >> me was that a bunch of other stuff like xact_commit show zero when\n> > >> AFAICT the above reasoning would apply the same to those columns.\n> > >> (unless there is a way to commit a transaction in the global objects\n> > >> that I don't know about).\n> > >\n> > >\n> > > That's a good point. I mean, you can commit a transaction that\n> involves changes of global objects, but it counts in the database that you\n> were conneced to.\n> > >\n> > > We should probably at least make it consistent and make it NULL in all\n> or 0 in all.\n> > >\n> > > I'm -1 for using -1 (!), for the very reason that you mention. But\n> either changing the numbackends to 0, or the others to NULL would work for\n> consistency. I'm leaning towards the 0 as well.\n> >\n> > +1 for 0 :) Especially since it's less code in the view.\n> >\n>\n> +1 for 0\n>\n> > >> What originally got me looking at this was the idea of returning -1\n> > >> (or maybe null) for checksum failures for cases when checksums are not\n> > >> enabled. 
This seems a little more complicated to set up, but seems\n> > >> like it might ward off people thinking they are safe due to no\n> > >> checksum error reports when they actually aren't.\n> > >\n> > >\n> > > NULL seems like the reasonable thing to return there. I'm not sure\n> what you're referring to with a little more complicated to set up, thought?\n> Do you mean somehow for the end user?\n> > >\n> > > Code-wise it seems it should be simple -- just do an \"if checksums\n> disabled then return null\" in the two functions.\n> >\n> > That's indeed a good point! Lack of checksum error is distinct from\n> > checksums not activated and we should make it obvious.\n> >\n> > I don't know if that counts as an open item, but I attach a patch for\n> > all points discussed here.\n>\n> ISTM we should mention shared objects in both places in the docs, and\n> want \"NULL if data checksums\" rather than \"NULL is data checksums\".\n> Attaching slightly modified patch with those changes, but otherwise\n> LGTM.\n>\n\n Interestingly enough, that patch comes out as corrupt. I have no idea why\nthough :) v1 is fine.\n\nSo I tried merging back your changes into it, and then pushing. Please\ndoublecheck I didn't miss something :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 17 Apr 2019 13:55:14 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 1:55 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Apr 16, 2019 at 5:39 PM Robert Treat <rob@xzilla.net> wrote:\n>>\n>> On Mon, Apr 15, 2019 at 3:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > I don't know if that counts as an open item, but I attach a patch for\n>> > all points discussed here.\n>>\n>> ISTM we should mention shared objects in both places in the docs, and\n>> want \"NULL if data checksums\" rather than \"NULL is data checksums\".\n>> Attaching slightly modified patch with those changes, but otherwise\n>> LGTM.\n\nThanks, those are indeed embarrassing typos. And agreed on mentioning\nshared objects in both places.\n\n>\n> Interestingly enough, that patch comes out as corrupt. I have no idea why though :) v1 is fine.\n>\n> So I tried merging back your changes into it, and then pushing. Please doublecheck I didn't miss something :)\n\nThanks! I double checked and it all looks fine.\n\n\n",
"msg_date": "Wed, 17 Apr 2019 15:07:04 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 9:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Apr 17, 2019 at 1:55 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Tue, Apr 16, 2019 at 5:39 PM Robert Treat <rob@xzilla.net> wrote:\n> >>\n> >> On Mon, Apr 15, 2019 at 3:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> >\n> >> > I don't know if that counts as an open item, but I attach a patch for\n> >> > all points discussed here.\n> >>\n> >> ISTM we should mention shared objects in both places in the docs, and\n> >> want \"NULL if data checksums\" rather than \"NULL is data checksums\".\n> >> Attaching slightly modified patch with those changes, but otherwise\n> >> LGTM.\n>\n> Thanks, that's indeed embarassing typos. And agreed for mentioning\n> shared objects in both places.\n>\n> >\n> > Interestingly enough, that patch comes out as corrupt. I have no idea why though :) v1 is fine.\n> >\n> > So I tried merging back your changes into it, and then pushing. Please doublecheck I didn't miss something :)\n>\n> Thanks! I double checked and it all looks fine.\n\n+1\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 17 Apr 2019 09:51:25 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "\n\nOn 4/2/19 7:06 PM, Magnus Hagander wrote:\n> On Tue, Apr 2, 2019 at 8:47 AM Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> wrote:\n> \n> On Tue, Apr 02, 2019 at 07:43:12AM +0200, Julien Rouhaud wrote:\n> > On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> wrote:\n> >> One thing which is not\n> >> proposed on this patch, and I am fine with it as a first draft, is\n> >> that we don't have any information about the broken block number and\n> >> the file involved. My gut tells me that we'd want a separate view,\n> >> like pg_stat_checksums_details with one tuple per (dboid, rel, fork,\n> >> blck) to be complete. But that's just for future work.\n> >\n> > That could indeed be nice.\n> \n> Actually, backpedaling on this one... pg_stat_checksums_details may\n> be a bad idea as we could finish with one row per broken block. If\n> a corruption is spreading quickly, pgstat would not be able to sustain\n> that amount of objects. Having pg_stat_checksums would allow us to\n> plugin more data easily based on the last failure state:\n> - last relid of failure\n> - last fork type of failure\n> - last block number of failure.\n> Not saying to do that now, but having that in pg_stat_database does\n> not seem very natural to me. And on top of that we would have an\n> extra row full of NULLs for shared objects in pg_stat_database if we\n> adopt the unique view approach... I find that rather ugly.\n> \n> \n> I think that tracking each and every block is of course a non-starter, as you've noticed.\n\nI think that's less of a concern now that the stats collector process has gone and that the stats are now collected in shared memory, what do you think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 14:34:40 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 2:35 PM Drouvot, Bertrand <\nbertranddrouvot.pg@gmail.com> wrote:\n\n>\n>\n> On 4/2/19 7:06 PM, Magnus Hagander wrote:\n> > On Tue, Apr 2, 2019 at 8:47 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n> >\n> > On Tue, Apr 02, 2019 at 07:43:12AM +0200, Julien Rouhaud wrote:\n> > > On Tue, Apr 2, 2019 at 6:56 AM Michael Paquier <\n> michael@paquier.xyz <mailto:michael@paquier.xyz>> wrote:\n> > >> One thing which is not\n> > >> proposed on this patch, and I am fine with it as a first draft,\n> is\n> > >> that we don't have any information about the broken block number\n> and\n> > >> the file involved. My gut tells me that we'd want a separate\n> view,\n> > >> like pg_stat_checksums_details with one tuple per (dboid, rel,\n> fork,\n> > >> blck) to be complete. But that's just for future work.\n> > >\n> > > That could indeed be nice.\n> >\n> > Actually, backpedaling on this one... pg_stat_checksums_details may\n> > be a bad idea as we could finish with one row per broken block. If\n> > a corruption is spreading quickly, pgstat would not be able to\n> sustain\n> > that amount of objects. Having pg_stat_checksums would allow us to\n> > plugin more data easily based on the last failure state:\n> > - last relid of failure\n> > - last fork type of failure\n> > - last block number of failure.\n> > Not saying to do that now, but having that in pg_stat_database does\n> > not seem very natural to me. And on top of that we would have an\n> > extra row full of NULLs for shared objects in pg_stat_database if we\n> > adopt the unique view approach... 
I find that rather ugly.\n> >\n> >\n> > I think that tracking each and every block is of course a non-starter,\n> as you've noticed.\n>\n> I think that's less of a concern now that the stats collector process has\n> gone and that the stats are now collected in shared memory, what do you\n> think?\n>\n\nIt would be less of a concern yes, but I think it still would be a concern.\nIf you have a large amount of corruption you could quickly get to millions\nof rows to keep track of which would definitely be a problem in shared\nmemory as well, wouldn't it?\n\nBut perhaps we could keep a list of \"the last 100 checksum failures\" or\nsomething like that?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sun, 11 Dec 2022 21:18:42 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 09:18:42PM +0100, Magnus Hagander wrote:\n> It would be less of a concern yes, but I think it still would be a concern.\n> If you have a large amount of corruption you could quickly get to millions\n> of rows to keep track of which would definitely be a problem in shared\n> memory as well, wouldn't it?\n\nYes. I have discussed this item with Bertrand off-list and I share\nthe same concern. This would lead to a lot of extra workload on a\nlarge seqscan for a corrupted relation when the stats are written\n(shutdown delay) while bloating shared memory with potentially\nmillions of items even if variable lists are handled through a dshash\nand DSM.\n\n> But perhaps we could keep a list of \"the last 100 checksum failures\" or\n> something like that?\n\nApplying a threshold is one solution. Now, a second thing I have seen\nin the past is that some disk partitions were busted but not others,\nand the current database-level counters are not enough to make a\ndifference when it comes to grabbing patterns in this area. A list of the\nlast N failures may be able to show some pattern, but that would be\nlike analyzing things with a lot of noise without a clear conclusion.\nAnyway, the workload caused by the threshold number had better be\nmeasured before being decided (large set of relation files with a full\nrange of blocks corrupted, much better if these are in the OS cache\nwhen scanned), which does not change the need of a benchmark.\n\nWhat about just adding a counter tracking the number of checksum\nfailures for relfilenodes in a new structure related to them (note\nthat I did not write PgStat_StatTabEntry)?\n\nIf we do that, it is then possible to cross-check the failures with\ntablespaces, which would point to disk areas that are more sensitive\nto corruption.\n--\nMichael",
"msg_date": "Mon, 12 Dec 2022 08:40:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-12 08:40:04 +0900, Michael Paquier wrote:\n> What about just adding a counter tracking the number of checksum\n> failures for relfilenodes in a new structure related to them (note\n> that I did not write PgStat_StatTabEntry)?\n\nWhy were you thinking of tracking it separately from PgStat_StatTabEntry?\n\nI think there's a good argument for starting to track some stats based on the\nrelfilenode, rather than the oid, because it'd allow us to track e.g. the number of\nwrites for a relation too (we don't have the oid when writing out\nbuffers). But that's a relatively large change...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 11 Dec 2022 16:51:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 04:51:49PM -0800, Andres Freund wrote:\n> Why were you thinking of tracking it separately from PgStat_StatTabEntry?\n\nWe only know the relfilenode when loading the page on a checksum\nfailure, not its parent relation, and there are things like physical\nbase backups where we would not know them anyway because we may not be\nconnected to a database. Or perhaps it would be possible to link\ntable entries with their relfilenodes using some tweaks in the stat\nAPIs? I am sure that you know the business in this area better than I\ndo currently :)\n\n> I think there's a good argument for starting to track some stats based on the\n> relfilenode, rather the oid, because it'd allow us to track e.g. the number of\n> writes for a relation too (we don't have the oid when writing out\n> buffers). But that's a relatively large change...\n\nYeah. I was thinking along the lines of sync requests and sync\nfailures, as well.\n--\nMichael",
"msg_date": "Mon, 12 Dec 2022 10:08:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Dec 11, 2022 at 04:51:49PM -0800, Andres Freund wrote:\n>> I think there's a good argument for starting to track some stats based on the\n>> relfilenode, rather the oid, because it'd allow us to track e.g. the number of\n>> writes for a relation too (we don't have the oid when writing out\n>> buffers). But that's a relatively large change...\n\n> Yeah. I was thinking among the lines of sync requests and sync\n> failures, as well.\n\nI think a stats table indexed solely by relfilenode wouldn't be a great\nidea, because you'd lose all the stats about a table as soon as you\ndo vacuum full/cluster/rewriting-alter-table/etc. Can we create two\nindex structures over the same stats table entries, so you can look\nup by either relfilenode or OID? I'm not quite sure how to manage\nrewrites, where you transiently have two relfilenodes for \"the\nsame\" table ... maybe we could allow multiple pointers to the same\nstats entry??\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Dec 2022 20:48:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 08:48:15PM -0500, Tom Lane wrote:\n> I think a stats table indexed solely by relfilenode wouldn't be a great\n> idea, because you'd lose all the stats about a table as soon as you\n> do vacuum full/cluster/rewriting-alter-table/etc. Can we create two\n> index structures over the same stats table entries, so you can look\n> up by either relfilenode or OID? I'm not quite sure how to manage\n> rewrites, where you transiently have two relfilenodes for \"the\n> same\" table ... maybe we could allow multiple pointers to the same\n> stats entry??\n\nFWIW, I am not sure that I would care much if we were to drop the\nstats associated to a relfilenode on a rewrite. In terms of checksum\nfailures, tuples are deformed so if there is one checksum failure a\nrewrite would just not happen. The potential complexity is not really\nappealing compared to the implementation simplicity and its gains, and\nrewrites are lock-heavy so I'd like to think that people avoid them\n(cough)..\n--\nMichael",
"msg_date": "Mon, 12 Dec 2022 13:09:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "\n\nOn 12/12/22 12:40 AM, Michael Paquier wrote:\n> On Sun, Dec 11, 2022 at 09:18:42PM +0100, Magnus Hagander wrote:\n>> It would be less of a concern yes, but I think it still would be a concern.\n>> If you have a large amount of corruption you could quickly get to millions\n>> of rows to keep track of which would definitely be a problem in shared\n>> memory as well, wouldn't it?\n> \n> Yes. I have discussed this item with Bertrand off-list and I share\n> the same concern. This would lead to an lot of extra workload on a\n> large seqscan for a corrupted relation when the stats are written\n> (shutdown delay) while bloating shared memory with potentially\n> millions of items even if variable lists are handled through a dshash\n> and DSM.\n> \n>> But perhaps we could keep a list of \"the last 100 checksum failures\" or\n>> something like that?\n> \n> Applying a threshold is one solution. Now, a second thing I have seen\n> in the past is that some disk partitions were busted but not others,\n> and the current database-level counters are not enough to make a\n> difference when it comes to grab patterns in this area. A list of the\n> last N failures may be able to show some pattern, but that would be\n> like analyzing things with a lot of noise without a clear conclusion.\n> Anyway, the workload caused by the threshold number had better be\n> measured before being decided (large set of relation files with a full\n> range of blocks corrupted, much better if these are in the OS cache\n> when scanned), which does not change the need of a benchmark.\n> \n> What about just adding a counter tracking the number of checksum\n> failures for relfilenodes \n\nAgree about your concern for tracking the corruption for every single block.\nI like this idea for relfilenodes tracking instead. 
Indeed it looks like this is enough useful historical information to work with.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 07:58:25 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "\n\nOn 12/12/22 5:09 AM, Michael Paquier wrote:\n> On Sun, Dec 11, 2022 at 08:48:15PM -0500, Tom Lane wrote:\n>> I think a stats table indexed solely by relfilenode wouldn't be a great\n>> idea, because you'd lose all the stats about a table as soon as you\n>> do vacuum full/cluster/rewriting-alter-table/etc. Can we create two\n>> index structures over the same stats table entries, so you can look\n>> up by either relfilenode or OID? I'm not quite sure how to manage\n>> rewrites, where you transiently have two relfilenodes for \"the\n>> same\" table ... maybe we could allow multiple pointers to the same\n>> stats entry??\n> \n> FWIW, I am not sure that I would care much if we were to dropped the\n> stats associated to a relfilenode on a rewrite. In terms of checksum\n> failures, tuples are deformed so if there is one checksum failure a\n> rewrite would just not happen. The potential complexity is not really\n> appealing compared to the implementation simplicity and its gains, and\n> rewrites are lock-heavy so I'd like to think that people avoid them\n> (cough)..\n\nAgree that this is less \"problematic\" for the checksum use case.\nOn the other hand, losing IO stats (as the ones we could add later on, suggested by Andres up-thread) looks more of a concern to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 08:15:37 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 12:40 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Sun, Dec 11, 2022 at 09:18:42PM +0100, Magnus Hagander wrote:\n> > It would be less of a concern yes, but I think it still would be a\n> concern.\n> > If you have a large amount of corruption you could quickly get to\n> millions\n> > of rows to keep track of which would definitely be a problem in shared\n> > memory as well, wouldn't it?\n>\n> Yes. I have discussed this item with Bertrand off-list and I share\n> the same concern. This would lead to an lot of extra workload on a\n> large seqscan for a corrupted relation when the stats are written\n> (shutdown delay) while bloating shared memory with potentially\n> millions of items even if variable lists are handled through a dshash\n> and DSM.\n>\n> > But perhaps we could keep a list of \"the last 100 checksum failures\" or\n> > something like that?\n>\n> Applying a threshold is one solution. Now, a second thing I have seen\n> in the past is that some disk partitions were busted but not others,\n> and the current database-level counters are not enough to make a\n> difference when it comes to grab patterns in this area. 
A list of the\n> last N failures may be able to show some pattern, but that would be\n> like analyzing things with a lot of noise without a clear conclusion.\n\nAnyway, the workload caused by the threshold number had better be\n> measured before being decided (large set of relation files with a full\n> range of blocks corrupted, much better if these are in the OS cache\n> when scanned), which does not change the need of a benchmark.\n>\n> What about just adding a counter tracking the number of checksum\n> failures for relfilenodes in a new structure related to them (note\n> that I did not write PgStat_StatTabEntry)?\n>\n> If we do that, it is then possible to cross-check the failures with\n> tablespaces, which would point to disk areas that are more sensitive\n> to corruption.\n>\n\nIf that's the concern, then perhaps the level we should be tracking things\non is tablespace? We don't have any stats per tablespace today I believe,\nbut that doesn't mean we couldn't create that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Dec 12, 2022 at 12:40 AM Michael Paquier <michael@paquier.xyz> wrote:On Sun, Dec 11, 2022 at 09:18:42PM +0100, Magnus Hagander wrote:\n> It would be less of a concern yes, but I think it still would be a concern.\n> If you have a large amount of corruption you could quickly get to millions\n> of rows to keep track of which would definitely be a problem in shared\n> memory as well, wouldn't it?\n\nYes. I have discussed this item with Bertrand off-list and I share\nthe same concern. 
This would lead to an lot of extra workload on a\nlarge seqscan for a corrupted relation when the stats are written\n(shutdown delay) while bloating shared memory with potentially\nmillions of items even if variable lists are handled through a dshash\nand DSM.\n\n> But perhaps we could keep a list of \"the last 100 checksum failures\" or\n> something like that?\n\nApplying a threshold is one solution. Now, a second thing I have seen\nin the past is that some disk partitions were busted but not others,\nand the current database-level counters are not enough to make a\ndifference when it comes to grab patterns in this area. A list of the\nlast N failures may be able to show some pattern, but that would be\nlike analyzing things with a lot of noise without a clear conclusion. \nAnyway, the workload caused by the threshold number had better be\nmeasured before being decided (large set of relation files with a full\nrange of blocks corrupted, much better if these are in the OS cache\nwhen scanned), which does not change the need of a benchmark.\n\nWhat about just adding a counter tracking the number of checksum\nfailures for relfilenodes in a new structure related to them (note\nthat I did not write PgStat_StatTabEntry)?\n\nIf we do that, it is then possible to cross-check the failures with\ntablespaces, which would point to disk areas that are more sensitive\nto corruption.If that's the concern, then perhaps the level we should be tracking things on is tablespace? We don't have any stats per tablespace today I believe, but that doesn't mean we couldn't create that.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 12 Dec 2022 10:33:14 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "\n\nOn 12/12/22 8:15 AM, Drouvot, Bertrand wrote:\n> \n> \n> On 12/12/22 5:09 AM, Michael Paquier wrote:\n>> On Sun, Dec 11, 2022 at 08:48:15PM -0500, Tom Lane wrote:\n>>> I think a stats table indexed solely by relfilenode wouldn't be a great\n>>> idea, because you'd lose all the stats about a table as soon as you\n>>> do vacuum full/cluster/rewriting-alter-table/etc. Can we create two\n>>> index structures over the same stats table entries, so you can look\n>>> up by either relfilenode or OID? I'm not quite sure how to manage\n>>> rewrites, where you transiently have two relfilenodes for \"the\n>>> same\" table ... maybe we could allow multiple pointers to the same\n>>> stats entry??\n>>\n>> FWIW, I am not sure that I would care much if we were to dropped the\n>> stats associated to a relfilenode on a rewrite. In terms of checksum\n>> failures, tuples are deformed so if there is one checksum failure a\n>> rewrite would just not happen. The potential complexity is not really\n>> appealing compared to the implementation simplicity and its gains, and\n>> rewrites are lock-heavy so I'd like to think that people avoid them\n>> (cough)..\n> \n> Agree that this is less \"problematic\" for the checksum use case.\n> On the other hand, losing IO stats (as the ones we could add later on, suggested by Andres up-thread) looks more of a concern to me.\n> \n\nOne option could be to have a dedicated PgStat_StatRelFileNodeEntry and populate the related PgStat_StatTabEntry when calling the new to be created pgstat_relfilenode_flush_cb()? (That's what we are doing currently to\nflush some of the table stats to the database stats for example).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 11:14:34 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-11 20:48:15 -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Sun, Dec 11, 2022 at 04:51:49PM -0800, Andres Freund wrote:\n> >> I think there's a good argument for starting to track some stats based on the\n> >> relfilenode, rather the oid, because it'd allow us to track e.g. the number of\n> >> writes for a relation too (we don't have the oid when writing out\n> >> buffers). But that's a relatively large change...\n> \n> > Yeah. I was thinking among the lines of sync requests and sync\n> > failures, as well.\n> \n> I think a stats table indexed solely by relfilenode wouldn't be a great\n> idea, because you'd lose all the stats about a table as soon as you\n> do vacuum full/cluster/rewriting-alter-table/etc.\n\nI don't think that'd be a huge issue - we already have code to keep some\nstats as part of rewrites that change the oid of a relation. We could do\nthe same for rewrites that just change the relfilenode.\n\nHowever, I'm not sure it's a good idea to keep the stats during\nrewrites. Most rewrites end up not copying dead tuples, for example, so\nkeeping the old counts of updated tuples doesn't really make sense.\n\n\n> Can we create two index structures over the same stats table entries,\n> so you can look up by either relfilenode or OID?\n\nWe could likely do that, yes. I think we'd have one \"stats body\" and\nmultiple hash table entries pointing to one. The complicated bit would\nlikely be that we'd need some additional refcounting to know when\nthere's no references to the \"stats body\" left.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:11:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Checksum errors in pg_stat_database"
}
] |
[
{
"msg_contents": "Hi,\n\nOnly gaur shows useful logs:\n\n SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot',\n'test_decoding');\n! ERROR: could not access file \"test_decoding\": No such file or directory\n\nDoes this mean it didn't build the test_decoding module?\n\nOf the failing animals, damselfly builds with the highest frequency,\nand it reports the following 4 commits between the first failure[1]\nand the preceding success (and has been failing ever since):\n\n962da60591 Tue Jan 1 01:39:34 2019 UTC Fix generation of padding\nmessage before encrypting Elgamal in pgcrypto\nbedda9fbb7 Mon Dec 31 21:57:57 2018 UTC Process EXTRA_INSTALL\nserially, during the first temp-install.\ne7ebc8c285 Mon Dec 31 21:55:04 2018 UTC Send EXTRA_INSTALL errors to\ninstall.log, not stderr.\n7c97b0f55e Mon Dec 31 21:51:18 2018 UTC pg_regress: Promptly detect\nfailed postmaster startup.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2019-01-01%2010%3A39%3A41\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Sat, 12 Jan 2019 00:06:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> Only gaur shows useful logs:\n\n> SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot',\n> 'test_decoding');\n> ! ERROR: could not access file \"test_decoding\": No such file or directory\n\n> Does this mean it didn't build the test_decoding module?\n\nI'm wondering if it built it but didn't install it, as a result of\nsome problem with\n\n> bedda9fbb7 Mon Dec 31 21:57:57 2018 UTC Process EXTRA_INSTALL\n> serially, during the first temp-install.\n\nWill take a look later, but since gaur is so slow, it may be awhile\nbefore I have any answers.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 09:48:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@enterprisedb.com> writes:\n>> Does this mean it didn't build the test_decoding module?\n\n> I'm wondering if it built it but didn't install it, as a result of\n> some problem with\n>> bedda9fbb7 Mon Dec 31 21:57:57 2018 UTC Process EXTRA_INSTALL\n>> serially, during the first temp-install.\n\nSo it appears that in v10,\n\n\t./configure ... --enable-tap-tests ...\n\tmake\n\tmake install\n\tcd contrib/test_decoding\n\tmake check\n\nfails due to failure to install test_decoding into the tmp_install\ntree, while it works in v11. Moreover, that's not specific to\ngaur: it happens on my Linux box too. I'm not very sure why only\nthree buildfarm animals are unhappy --- maybe in the buildfarm\ncontext it requires a specific combination of options to show the\nproblem.\n\nThere's no obvious difference between bedda9fbb and 6dd690be3,\nso I surmise that that patch depended somehow on some previous\nwork that only went into v11 not v10. Haven't found what, yet.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 16:31:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "I wrote:\n> There's no obvious difference between bedda9fbb and 6dd690be3,\n> so I surmise that that patch depended somehow on some previous\n> work that only went into v11 not v10. Haven't found what, yet.\n\nAh, looks like it was 42e61c774. I'll push a fix shortly.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 17:20:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "I wrote:\n> So it appears that in v10,\n> \t./configure ... --enable-tap-tests ...\n> \tmake\n> \tmake install\n> \tcd contrib/test_decoding\n> \tmake check\n> fails due to failure to install test_decoding into the tmp_install\n> tree, while it works in v11. Moreover, that's not specific to\n> gaur: it happens on my Linux box too. I'm not very sure why only\n> three buildfarm animals are unhappy --- maybe in the buildfarm\n> context it requires a specific combination of options to show the\n> problem.\n\nWhile I think I've fixed this bug, I'm still quite confused about why\nonly some buildfarm animals showed the problem. Comparing log files,\nit seems that the ones that were working were relying on having\ndone a complete temp-install at a higher level, while the ones that\nwere failing were trying to make a temp install from scratch in\ncontrib/test_decoding and hence seeing the bug. For example,\nlongfin's test-decoding-check log starts out\n\nnapshot: 2019-01-11 21:12:17\n\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../src/test/regress all\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../../src/port all\n/Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../backend submake-errcodes\nmake[3]: Nothing to be done for `submake-errcodes'.\n\nwhile gaur's starts out\n\nSnapshot: 2019-01-11 07:30:45\n\nrm -rf '/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install\n/bin/sh ../../config/install-sh -c -d '/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log\nmake -C '../..' 
DESTDIR='/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install install >'/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log/install.log 2>&1\nmake -j1 checkprep >>'/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log/install.log 2>&1\nmake -C ../../src/test/regress all\nmake[1]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/test/regress'\nmake -C ../../../src/port all\nmake[2]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/port'\nmake -C ../backend submake-errcodes\nmake[3]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/backend'\nmake[3]: Nothing to be done for `submake-errcodes'.\n\nThese two animals are running the same buildfarm client version,\nand I don't see any relevant difference in their configurations,\nso why are they behaving differently? Andrew, any ideas?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 11 Jan 2019 18:33:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "\nOn 1/11/19 6:33 PM, Tom Lane wrote:\n> I wrote:\n>> So it appears that in v10,\n>> \t./configure ... --enable-tap-tests ...\n>> \tmake\n>> \tmake install\n>> \tcd contrib/test_decoding\n>> \tmake check\n>> fails due to failure to install test_decoding into the tmp_install\n>> tree, while it works in v11. Moreover, that's not specific to\n>> gaur: it happens on my Linux box too. I'm not very sure why only\n>> three buildfarm animals are unhappy --- maybe in the buildfarm\n>> context it requires a specific combination of options to show the\n>> problem.\n> While I think I've fixed this bug, I'm still quite confused about why\n> only some buildfarm animals showed the problem. Comparing log files,\n> it seems that the ones that were working were relying on having\n> done a complete temp-install at a higher level, while the ones that\n> were failing were trying to make a temp install from scratch in\n> contrib/test_decoding and hence seeing the bug. For example,\n> longfin's test-decoding-check log starts out\n>\n> napshot: 2019-01-11 21:12:17\n>\n> /Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../src/test/regress all\n> /Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../../../src/port all\n> /Applications/Xcode.app/Contents/Developer/usr/bin/make -C ../backend submake-errcodes\n> make[3]: Nothing to be done for `submake-errcodes'.\n>\n> while gaur's starts out\n>\n> Snapshot: 2019-01-11 07:30:45\n>\n> rm -rf '/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install\n> /bin/sh ../../config/install-sh -c -d '/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log\n> make -C '../..' 
DESTDIR='/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install install >'/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log/install.log 2>&1\n> make -j1 checkprep >>'/home/bfarm/bf-data/REL_10_STABLE/pgsql.build'/tmp_install/log/install.log 2>&1\n> make -C ../../src/test/regress all\n> make[1]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/test/regress'\n> make -C ../../../src/port all\n> make[2]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/port'\n> make -C ../backend submake-errcodes\n> make[3]: Entering directory `/home/bfarm/bf-data/REL_10_STABLE/pgsql.build/src/backend'\n> make[3]: Nothing to be done for `submake-errcodes'.\n>\n> These two animals are running the same buildfarm client version,\n> and I don't see any relevant difference in their configurations,\n> so why are they behaving differently? Andrew, any ideas?\n>\n> \t\t\t\n\n\n\nPossibly an error in \nhttps://github.com/PGBuildFarm/client-code/commit/3026438dcefebcc6fe2d44eb7b60812e257a0614\n\n\nIt looks like longfin detects that it has all it needs to proceed, and\nso calls make with \"NO_INSTALL=yes\", but gaur doesn't. Not sure why\nthat would be - if anything I'd expect the test to fail on OSX rather\nthan HP-UX. Is there something weird about naming of library files on HP-UX?\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Sat, 12 Jan 2019 13:34:36 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 1/11/19 6:33 PM, Tom Lane wrote:\n>> While I think I've fixed this bug, I'm still quite confused about why\n>> only some buildfarm animals showed the problem.\n\n> ... Is there something weird about naming of library files on HP-UX?\n\nDoh! I looked right at this code last night, but it failed to click:\n\n # these files should be present if we've temp_installed everything,\n # and not if we haven't. The represent core, contrib and test_modules.\n return ( (-d $tmp_loc)\n && (-f \"$bindir/postgres\" || -f \"$bindir/postgres.exe\")\n && (-f \"$libdir/hstore.so\" || -f \"$libdir/hstore.dll\")\n && (-f \"$libdir/test_parser.so\" || -f \"$libdir/test_parser.dll\"));\n\nOn HPUX (at least the version gaur is running), the extension for\nshared libraries is \".sl\" not \".so\".\n\nThat doesn't explain the failures on damselfly and koreaceratops,\nbut they're both running very old buildfarm clients, which most\nlikely just don't have the optimization to share a temp-install.\n\nI wonder if it's practical to scrape DLSUFFIX out of src/Makefile.port\ninstead of listing all the possibilities here. But I'm not sure how\nyou'd deal with this bit in Makefile.hpux:\n\nifeq ($(host_cpu), ia64)\n DLSUFFIX = .so\nelse\n DLSUFFIX = .sl\nendif\n\nAnyway, the bigger picture here is that the shared-temp-install\noptimization is masking bugs in local \"make check\" rules. Not\nsure how much we care about that, though. Any such bug is only\nof interest to developers, and it only matters if someone actually\nstumbles over it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 12 Jan 2019 14:03:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "\nOn 1/12/19 2:03 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 1/11/19 6:33 PM, Tom Lane wrote:\n>>> While I think I've fixed this bug, I'm still quite confused about why\n>>> only some buildfarm animals showed the problem.\n>> ... Is there something weird about naming of library files on HP-UX?\n> Doh! I looked right at this code last night, but it failed to click:\n>\n> # these files should be present if we've temp_installed everything,\n> # and not if we haven't. The represent core, contrib and test_modules.\n> return ( (-d $tmp_loc)\n> && (-f \"$bindir/postgres\" || -f \"$bindir/postgres.exe\")\n> && (-f \"$libdir/hstore.so\" || -f \"$libdir/hstore.dll\")\n> && (-f \"$libdir/test_parser.so\" || -f \"$libdir/test_parser.dll\"));\n>\n> On HPUX (at least the version gaur is running), the extension for\n> shared libraries is \".sl\" not \".so\".\n>\n> That doesn't explain the failures on damselfly and koreaceratops,\n> but they're both running very old buildfarm clients, which most\n> likely just don't have the optimization to share a temp-install.\n\n\nYes, they are on an older version that doesn't use the NO_TEMP_INSTALL\nflag at all.\n\n\n\n> I wonder if it's practical to scrape DLSUFFIX out of src/Makefile.port\n> instead of listing all the possibilities here. But I'm not sure how\n> you'd deal with this bit in Makefile.hpux:\n>\n> ifeq ($(host_cpu), ia64)\n> DLSUFFIX = .so\n> else\n> DLSUFFIX = .sl\n> endif\n\n\nI'd rather get make to tell us directly, something like:\n\n\n .PHONY: show_dl_suffix\n show_dl_suffix:\n @echo $(DLSUFFIX)\n\n\nI can arrange something like that in the buildfarm code if we think the\nuse case is too narrow.\n\n\n> Anyway, the bigger picture here is that the shared-temp-install\n> optimization is masking bugs in local \"make check\" rules. Not\n> sure how much we care about that, though. 
Any such bug is only\n> of interest to developers, and it only matters if someone actually\n> stumbles over it.\n>\n> \t\t\t\n\n\nright.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 13 Jan 2019 08:28:53 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 1/12/19 2:03 PM, Tom Lane wrote:\n>> I wonder if it's practical to scrape DLSUFFIX out of src/Makefile.port\n>> instead of listing all the possibilities here.\n\n> I'd rather get make to tell us directly, something like:\n> .PHONY: show_dl_suffix\n> show_dl_suffix:\n> @echo $(DLSUFFIX)\n\nNo objection here, but of course you'd have to back-patch that into\nall active branches.\n\n(The Darwin case is slightly exciting, but it looks like you'd get\nthe right answer as long as Makefile.shlib doesn't get involved.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 13 Jan 2019 09:24:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
},
{
"msg_contents": "\nOn 1/13/19 9:24 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 1/12/19 2:03 PM, Tom Lane wrote:\n>>> I wonder if it's practical to scrape DLSUFFIX out of src/Makefile.port\n>>> instead of listing all the possibilities here.\n>> I'd rather get make to tell us directly, something like:\n>> .PHONY: show_dl_suffix\n>> show_dl_suffix:\n>> @echo $(DLSUFFIX)\n> No objection here, but of course you'd have to back-patch that into\n> all active branches.\n>\n> (The Darwin case is slightly exciting, but it looks like you'd get\n> the right answer as long as Makefile.shlib doesn't get involved.)\n>\n> \t\t\t\n\n\n\nOK, I'll make that happen.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 13 Jan 2019 10:04:26 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Three animals fail test-decoding-check on REL_10_STABLE"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm writing a tool to process a logical replication stream. The intent is\nto use publications and subscriptions as an initial filter, and then use\nthe replication stream to trigger external events. So my tool should\nconnect to the master in the same manner as a replication slave, but it\ndoes different things with the data.\n\nSo far I've used pg_recvlogical.c as a guide and I'm successfully\nconnecting to the master, creating a replication slot, and subscribing to a\ncouple of publications.\n\nBut now I'm stuck at further interpreting the data. Can anybody point me\nto further documentation or the right code to look at to figure out the\nformat of the WAL data stream?\n\nCheers,\nAmi.\n\nHi,I'm writing a tool to process a logical replication stream. The intent is to use publications and subscriptions as an initial filter, and then use the replication stream to trigger external events. So my tool should connect to the master in the same manner as a replication slave, but it does different things with the data.So far I've used pg_recvlogical.c as a guide and I'm successfully connecting to the master, creating a replication slot, and subscribing to a couple of publications. But now I'm stuck at further interpreting the data. Can anybody point me to further documentation or the right code to look at to figure out the format of the WAL data stream?Cheers,Ami.",
"msg_date": "Fri, 11 Jan 2019 18:01:18 -0500",
"msg_from": "Ami Ganguli <ami.ganguli@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to decode the output from pgoutput"
}
] |
[
{
    "msg_contents": "Hi all,\n(Added Kevin in CC)\n\nThere have been over the ages discussions about getting better\nO_DIRECT support to close the gap with other players in the database\nmarket, but I have not actually seen on those lists a patch which\nmakes use of O_DIRECT for relations and SLRUs (perhaps I missed it,\nanyway that would most likely conflict now).\n\nAttached is a toy patch that I have begun using for tests in this\narea. That's nothing really serious at this stage, but you can use\nthat if you would like to see the impact of O_DIRECT. Of course,\nthings get significantly slower. The patch is able to compile, pass\nregression tests, and looks stable. So that's usable for experiments.\nThe patch uses a GUC called direct_io, enabled to true to ease\nregression testing when applying it.\n\nNote that pg_attribute_aligned() cannot be used as that's not an\noption with clang and a couple of other compilers as far as I know, so\nthe patch uses a simple set of placeholder buffers large enough to be\naligned with the OS pages, which should be 4k for Linux by the way,\nand not set to BLCKSZ, but for WAL's O_DIRECT we don't really care\nmuch with such details.\n\nIf there is interest for such things, perhaps we could get a patch\nsorted out, with some angles of attack like:\n- Move to use of page-aligned buffers for relations and SLRUs.\n- Split use of O_DIRECT for SLRU and relations into separate GUCs.\n- Perhaps other things.\nHowever this is a large and very controversial topic, and of course\nmore complex than the experiment attached, still this prototype is fun\nto play with.\n\nThanks for reading!\n--\nMichael",
"msg_date": "Sat, 12 Jan 2019 13:46:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "Hi!\n\n> 12 янв. 2019 г., в 9:46, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> Attached is a toy patch that I have begun using for tests in this\n> area. That's nothing really serious at this stage, but you can use\n> that if you would like to see the impact of O_DIRECT. Of course,\n> things get significantly slower.\n\nCool!\nI've just gathered a group of students to task them with experimenting with shared buffer eviction algorithms during their February internship at Yandex-Sirius edu project. Your patch seems very handy for benchmarks in this area.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 12 Jan 2019 21:13:20 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Sun, Jan 13, 2019 at 5:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi!\n>\n> > 12 янв. 2019 г., в 9:46, Michael Paquier <michael@paquier.xyz> написал(а):\n> >\n> > Attached is a toy patch that I have begun using for tests in this\n> > area. That's nothing really serious at this stage, but you can use\n> > that if you would like to see the impact of O_DIRECT. Of course,\n> > things get significantly slower.\n>\n> Cool!\n> I've just gathered a group of students to task them with experimenting with shared buffer eviction algorithms during their February internship at Yandex-Sirius edu project. Your patch seems very handy for benchmarks in this area.\n\n+1, thanks for sharing the patch. Even though just turning on\nO_DIRECT is the trivial part of this project, it's good to encourage\ndiscussion. We may indeed become more sensitive to the quality of\nbuffer eviction algorithms, but it seems like the main work to regain\nlost performance will be the background IO scheduling piece:\n\n1. We need a new \"bgreader\" process to do read-ahead. I think you'd\nwant a way to tell it with explicit hints (for example, perhaps\nsequential scans would advertise that they're reading sequentially so\nthat it starts to slurp future blocks into the buffer pool, and\nstreaming replicas might look ahead in the WAL and tell it what's\ncoming). In theory this might be better than the heuristics OSes use\nto guess our access pattern and pre-fetch into the page cache, since\nwe have better information (and of course we're skipping a buffer\nlayer).\n\n2. We need a new kind of bgwriter/syncer that aggressively creates\nclean pages so that foreground processes rarely have to evict (since\nthat is now super slow), but also efficiently finds ranges of dirty\nblocks that it can write in big sequential chunks.\n\n3. We probably want SLRUs to use the main buffer pool, instead of\ntheir own mini-pools, so they can benefit from the above.\n\nWhether we need multiple bgreader and bgwriter processes or perhaps a\ngeneral IO scheduler process may depend on whether we also want to\nswitch to async (multiplexing from a single process). Starting simple\nwith a traditional sync IO and N processes seems OK to me.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Sun, 13 Jan 2019 10:35:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Sun, Jan 13, 2019 at 10:35:55AM +1300, Thomas Munro wrote:\n> 1. We need a new \"bgreader\" process to do read-ahead. I think you'd\n> want a way to tell it with explicit hints (for example, perhaps\n> sequential scans would advertise that they're reading sequentially so\n> that it starts to slurp future blocks into the buffer pool, and\n> streaming replicas might look ahead in the WAL and tell it what's\n> coming). In theory this might be better than the heuristics OSes use\n> to guess our access pattern and pre-fetch into the page cache, since\n> we have better information (and of course we're skipping a buffer\n> layer).\n\nYes, that could be interesting mainly for analytics by being able to\nsnipe better than the OS readahead.\n\n> 2. We need a new kind of bgwriter/syncer that aggressively creates\n> clean pages so that foreground processes rarely have to evict (since\n> that is now super slow), but also efficiently finds ranges of dirty\n> blocks that it can write in big sequential chunks.\n\nOkay, that's a new idea. A bgwriter able to do syncs in chunks would\nbe also interesting with O_DIRECT, no?\n\n> 3. We probably want SLRUs to use the main buffer pool, instead of\n> their own mini-pools, so they can benefit from the above.\n\nWasn't there a thread about that on -hackers actually? I cannot see\nany reference to it.\n\n> Whether we need multiple bgreader and bgwriter processes or perhaps a\n> general IO scheduler process may depend on whether we also want to\n> switch to async (multiplexing from a single process). Starting simple\n> with a traditional sync IO and N processes seems OK to me.\n\nSo you mean that we could just have a simple switch as a first step?\nOr I misunderstood you :)\n\nOne of the reasons why I have begun this thread is that since we have\nheard about the fsync issues on Linux, I think that there is room\nfor giving our user base more control of their fate without relying on\nthe Linux community decisions to potentially eat data and corrupt a\ncluster with a page dirty bit cleared without its data actually\nflushed. Even the latest kernels are not fixing all the patterns with\nopen fds across processes, switching the problem from one corner of\nthe table to another, and there are folks patching the Linux kernel to\nmake Postgres more reliable from this perspective, and living happily\nwith this option. As long as the option can be controlled and\ndefaults to false, it seems to be that we could do something. Even if\nthe performance is bad, this gives the user control of how he/she\nwants things to be done.\n--\nMichael",
"msg_date": "Sun, 13 Jan 2019 18:02:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "\n\n> 13 янв. 2019 г., в 14:02, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n>> 3. We probably want SLRUs to use the main buffer pool, instead of\n>> their own mini-pools, so they can benefit from the above.\n> \n> Wasn't there a thread about that on -hackers actually? I cannot see\n> any reference to it.\nI think it's here https://www.postgresql.org/message-id/flat/CAEepm%3D0o-%3Dd8QPO%3DYGFiBSqq2p6KOvPVKG3bggZi5Pv4nQw8nw%40mail.gmail.com#bacee3e6612c53c31658b18650e7ffd9\n\n> As long as the option can be controlled and\n> defaults to false, it seems to be that we could do something. Even if\n> the performance is bad, this gives the user control of how he/she\n> wants things to be done.\n\nI like the idea of having this switch, I believe it will make development in this direction easier.\nBut I think there will be complain from users like \"this feature is done wrong\" due to really bad performance.\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Sun, 13 Jan 2019 16:39:16 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Sun, Jan 13, 2019 at 10:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Jan 13, 2019 at 10:35:55AM +1300, Thomas Munro wrote:\n> > 1. We need a new \"bgreader\" process to do read-ahead. I think you'd\n> > want a way to tell it with explicit hints (for example, perhaps\n> > sequential scans would advertise that they're reading sequentially so\n> > that it starts to slurp future blocks into the buffer pool, and\n> > streaming replicas might look ahead in the WAL and tell it what's\n> > coming). In theory this might be better than the heuristics OSes use\n> > to guess our access pattern and pre-fetch into the page cache, since\n> > we have better information (and of course we're skipping a buffer\n> > layer).\n>\n> Yes, that could be interesting mainly for analytics by being able to\n> snipe better than the OS readahead.\n>\n> > 2. We need a new kind of bgwriter/syncer that aggressively creates\n> > clean pages so that foreground processes rarely have to evict (since\n> > that is now super slow), but also efficiently finds ranges of dirty\n> > blocks that it can write in big sequential chunks.\n>\n> Okay, that's a new idea. A bgwriter able to do syncs in chunks would\n> be also interesting with O_DIRECT, no?\n\nWell I'm just describing the stuff that the OS is doing for us in\nanother layer. Evicting dirty buffers currently consists of a\nbuffered pwrite(), which we can do a huge number of per second (given\nenough spare RAM), but with O_DIRECT | O_SYNC we'll be limited by\nstorage device random IOPS, so workloads that evict dirty buffers in\nforeground processes regularly will suffer. bgwriter should make sure\nwe always find clean buffers without waiting when we need them.\n\nYeah, I think pwrite() larger than 8KB at a time would be a goal, to\nget large IO request sizes all the way down to the storage.\n\n> > 3. We probably want SLRUs to use the main buffer pool, instead of\n> > their own mini-pools, so they can benefit from the above.\n>\n> Wasn't there a thread about that on -hackers actually? I cannot see\n> any reference to it.\n\nhttps://www.postgresql.org/message-id/flat/20180814213500.GA74618%4060f81dc409fc.ant.amazon.com\n\n> > Whether we need multiple bgreader and bgwriter processes or perhaps a\n> > general IO scheduler process may depend on whether we also want to\n> > switch to async (multiplexing from a single process). Starting simple\n> > with a traditional sync IO and N processes seems OK to me.\n>\n> So you mean that we could just have a simple switch as a first step?\n> Or I misunderstood you :)\n\nI just meant that if we take over all the read-ahead and write-behind\nwork and use classic synchronous IO syscalls like pread()/pwrite(),\nwe'll probably need multiple processes to do it, depending on how much\nIO concurrency the storage layer can take.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Mon, 14 Jan 2019 00:53:15 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "From: Michael Paquier [mailto:michael@paquier.xyz]\n> One of the reasons why I have begun this thread is that since we have heard\n> about the fsync issues on Linux, I think that there is room for giving our\n> user base more control of their fate without relying on the Linux community\n> decisions to potentially eat data and corrupt a cluster with a page dirty\n> bit cleared without its data actually flushed. Even the latest kernels\n> are not fixing all the patterns with open fds across processes, switching\n> the problem from one corner of the table to another, and there are folks\n> patching the Linux kernel to make Postgres more reliable from this\n> perspective, and living happily with this option. As long as the option\n> can be controlled and defaults to false, it seems to be that we could do\n> something. Even if the performance is bad, this gives the user control\n> of how he/she wants things to be done.\n\nThank you for starting an interesting topic. We probably want the direct I/O. On a INSERT and UPDATE heavy system with PostgreSQL 9.2, we suffered from occasional high response times due to the Linux page cache activity. Postgres processes competed for the page cache to read/write the data files, write online and archive WAL files, and write the server log files (auto_explain and autovacuum workers emitted a lot of logs.) The user with Oracle experience asked why PostgreSQL doesn't handle database I/O by itself...\n\nAnd I wonder how useful the direct I/O for low latency devices like the persistent memory. The overhead of the page cache may become relatively higher.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Tue, 15 Jan 2019 00:50:23 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "\n> 12 янв. 2019 г., в 9:46, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> Note that pg_attribute_aligned() cannot be used as that's not an\n> option with clang and a couple of other comilers as far as I know, so\n> the patch uses a simple set of placeholder buffers large enough to be\n> aligned with the OS pages, which should be 4k for Linux by the way,\n> and not set to BLCKSZ, but for WAL's O_DIRECT we don't really care\n> much with such details.\n\nIs it possible to avoid those memcopy's by aligning available buffers instead?\nI couldn't understand this from the patch and this thread.\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Tue, 15 Jan 2019 11:19:48 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Tue, Jan 15, 2019 at 11:19:48AM +0500, Andrey Borodin wrote:\n> Is it possible to avoid those memcpy's by aligning available buffers\n> instead? I couldn't understand this from the patch and this thread.\n\nSure, it had better do that. That's just a lazy implementation.\n--\nMichael",
"msg_date": "Tue, 15 Jan 2019 17:28:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On 1/15/19 11:28 AM, Michael Paquier wrote:\n\n> On Tue, Jan 15, 2019 at 11:19:48AM +0500, Andrey Borodin wrote:\n>> Is it possible to avoid those memcpy's by aligning available buffers\n>> instead? I couldn't understand this from the patch and this thread.\n> Sure, it had better do that. That's just a lazy implementation.\n\n\nHi!\n\nCould you specify all cases when buffers will not be aligned with BLCKSZ?\n\nAFAIC shared and temp buffers are aligned. And what ones are not?\n\n\n-- \nRegards, Maksim Milyutin\n\n\n",
"msg_date": "Tue, 15 Jan 2019 19:40:12 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Tue, Jan 15, 2019 at 07:40:12PM +0300, Maksim Milyutin wrote:\n> Could you specify all cases when buffers will not be aligned with BLCKSZ?\n> \n> AFAIC shared and temp buffers are aligned. And what ones are not?\n\nSLRU buffers are not aligned with the OS pages (aka alignment with\n4096 at least). There are also a bunch of code paths where the callers\nof mdread() or mdwrite() don't do that, which makes a correct patch\nmore invasive.\n--\nMichael",
"msg_date": "Wed, 16 Jan 2019 10:54:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
},
{
"msg_contents": "On Sat, Jan 12, 2019 at 4:36 PM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> 1. We need a new \"bgreader\" process to do read-ahead. I think you'd\n> want a way to tell it with explicit hints (for example, perhaps\n> sequential scans would advertise that they're reading sequentially so\n> that it starts to slurp future blocks into the buffer pool, and\n> streaming replicas might look ahead in the WAL and tell it what's\n> coming). In theory this might be better than the heuristics OSes use\n> to guess our access pattern and pre-fetch into the page cache, since\n> we have better information (and of course we're skipping a buffer\n> layer).\n\nRight, like if we're reading the end of relation file 16384, we can\nprefetch the beginning of 16384.1, but the OS won't know to do that.\n\n> 2. We need a new kind of bgwriter/syncer that aggressively creates\n> clean pages so that foreground processes rarely have to evict (since\n> that is now super slow), but also efficiently finds ranges of dirty\n> blocks that it can write in big sequential chunks.\n\nYeah.\n\n> 3. We probably want SLRUs to use the main buffer pool, instead of\n> their own mini-pools, so they can benefit from the above.\n\nRight. I think this is important, and it makes me think that maybe\nMichael's patch won't help us much in the end. I believe that the\nnumber of pages that are needed for clog data, at least, can very\nsignificantly depending on workload and machine size, so there's not\none number there that is going to work for everybody, and the\nalgorithms the SLRU code uses for page management have O(n) stuff in\nthem, so they don't scale well to large numbers of SLRU buffers\nanyway. I think we should try to unify the SLRU stuff with\nshared_buffers, and then have a test patch like Michael's (not for\ncommit) which we can use to see the impact of that, and then try to\nreduce that impact with the stuff you mention under #1 and #2.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 16 Jan 2019 11:16:51 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT for relations and SLRUs (Prototype)"
}
] |
[
{
"msg_contents": "Hello!\n\nI have written a small patch to modify the default names for foreign key\nconstraints. Currently if the foreign key is composed of multiple columns we\nonly use the first one in the constraint name. This can lead to similar\nconstraint names when two foreign keys start with the same column. This sort\nof situation may commonly occur in a multi-tenant environment.\n\n> CREATE TABLE users (tenant_id int, id int, PRIMARY KEY (tenant_id, id));\n> CREATE TABLE posts (tenant_id int, id int, PRIMARY KEY (tenant_id, id));\n> CREATE TABLE comments (\ntenant_id int,\nid int,\npost_id int,\ncommenter_id int,\nFOREIGN KEY (tenant_id, post_id) REFERENCES posts,\nFOREIGN KEY (tenant_id, commenter_id) REFERENCES users\n )\n> \\d comments\n Table \"public.comments\"\n\n Foreign-key constraints:\n \"comments_tenant_id_fkey\" FOREIGN KEY (tenant_id, commenter_id)\nREFERENCES users(tenant_id, id)\n \"comments_tenant_id_fkey1\" FOREIGN KEY (tenant_id, post_id)\nREFERENCES posts(tenant_id, id)\n\nThe two constraints have nearly identical names. With my patch the default names\nwill include both column names, so we have we will instead have this output:\n\n Foreign-key constraints:\n \"comments_tenant_id_commenter_id_fkey\" FOREIGN KEY (tenant_id,\ncommenter_id) REFERENCES users(tenant_id, id)\n \"comments_tenant_id_post_id_fkey\" FOREIGN KEY (tenant_id, post_id)\nREFERENCES posts(tenant_id, id)\n\nThis makes the default names for foreign keys in line with the default names\nfor indexes. Hopefully an uncontroversial change!\n\nThe logic for creating index names is in the function ChooseIndexNameAddition\nin src/backend/commands/indexcmds.c. There is also similar logic for creating\nnames for statistics in ChooseExtendedStatisticNameAddition in\nsrc/backend/commands/statscmds.c.\n\nI pretty much just copied and pasted the implementation from\nChooseIndexNameAddition and placed it in src/backend/commands/tablecmds.c.\nThe new function is called ChooseForeignKeyConstraintNameAddition. I updated\nthe comments in indexcmds.c and statscmds.c to also reference this new function.\nEach of the three versions takes in the columns in slightly different forms, so\nI don't think creating a single implementation of this small bit of logic is\ndesirable, and I have no idea where such a util function would go.\n\nRegression tests are in src/test/regress/sql/foreign_key.sql. I create two\ncomposite foreign keys on table, one via the CREATE TABLE statement, and the\nother in a ALTER TABLE statement. The generated names of the constraints are\nthen queried from the pg_constraint table.\n\n\nThis is my first submission to Postgres, so I'm not entirely sure what the\nprotocol is here to get this merged; should I add this patch to the 2019-03\nCommitfest?\n\nHappy to hear any feedback!\n\n- Paul Martinez",
"msg_date": "Sat, 12 Jan 2019 16:55:12 -0800",
"msg_from": "Paul Martinez <hellopfm@gmail.com>",
"msg_from_op": true,
"msg_subject": "PATCH: Include all columns in default names for foreign key\n constraints."
},
{
"msg_contents": "On 13/01/2019 01:55, Paul Martinez wrote:\n> This is my first submission to Postgres, so I'm not entirely sure what the\n> protocol is here to get this merged; should I add this patch to the 2019-03\n> Commitfest?\n\nI haven't looked at the patch yet, but I think it's a good idea and\nanyway yes, please add it to the next commitfest.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n",
"msg_date": "Sun, 13 Jan 2019 03:01:23 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: Include all columns in default names for foreign key\n constraints."
},
{
"msg_contents": "On 13/01/2019 01:55, Paul Martinez wrote:\n> The two constraints have nearly identical names. With my patch the default names\n> will include both column names, so we have we will instead have this output:\n> \n> Foreign-key constraints:\n> \"comments_tenant_id_commenter_id_fkey\" FOREIGN KEY (tenant_id,\n> commenter_id) REFERENCES users(tenant_id, id)\n> \"comments_tenant_id_post_id_fkey\" FOREIGN KEY (tenant_id, post_id)\n> REFERENCES posts(tenant_id, id)\n> \n> This makes the default names for foreign keys in line with the default names\n> for indexes. Hopefully an uncontroversial change!\n\nI think this is a good change.\n\n> I pretty much just copied and pasted the implementation from\n> ChooseIndexNameAddition and placed it in src/backend/commands/tablecmds.c.\n\nThe use of \"name2\" in the comment doesn't make sense outside the context\nof indexcmds.c. Maybe rewrite that a bit.\n\n> Regression tests are in src/test/regress/sql/foreign_key.sql. I create two\n> composite foreign keys on table, one via the CREATE TABLE statement, and the\n> other in a ALTER TABLE statement. The generated names of the constraints are\n> then queried from the pg_constraint table.\n\nExisting regression tests already exercise this, and they are failing\nall over the place because of the changes of the generated names. That\nis to be expected. You should investigate those failures and adjust the\n\"expected\" files. Then you probably don't need your additional tests.\n\nIt might be worth having a test that runs into the 63-character length\nlimit somehow.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Feb 2019 11:11:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: Include all columns in default names for foreign key\n constraints."
},
{
"msg_contents": "Thanks for the comments!\n\nOn Fri, Feb 8, 2019 at 2:11 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 13/01/2019 01:55, Paul Martinez wrote:\n> > I pretty much just copied and pasted the implementation from\n> > ChooseIndexNameAddition and placed it in src/backend/commands/tablecmds.c.\n>\n> The use of \"name2\" in the comment doesn't make sense outside the context\n> of indexcmds.c. Maybe rewrite that a bit.\n\nUpdated.\n\n> > Regression tests are in src/test/regress/sql/foreign_key.sql. I create two\n> > composite foreign keys on table, one via the CREATE TABLE statement, and the\n> > other in a ALTER TABLE statement. The generated names of the constraints are\n> > then queried from the pg_constraint table.\n>\n> Existing regression tests already exercise this, and they are failing\n> all over the place because of the changes of the generated names. That\n> is to be expected. You should investigate those failures and adjust the\n> \"expected\" files. Then you probably don't need your additional tests.\n>\n> It might be worth having a test that runs into the 63-character length\n> limit somehow.\n\nYikes, sorry about that. Some tests are failing on my machine because of dynamic\nlinking issues and I totally missed all the foreign key failures. I think I've\nfixed all the tests. I changed the test I added to test the 63-character limit.\n\nAttached is an updated patch.\n\n- Paul",
"msg_date": "Sat, 9 Mar 2019 13:27:29 -0800",
"msg_from": "Paul Martinez <hellopfm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PATCH: Include all columns in default names for foreign key\n constraints."
},
{
"msg_contents": "On 2019-03-09 22:27, Paul Martinez wrote:\n> Yikes, sorry about that. Some tests are failing on my machine because of dynamic\n> linking issues and I totally missed all the foreign key failures. I think I've\n> fixed all the tests. I changed the test I added to test the 63-character limit.\n> \n> Attached is an updated patch.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 13 Mar 2019 14:29:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: Include all columns in default names for foreign key\n constraints."
}
] |
[
{
"msg_contents": "A reminder that the CFP for PGCon 2019 closes on 19 January.\nThat's a Saturday, but in reality, we don't start closing access off\nuntil Monday, so you have the whole weekend to get your submission in.\n\nPGCon 2019 will be on 30-31 May 2019 at University of Ottawa.\n\n* 28-29 May (Tue-Wed) tutorials\n* 29 May (Wed) The Unconference\n* 30-31 May (Thu-Fri) talks - the main part of the conference\n\nSee http://www.pgcon.org/2019/ <http://www.pgcon.org/2019/>\n\nWe are now accepting proposals for the main part of the conference (30-31 May).\nProposals can be quite simple. We do not require academic-style papers.\n\nIf you are doing something interesting with PostgreSQL, please submit\na proposal. You might be one of the backend hackers or work on a\nPostgreSQL related project and want to share your know-how with\nothers. You might be developing an interesting system using PostgreSQL\nas the foundation. Perhaps you migrated from another database to\nPostgreSQL and would like to share details. These, and other stories\nare welcome. Both users and developers are encouraged to share their\nexperiences.\n\nHere are a some ideas to jump start your proposal process:\n\n- novel ways in which PostgreSQL is used\n- migration of production systems from another database\n- data warehousing\n- tuning PostgreSQL for different work loads\n- replication and clustering\n- hacking the PostgreSQL code\n- PostgreSQL derivatives and forks\n- applications built around PostgreSQL\n- benchmarking and performance engineering\n- case studies\n- location-aware and mapping software with PostGIS\n- The latest PostgreSQL features and features in development\n- research and teaching with PostgreSQL\n- things the PostgreSQL project could do better\n- integrating PostgreSQL with 3rd-party software\n\nBoth users and developers are encouraged to share their experiences.\n\nThe schedule is:\n\n1 Dec 2018 Proposal acceptance begins\n19 Jan 2019 Proposal acceptance ends\n19 Feb 2019 Confirmation of accepted proposals\n\nNOTE: the call for lightning talks will go out very close to the conference.\nDo not submit lightning talks proposals until then.\n\nSee also http://www.pgcon.org/2019/papers.php <http://www.pgcon.org/2019/papers.php>\n\n\nInstructions for submitting a proposal to PGCon 2019 are available\nfrom: http://www.pgcon.org/2019/submissions.php <http://www.pgcon.org/2019/submissions.php>\n\n-- \nDan Langille - BSDCan / PGCon\ndan@langille.org <mailto:dan@langille.org>\n\n\nA reminder that the CFP for PGCon 2019 closes on 19 January.That's a Saturday, but in reality, we don't start closing access offuntil Monday, so you have the whole weekend to get your submission in.PGCon 2019 will be on 30-31 May 2019 at University of Ottawa.* 28-29 May (Tue-Wed) tutorials* 29 May (Wed) The Unconference* 30-31 May (Thu-Fri) talks - the main part of the conferenceSee http://www.pgcon.org/2019/We are now accepting proposals for the main part of the conference (30-31 May).Proposals can be quite simple. We do not require academic-style papers.If you are doing something interesting with PostgreSQL, please submita proposal. You might be one of the backend hackers or work on aPostgreSQL related project and want to share your know-how withothers. You might be developing an interesting system using PostgreSQLas the foundation. Perhaps you migrated from another database toPostgreSQL and would like to share details. These, and other storiesare welcome. Both users and developers are encouraged to share theirexperiences.Here are a some ideas to jump start your proposal process:- novel ways in which PostgreSQL is used- migration of production systems from another database- data warehousing- tuning PostgreSQL for different work loads- replication and clustering- hacking the PostgreSQL code- PostgreSQL derivatives and forks- applications built around PostgreSQL- benchmarking and performance engineering- case studies- location-aware and mapping software with PostGIS- The latest PostgreSQL features and features in development- research and teaching with PostgreSQL- things the PostgreSQL project could do better- integrating PostgreSQL with 3rd-party softwareBoth users and developers are encouraged to share their experiences.The schedule is:1 Dec 2018 Proposal acceptance begins19 Jan 2019 Proposal acceptance ends19 Feb 2019 Confirmation of accepted proposalsNOTE: the call for lightning talks will go out very close to the conference.Do not submit lightning talks proposals until then.See also http://www.pgcon.org/2019/papers.phpInstructions for submitting a proposal to PGCon 2019 are availablefrom: http://www.pgcon.org/2019/submissions.php\n-- Dan Langille - BSDCan / PGCondan@langille.org",
"msg_date": "Sun, 13 Jan 2019 17:31:47 -0500",
"msg_from": "Dan Langille <dan@langille.org>",
"msg_from_op": true,
"msg_subject": "PGCon 2019 CFP closes on 19 January"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on pluggable storage (in particular, while cleaning it up\nover the last few days), I grew concerned with widely heapam.h is\nincluded in other headers. E.g. the executor (via execnodes.h,\nexecutor.h relying on heapam.h) shouldn't depend on heapam.h details -\nparticularly after pluggable storage, but also before. To me that's\nunnecessary leakage across abstraction boundaries.\n\nIn the attached patch I excised all heapam.h includes from other\nheaders. There were basically two things required to do so:\n\n* In a few places that use HeapScanDesc (which confusingly is a typedef\n in heapam.h, but the underlying struct is in relscan.h) etc, we can\n easily get by just using struct HeapScanDescData * instead.\n\n* I moved the LockTupleMode enum to to lockoptions.h - previously\n executor.h tried to avoid relying on heapam.h, but it ended up\n including heapam.h indirectly, which lead to a couple people\n introducing new uses of the enum. We could just convert those to\n ints like in other places, but I think moving to a smaller header\n seems more appropriate. I don't think lockoptions.h is perfect, but\n it's also not bad?\n\nThis required adding heapam.h includes to a bunch of places, but that\ndoesn't seem too bad. It'll possibly break a few external users, but I\nthink that's acceptable cost - many of those will already/will further\nbe broken in 12 anyway.\n\nI think we should do the same with genam.h, but that seems better done\nseparately.\n\n\nI've a pending local set of patches that splits relation_open/close,\nheap_open/close et al into a separate set of includes, that then allows\nto downgrade the heapam.h include to that new file (given that a large\npercentage of the files really just want heap_open/close and nothing\nelse from heapam.h), which I'll rebase ontop of this if we can agree\nthat this change is a good idea.\n\n\nAlvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\nbeing in different files (in a3540b0f657c6352) - what do you think about\nthis change? I didn't revert that split, but I think we ought to\nconsider just relying on a forward declared struct in heapam.h instead,\nthis split is pretty confusing and seems to lead to more interdependence\nin a lot of cases.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 13 Jan 2019 16:07:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-Jan-13, Andres Freund wrote:\n\n> Alvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\n> being in different files (in a3540b0f657c6352) - what do you think about\n> this change? I didn't revert that split, but I think we ought to\n> consider just relying on a forward declared struct in heapam.h instead,\n> this split is pretty confusing and seems to lead to more interdependence\n> in a lot of cases.\n\nI wasn't terribly happy with that split, so I'm not opposed to doing\nthings differently. For your consideration, I've had this patch lying\naround for a few years, which (IIRC) reduces the exposure of heapam.h by\nsplitting relscan.h in two. This applies on top of dd778e9d8884 (and as\nI recall it worked well there).\n\nI'll try to have a look at your patch tomorrow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 13 Jan 2019 23:54:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "Hi,\n\nOn 2019-01-13 23:54:58 -0300, Alvaro Herrera wrote:\n> On 2019-Jan-13, Andres Freund wrote:\n> \n> > Alvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\n> > being in different files (in a3540b0f657c6352) - what do you think about\n> > this change? I didn't revert that split, but I think we ought to\n> > consider just relying on a forward declared struct in heapam.h instead,\n> > this split is pretty confusing and seems to lead to more interdependence\n> > in a lot of cases.\n> \n> I wasn't terribly happy with that split, so I'm not opposed to doing\n> things differently. For your consideration, I've had this patch lying\n> around for a few years, which (IIRC) reduces the exposure of heapam.h by\n> splitting relscan.h in two. This applies on top of dd778e9d8884 (and as\n> I recall it worked well there).\n\nYou forgot to attach that patch... :).\n\nI'm not sure I see a need to split relscan - note my patch makes it so\nthat it's not included by heapam.h anymore, and doing for the same for\ngenam.h would be fairly straightforward. The most interesting bit there\nwould be whether we'd add the includes necessary for Snapshot (imo no),\nRelation (?), ScanKey (imo no), or whether to add the necessary includes\ndirectly.\n\n\n> I'll try to have a look at your patch tomorrow.\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sun, 13 Jan 2019 19:05:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-01-13 19:05:03 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2019-01-13 23:54:58 -0300, Alvaro Herrera wrote:\n> > On 2019-Jan-13, Andres Freund wrote:\n> > \n> > > Alvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\n> > > being in different files (in a3540b0f657c6352) - what do you think about\n> > > this change? I didn't revert that split, but I think we ought to\n> > > consider just relying on a forward declared struct in heapam.h instead,\n> > > this split is pretty confusing and seems to lead to more interdependence\n> > > in a lot of cases.\n> > \n> > I wasn't terribly happy with that split, so I'm not opposed to doing\n> > things differently. For your consideration, I've had this patch lying\n> > around for a few years, which (IIRC) reduces the exposure of heapam.h by\n> > splitting relscan.h in two. This applies on top of dd778e9d8884 (and as\n> > I recall it worked well there).\n> \n> You forgot to attach that patch... :).\n> \n> I'm not sure I see a need to split relscan\n\nOne split I am wondering about however is splitting out the sysstable_\nstuff out of genam.h. It's imo a different component and shouldn't\nreally be in there. Would be quite a bit of rote work to add all the\nnecessary includes over the backend...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sun, 13 Jan 2019 20:14:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-01-13 19:05:03 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2019-01-13 23:54:58 -0300, Alvaro Herrera wrote:\n> > On 2019-Jan-13, Andres Freund wrote:\n> > \n> > > Alvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\n> > > being in different files (in a3540b0f657c6352) - what do you think about\n> > > this change? I didn't revert that split, but I think we ought to\n> > > consider just relying on a forward declared struct in heapam.h instead,\n> > > this split is pretty confusing and seems to lead to more interdependence\n> > > in a lot of cases.\n> > \n> > I wasn't terribly happy with that split, so I'm not opposed to doing\n> > things differently. For your consideration, I've had this patch lying\n> > around for a few years, which (IIRC) reduces the exposure of heapam.h by\n> > splitting relscan.h in two. This applies on top of dd778e9d8884 (and as\n> > I recall it worked well there).\n> \n> You forgot to attach that patch... :).\n> \n> I'm not sure I see a need to split relscan - note my patch makes it so\n> that it's not included by heapam.h anymore, and doing for the same for\n> genam.h would be fairly straightforward. The most interesting bit there\n> would be whether we'd add the includes necessary for Snapshot (imo no),\n> Relation (?), ScanKey (imo no), or whether to add the necessary includes\n> directly.\n\nHere's a patch doing the same for genam as well.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 13 Jan 2019 22:39:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-Jan-13, Andres Freund wrote:\n\n> On 2019-01-13 23:54:58 -0300, Alvaro Herrera wrote:\n> > On 2019-Jan-13, Andres Freund wrote:\n> > \n> > > Alvaro, you'd introduced the split of HeapScanDesc and HeapScanDescData\n> > > being in different files (in a3540b0f657c6352) - what do you think about\n> > > this change? I didn't revert that split, but I think we ought to\n> > > consider just relying on a forward declared struct in heapam.h instead,\n> > > this split is pretty confusing and seems to lead to more interdependence\n> > > in a lot of cases.\n> > \n> > I wasn't terribly happy with that split, so I'm not opposed to doing\n> > things differently. For your consideration, I've had this patch lying\n> > around for a few years, which (IIRC) reduces the exposure of heapam.h by\n> > splitting relscan.h in two. This applies on top of dd778e9d8884 (and as\n> > I recall it worked well there).\n> \n> You forgot to attach that patch... :).\n\nOops :-( Here it is anyway. Notmuch reminded me that I had posted this\nbefore, to a pretty cold reception:\nhttps://postgr.es/m/20130917020228.GB7139@eldon.alvh.no-ip.org\nNeedless to say, I disagree with the general sentiment in that thread\nthat header refactoring is pointless and unwelcome.\n\n> I'm not sure I see a need to split relscan - note my patch makes it so\n> that it's not included by heapam.h anymore, and doing for the same for\n> genam.h would be fairly straightforward. The most interesting bit there\n> would be whether we'd add the includes necessary for Snapshot (imo no),\n> Relation (?), ScanKey (imo no), or whether to add the necessary includes\n> directly.\n\nAh, you managed to get heapam.h and genam.h out of execnodes.h, which I\nthink was my main motivation ... that seems good enough to me. 
I agree\nthat splitting relscan.h may not be necessary after these changes.\n\nAs for struct Relation, note that for that one you only need relcache.h\nwhich should be lean enough, so it doesn't sound too bad to include that\none wherever.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 14 Jan 2019 12:21:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-Jan-13, Andres Freund wrote:\n\n> One split I am wondering about however is splitting out the sysstable_\n> stuff out of genam.h. It's imo a different component and shouldn't\n> really be in there. Would be quite a bit of rote work to add all the\n> necessary includes over the backend...\n\nYeah -- unless there's a demonstrable win from this split, I would leave\nthis part alone, unless you regularly carry a shield to PG conferences.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 14 Jan 2019 12:23:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "0001 -- looks reasonable. One hunk in executor.h changes LockTupleMode\nto \"enum LockTupleMode\", but there's no need for that.\n\nAFAIK the only reason to have the struct FooBarData vs. FooBar (ptr)\nsplit is so that it's possible to refer to structs without having\nthe full struct definition. I think changing uses of FooBar to \"struct\nFooBarData *\" defeats the whole purpose -- it becomes pointless noise,\nconfusing the reader for no gain. I've long considered that the struct\ndefinitions should appear in \"internal\" headers (such as\nhtup_details.h), separate from the pointer typedefs, so that it is the\nforward struct declarations (and the pointer typedefs, where there are\nany) that are part of the exposed API for each module, and not the\nstruct definitions. \n\nI think that would be much more invasive, though, and it's unlikely to\nsucceed as easily as this simpler approach is.\n\nI think MissingPtr is a terrible name. Can we change that while at this?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 14 Jan 2019 15:36:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "Hi,\n\nOn 2019-01-14 15:36:14 -0300, Alvaro Herrera wrote:\n> 0001 -- looks reasonable. One hunk in executor.h changes LockTupleMode\n> to \"enum LockTupleMode\", but there's no need for that.\n\nOh, that escaped from an earlier version where I briefly forgot that one\ncannot portably forward-declare enums.\n\n\n> AFAIK the only reason to have the struct FooBarData vs. FooBar (ptr)\n> split is so that it's possible to refer to structs without having\n> the full struct definition. I think changing uses of FooBar to \"struct\n> FooBarData *\" defeats the whole purpose -- it becomes pointless noise,\n> confusing the reader for no gain. I've long considered that the struct\n> definitions should appear in \"internal\" headers (such as\n> htup_details.h), separate from the pointer typedefs, so that it is the\n> forward struct declarations (and the pointer typedefs, where there are\n> any) that are part of the exposed API for each module, and not the\n> struct definitions.\n\nI think the whole pointer hiding game that we play is a really really\nbad idea. We should just about *never* have typedefs that hide the fact\nthat something is a pointer. But it's hard to go away from that for\nlegacy reasons.\n\nThe problem with your approach is that it's *eminently* reasonable to\nwant to forward declare a struct in multiple places. Otherwise you end\nup in issues where you include headers like heapam.h solely for a\ntypedef, which obviously doesn't make a ton of sense.\n\nIf we were in C11 we could just forward declare the pointer hiding\ntypedefs in multiple places, and be done with that. But before C11\nredundant typedefs aren't allowed. With the C99 move I'm however not\nsure if there's any surviving supported compiler that doesn't allow\nredundant typedefs as an extension.\n\nGiven the fact that including headers just for a typedef is frequent\noverkill, hiding the typedef in a separate header has basically no\ngain. 
I also don't quite understand why using a forward declaration with\nstruct in the name is that confusing, especially when it only happens in\nthe header.\n\n\n> I think MissingPtr is a terrible name. Can we change that while at\n> this?\n\nIndeed. I'd just remove the typedef.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 14 Jan 2019 10:47:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "On 2019-Jan-14, Andres Freund wrote:\n\n> I think the whole pointer hiding game that we play is a really really\n> bad idea. We should just about *never* have typedefs that hide the fact\n> that something is a pointer. But it's hard to go away from that for\n> legacy reasons.\n> \n> The problem with your approach is that it's *eminently* reasonable to\n> want to forward declare a struct in multiple places. Otherwise you end\n> up in issues where you include headers like heapam.h solely for a\n> typedef, which obviously doesn't make a ton of sense.\n\nWell, my point is that in an ideal world we would have a header where\nthe struct is declared once in a very lean header, which doesn't include\nalmost anything else, so you can include it into other headers\nliberally. Then the struct definitions are in another (heavy) header,\nwhich *does* need to include lots of other stuff in order to be able to\ndefine the structs fully, and would be #included very sparingly, only in\nthe few .c files that really needed it.\n\nFor example, I would split up execnodes.h so that *only* the\ntypedef/struct declarations are there, and *no* struct definition. Then\nthat header can be #included in other headers that need those to declare\nfunctions -- no problem. Another header (say execstructs.h, whatever)\nwould contain the struct definition and would only be used by executor\n.c files. So when a struct changes, only the executor is recompiled;\nthe rest of the source doesn't care, because execnodes.h (the struct\ndecls) hasn't changed.\n\nThis may be too disruptive. I'm not suggesting that you do things this\nway, only describing my ideal world.\n\nYour idea of \"liberally forward-declaring structs in multiple places\"\nseems like you don't *have* the lean header at all (only the heavy one\nwith all the struct definitions), and instead you distribute bits and\npieces of the lean header randomly to the places that need it. It's\nrepetitive. 
It gets the job done, but it's not *clean*.\n\n> Given the fact that including headers just for a typedef is frequent\n> overkill, hiding the typedef in a separate header has basically no\n> gain. I also don't quite understand why using a forward declaration with\n> struct in the name is that confusing, especially when it only happens in\n> the header.\n\nOh, that's not the confusing part -- that's just repetitive, nothing\nmore. What's confusing (IMO) is having two names for the same struct\n(one pointer and one full struct). It'd be useful if it was used the\nway I describe above. But that's the settled project style, so I don't\nhave any hopes that it'll ever be changed. \n\nAnyway, I'm not objecting to your patch ... I just don't want it on\nrecord that I'm in love with the current situation :-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 14 Jan 2019 17:55:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
},
{
"msg_contents": "Hi,\n\nOn 2019-01-14 17:55:44 -0300, Alvaro Herrera wrote:\n> On 2019-Jan-14, Andres Freund wrote:\n>\n> > I think the whole pointer hiding game that we play is a really really\n> > bad idea. We should just about *never* have typedefs that hide the fact\n> > that something is a pointer. But it's hard to go away from that for\n> > legacy reasons.\n> >\n> > The problem with your approach is that it's *eminently* reasonable to\n> > want to forward declare a struct in multiple places. Otherwise you end\n> > up in issues where you include headers like heapam.h solely for a\n> > typedef, which obviously doesn't make a ton of sense.\n>\n> Well, my point is that in an ideal world we would have a header where\n> the struct is declared once in a very lean header, which doesn't include\n> almost anything else, so you can include it into other headers\n> liberally. Then the struct definitions are any another (heavy) header,\n> which *does* need to include lots of other stuff in order to be able to\n> define the structs fully, and would be #included very sparingly, only in\n> the few .c files that really needed it.\n\n> For example, I would split up execnodes.h so that *only* the\n> typedef/struct declarations are there, and *no* struct definition. Then\n> that header can be #included in other headers that need those to declare\n> functions -- no problem. Another header (say execstructs.h, whatever)\n> would contain the struct definition and would only be used by executor\n> .c files. So when a struct changes, only the executor is recompiled;\n> the rest of the source doesn't care, because execnodes.h (the struct\n> decls) hasn't changed.\n\nIt's surely better than the current state, but it still requires\nrecompiling everything in a more cases than necessary.\n\n\n> This may be too disruptive. 
I'm not suggesting that you do things this\n> way, only describing my ideal world.\n\nIt'd probably doable by leaving execnodes.h as the heavyweight nodes,\nand execnodetypes.h as the lightweight one, and including the latter\nfrom the former. And then moving users of execnodes over to\nexecnodetypes.\n\n\n> Your idea of \"liberally forward-declaring structs in multiple places\"\n> seems like you don't *have* the lean header at all (only the heavy one\n> with all the struct definitions), and instead you distribute bits and\n> pieces of the lean header randomly to the places that need it. It's\n> repetitive. It gets the job done, but it's not *clean*.\n\nI'm not really buying the repetitiveness bit - it's really primarily\nadding 'struct ' prefix, and sometimes adding a 'Data *' postfix. That's\nnot a lot of duplication. When used in structs there's no need to even\nadd an explicit 'struct <name>;' forward declaration, that's only needed\nfor function parameters. And afterwards there's a lot less entanglement\n- no need to recompile every file just because a new node type has been\nadded etc.\n\n\n> > Given the fact that including headers just for a typedef is frequent\n> > overkill, hiding the typedef in a separate header has basically no\n> > gain. I also don't quite understand why using a forward declaration with\n> > struct in the name is that confusing, especially when it only happens in\n> > the header.\n>\n> Oh, that's not the confusing part -- that's just repetitive, nothing\n> more. What's confusing (IMO) is having two names for the same struct\n> (one pointer and one full struct). It'd be useful if it was used the\n> way I describe above. But that's the settled project style, so I don't\n> have any hopes that it'll ever be changed.\n\nNot within a few days, but we probably can do it over time...\n\n\n> Anyway, I'm not objecting to your patch ... 
I just don't want it on\n> record that I'm in love with the current situation :-)\n\nCool, I've pushed these now. I'll rebase my patch to split\n(heap|relation)_(open|close)(rv)? patch out of heapam.[ch] now. Brr.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 14 Jan 2019 17:22:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reducing header interdependencies around heapam.h et al."
}
] |
[
{
"msg_contents": "Hi All,\n\nI sent an email with the same problem in pgsql-general mailing but no one\nhas responded, so I try to reach out by asking this question in the hacker\nlist.\n\nIn PG-11, procedures were introduced. In the pg_partman PostgreSQL\nextension, a procedure named run_maintenance_proc was developed to replace\nrun_maintenance function. I was trying to call this procedure in pg_partman\nwith SPI_execute() interface and this is the command being executed:\nCALL \"partman\".run_maintenance_proc(p_analyze := true, p_jobmon := true)\n\nDetailed code please see: https://github.com/pgpartman/pg_partman/pull/242\n\n I received the following error:\n\n2019-01-02 20:13:04.951 PST [26446] ERROR: invalid transaction termination\n2019-01-02 20:13:04.951 PST [26446] CONTEXT: PL/pgSQL function\npartman.run_maintenance_proc(integer,boolean,boolean,boolean) line 45\nat COMMIT\n\nApparently, the transaction control command 'COMMIT' is not allowed in a\nprocedure CALL function. But I can CALL this procedure in psql directly.\n\nAccording to the documentation of CALL, \"If CALL is executed in a\ntransaction block, then the called procedure cannot execute transaction\ncontrol statements. Transaction control statements are only allowed if CALL is\nexecuted in its own transaction.\"\n\nTherefore, it looks like that SPI_execute() is calling the procedure within\na transaction block. So Is there any SPI interface that can be used in an\nextension library to call a procedure with transaction control commands? (I\ntried to use SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a nonatomic\nconnection but it doesn't help.)\n\nThanks,\n\nJiayi Liu\n\nHi All,I sent an email with the same problem in pgsql-general mailing but no one has responded, so I try to reach out by asking this question in the hacker list.In PG-11, procedures were introduced. In the pg_partman PostgreSQL extension, a procedure named run_maintenance_proc was developed to replace run_maintenance function. 
I was trying to call this procedure in pg_partman with SPI_execute() interface and this is the command being executed:CALL \"partman\".run_maintenance_proc(p_analyze := true, p_jobmon := true)Detailed code please see: https://github.com/pgpartman/pg_partman/pull/242 I received the following error:2019-01-02 20:13:04.951 PST [26446] ERROR: invalid transaction termination\n2019-01-02 20:13:04.951 PST [26446] CONTEXT: PL/pgSQL function partman.run_maintenance_proc(integer,boolean,boolean,boolean) line 45 at COMMITApparently, the transaction control command 'COMMIT' is not allowed in a procedure CALL function. But I can CALL this procedure in psql directly.According to the documentation of CALL, \"If CALL is executed in a transaction block, then the called procedure cannot execute transaction control statements. Transaction control statements are only allowed if CALL is executed in its own transaction.\" Therefore, it looks like that SPI_execute() is calling the procedure within a transaction block. So Is there any SPI interface that can be used in an extension library to call a procedure with transaction control commands? (I tried to use SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a nonatomic connection but it doesn't help.)Thanks,Jiayi Liu",
"msg_date": "Sun, 13 Jan 2019 21:43:40 -0800",
"msg_from": "Jack LIU <toliujiayi@gmail.com>",
"msg_from_op": true,
"msg_subject": "SPI Interface to Call Procedure with Transaction Control Statements?"
},
{
"msg_contents": ">>>>> \"Jack\" == Jack LIU <toliujiayi@gmail.com> writes:\n\n Jack> (I tried to use SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a\n Jack> nonatomic connection but it doesn't help.)\n\nYou need to be specific here about how it didn't help, because this is\nexactly what you're supposed to do, and it should at least change what\nerror you got.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Mon, 14 Jan 2019 06:21:14 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SPI Interface to Call Procedure with Transaction Control\n Statements?"
},
{
"msg_contents": "Hi Andrew,\n\nThis is my code to call the procedure with\nSPI_connect_ext(SPI_OPT_NONATOMIC).\n\nif (run_proc) {\n appendStringInfo(&buf, \"CALL \\\"%s\\\".run_maintenance_proc(p_analyze\n:= %s, p_jobmon := %s);\", partman_schema, analyze, jobmon);\n expected_ret = SPI_OK_UTILITY;\n function_run = \"run_maintenance_proc() procedure\";\n SPI_finish();\n SPI_connect_ext(SPI_OPT_NONATOMIC);\n pgstat_report_activity(STATE_RUNNING, buf.data);\n\n ret = SPI_execute(buf.data, false, 0);\n if (ret != expected_ret)\n elog(FATAL, \"Cannot call pg_partman %s: error code %d\",\nfunction_run, ret);\n }\n\nIt gave the same error:\n\n2019-01-14 22:18:56.898 PST [16048] LOG: pg_partman dynamic background\nworker (dbname=postgres) dynamic background worker initialized with role\nubuntu on database postgres\n2019-01-14 22:18:56.918 PST [16048] ERROR: invalid transaction termination\n2019-01-14 22:18:56.918 PST [16048] CONTEXT: PL/pgSQL function\npartman.run_maintenance_proc(integer,boolean,boolean,boolean) line 45 at\nCOMMIT\nSQL statement \"CALL \"partman\".run_maintenance_proc(p_analyze := true,\np_jobmon := true);\"\n2019-01-14 22:18:56.923 PST [26352] LOG: background worker \"pg_partman\ndynamic background worker (dbname=postgres)\" (PID 16048) exited with exit\ncode 1\n\nThanks,\n\nJack\n\nOn Sun, Jan 13, 2019 at 10:21 PM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n\n> >>>>> \"Jack\" == Jack LIU <toliujiayi@gmail.com> writes:\n>\n> Jack> (I tried to use SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a\n> Jack> nonatomic connection but it doesn't help.)\n>\n> You need to be specific here about how it didn't help, because this is\n> exactly what you're supposed to do, and it should at least change what\n> error you got.\n>\n> --\n> Andrew (irc:RhodiumToad)\n>\n\nHi Andrew,This is my code to call the procedure with SPI_connect_ext(SPI_OPT_NONATOMIC).if (run_proc) { appendStringInfo(&buf, \"CALL \\\"%s\\\".run_maintenance_proc(p_analyze := %s, p_jobmon := 
%s);\", partman_schema, analyze, jobmon); expected_ret = SPI_OK_UTILITY; function_run = \"run_maintenance_proc() procedure\"; SPI_finish(); SPI_connect_ext(SPI_OPT_NONATOMIC); pgstat_report_activity(STATE_RUNNING, buf.data); ret = SPI_execute(buf.data, false, 0); if (ret != expected_ret) elog(FATAL, \"Cannot call pg_partman %s: error code %d\", function_run, ret); }It gave the same error:2019-01-14 22:18:56.898 PST [16048] LOG: pg_partman dynamic background worker (dbname=postgres) dynamic background worker initialized with role ubuntu on database postgres2019-01-14 22:18:56.918 PST [16048] ERROR: invalid transaction termination2019-01-14 22:18:56.918 PST [16048] CONTEXT: PL/pgSQL function partman.run_maintenance_proc(integer,boolean,boolean,boolean) line 45 at COMMIT SQL statement \"CALL \"partman\".run_maintenance_proc(p_analyze := true, p_jobmon := true);\"2019-01-14 22:18:56.923 PST [26352] LOG: background worker \"pg_partman dynamic background worker (dbname=postgres)\" (PID 16048) exited with exit code 1Thanks,JackOn Sun, Jan 13, 2019 at 10:21 PM Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:>>>>> \"Jack\" == Jack LIU <toliujiayi@gmail.com> writes:\n\n Jack> (I tried to use SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a\n Jack> nonatomic connection but it doesn't help.)\n\nYou need to be specific here about how it didn't help, because this is\nexactly what you're supposed to do, and it should at least change what\nerror you got.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Mon, 14 Jan 2019 22:20:14 -0800",
"msg_from": "Jack LIU <toliujiayi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SPI Interface to Call Procedure with Transaction Control\n Statements?"
},
{
"msg_contents": ">>>>> \"Jack\" == Jack LIU <toliujiayi@gmail.com> writes:\n\n Jack> Hi Andrew,\n Jack> This is my code to call the procedure with\n Jack> SPI_connect_ext(SPI_OPT_NONATOMIC).\n\nAh. You need to take a look at exec_stmt_call in plpgsql, and do the\nsame things it does with snapshot management (specifically, setting the\nno_snapshot flag on the plan that you're going to execute). SPI forces\natomic mode if the normal snapshot management is in use, because\notherwise a commit inside the procedure would warn about still having a\nsnapshot open.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Tue, 15 Jan 2019 10:49:37 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SPI Interface to Call Procedure with Transaction Control\n Statements?"
},
{
"msg_contents": "On 15/01/2019 11:49, Andrew Gierth wrote:\n>>>>>> \"Jack\" == Jack LIU <toliujiayi@gmail.com> writes:\n> \n> Jack> Hi Andrew,\n> Jack> This is my code to call the procedure with\n> Jack> SPI_connect_ext(SPI_OPT_NONATOMIC).\n> \n> Ah. You need to take a look at exec_stmt_call in plpgsql, and do the\n> same things it does with snapshot management (specifically, setting the\n> no_snapshot flag on the plan that you're going to execute). SPI forces\n> atomic mode if the normal snapshot management is in use, because\n> otherwise a commit inside the procedure would warn about still having a\n> snapshot open.\n\nYeah, eventually we might want to add a new SPI function to do\nnon-atomic calls, but right now you need to go the manual route.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 15 Jan 2019 12:06:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SPI Interface to Call Procedure with Transaction Control\n Statements?"
}
] |
[
{
"msg_contents": "While testing my nbtree heap TID keyspace patch, I came across a case\nwhere amcheck reliably reports corruption. It appeared that a 4 byte\nvarlena index entry that was expected in an index was not actually\npresent. However, index scan queries with the \"missing\" value in their\nqual didn't actually give wrong answers. This was reproducible on the\nmaster branch, too. It turned out that the problem has existed since\nthe heapallindexed enhancement that made it into Postgres 11 was committed.\n\nThe heapallindexed enhancement that made it into Postgres 11 assumes\nthat the representation of index tuples produced by index_form_tuple()\n(or all relevant index_form_tuple() callers) is deterministic: for\nevery possible heap tuple input there must be a single possible\n(bitwise) output. There is no real corruption present with the test\ncase, though it's not entirely clear that this is best thought of as a\nbug in amcheck -- I'd prefer to make sure that amcheck's expectations\nare actually met here, rather than have amcheck normalize its input to\neliminate the difference in bitwise representation.\n\nSteps to reproduce are rather delicate -- I stumbled upon the problem\nentirely by accident. I can share the full test case if that helps,\nbut will hold off for now, since it involves a pg_dump that's a few\nmegabytes in size. 
Here is an outline of what I'm doing:\n\npg_restore -d postgres /home/pg/code/suffix_truncation_test/bib_refs_small.dump\n\npg@postgres:5432 [9532]=# \\d mgd.bib_refs\n Table \"mgd.bib_refs\"\n Column │ Type │ Collation │\nNullable │ Default\n───────────────────┼─────────────────────────────┼───────────┼──────────┼─────────\n _refs_key │ integer │ │ not null │\n _reviewstatus_key │ integer │ │ not null │\n reftype │ character(4) │ │ not null │\n authors │ text │ │ │\n _primary │ character varying(60) │ │ │\n title │ text │ │ │\n journal │ character varying(100) │ │ │\n vol │ character varying(20) │ │ │\n issue │ character varying(25) │ │ │\n date │ character varying(30) │ │ │\n year │ integer │ │ │\n pgs │ character varying(30) │ │ │\n nlmstatus │ character(1) │ │ not null │\n abstract │ text │ │ │\n isreviewarticle │ smallint │ │ not null │\n _createdby_key │ integer │ │ not null │ 1001\n _modifiedby_key │ integer │ │ not null │ 1001\n creation_date │ timestamp without time zone │ │ not null │ now()\n modification_date │ timestamp without time zone │ │ not null │ now()\nIndexes:\n \"bib_refs_pkey\" PRIMARY KEY, btree (_refs_key)\n \"bib_refs_idx_authors\" btree (authors)\n \"bib_refs_idx_createdby_key\" btree (_createdby_key)\n \"bib_refs_idx_isprimary\" btree (_primary)\n \"bib_refs_idx_journal\" btree (journal)\n \"bib_refs_idx_modifiedby_key\" btree (_modifiedby_key)\n \"bib_refs_idx_reviewstatus_key\" btree (_reviewstatus_key)\n \"bib_refs_idx_title\" btree (title)\n \"bib_refs_idx_year\" btree (year)\n\npsql -d postgres -c \"create table bug (like mgd.bib_refs);\"\npsql -d postgres -c \"create index on bug (title);\"\npsql -d postgres -c \"insert into bug select * from mgd.bib_refs;\"\npsql -d postgres -c \"create extension if not exists amcheck;\"\npsql -d postgres -c \"analyze; set maintenance_work_mem='128MB';\"\npsql -d postgres -c \"select bt_index_parent_check('bug_title_idx', true);\"\nERROR: heap tuple (579,4) from table \"bug\" lacks matching index 
tuple\nwithin index \"bug_title_idx\"\n\nHere are details of the offending datum in the heap:\n\npg@postgres:5432 [9532]=# select title, length(title),\npg_column_size(title) from bug where ctid = '(579,4)';\n─[ RECORD 1 ]──┬────\ntitle │ Final report on the safety assessment of trilaurin,\ntriarachidin, tribehenin, tricaprin, tricaprylin, trierucin,\ntriheptanoin, triheptylundecanoin, triisononanoin, triisopalmitin,\ntriisostearin, trilinolein, trimyristin, trioctanoin, triolein,\ntripalmitin, tripalmitolein, triricinolein, tristearin, triundecanoin,\nglyceryl triacetyl hydroxystearate, glyceryl triacetyl ricinoleate,\nand gl.\nlength │ 390\npg_column_size │ 234\n\nDoes anyone have any idea why the 4 byte varlena (text) datum in the\nsingle attribute index \"bug_title_idx\" is uncompressed, while the\nvalue in the heap is compressed? No other value in any other index\nhappens to trip the problem, though this is complicated real-world\ndatabase with many similar indexes over tens of gigabytes of data (I\nhave quite a number of these \"INSERT ... SELECT\" tests for my nbtree\npatch). What you see here is a partially boiled-down test case.\n\nI've started some preliminary debugging work. A \"REINDEX index\nbug_title_idx\" makes amcheck happy, since the index tuple that points\nto heap tuple '(579,4)' ends up being compressed in exactly the same\nway as it is in the heap. The initial \"INSERT ... SELECT\" clearly\nmakes the executor produce compressed values for heap_insert(), though\nnot for btinsert() in this one instance. I've been able to confirm\nthis from gdb.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 14 Jan 2019 13:13:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The heapallindexed enhancement that made it into Postgres 11 assumes\n> that the representation of index tuples produced by index_form_tuple()\n> (or all relevant index_form_tuple() callers) is deterministic: for\n> every possible heap tuple input there must be a single possible\n> (bitwise) output.\n\nThat assumption seems unbelievably fragile. How badly do things\nbreak when it's violated?\n\nAlso, is the assumption just that a fixed source tuple will generate\nidentical index entries across repeated index_form_tuple attempts?\nOr is it assuming that logically equal index entries will be bitwise\nequal? The latter is broken on its face, because index_form_tuple()\ndoesn't try to hide differences in the toasting state of source\ndatums.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 14 Jan 2019 16:31:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > The heapallindexed enhancement that made it into Postgres 11 assumes\n> > that the representation of index tuples produced by index_form_tuple()\n> > (or all relevant index_form_tuple() callers) is deterministic: for\n> > every possible heap tuple input there must be a single possible\n> > (bitwise) output.\n>\n> That assumption seems unbelievably fragile. How badly do things\n> break when it's violated?\n\nWell, they break. You get a false positive report of corruption, since\nthere isn't a bitwise identical version of the datum from the heap in\nthe index for that same tuple. This seems to be very unlikely in\npractice, but amcheck is concerned with unlikely outcomes.\n\n> Also, is the assumption just that a fixed source tuple will generate\n> identical index entries across repeated index_form_tuple attempts?\n\nI would have said that the assumption is that a fixed source tuple\nwill generate identical index entries. The problem with that is that\nmy idea of what constitutes a fixed input now seems to have been\nfaulty. I didn't think that the executor could mutate TOAST state in a\nway that made this kind of inconsistency possible.\n\n> Or is it assuming that logically equal index entries will be bitwise\n> equal? The latter is broken on its face, because index_form_tuple()\n> doesn't try to hide differences in the toasting state of source\n> datums.\n\nLogical equality as I understand the term doesn't enter into it at all\n-- B-Tree operator class semantics are not involved here. I'm not sure\nif that's what you meant, but I want to be clear on that. amcheck\ncertainly knows that it cannot assume that scankey logical equality is\nthe same thing as bitwise equality.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 14 Jan 2019 13:46:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 1:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would have said that the assumption is that a fixed source tuple\n> will generate identical index entries. The problem with that is that\n> my idea of what constitutes a fixed input now seems to have been\n> faulty. I didn't think that the executor could mutate TOAST state in a\n> way that made this kind of inconsistency possible.\n\nThe source tuple (by which I mean the mgd.bib_refs heap tuple) is a\nHEAP_HASEXTERNAL tuple. If I update it to make a particularly long\ntext field NULL (UPDATE mgd.bib_refs SET abstract = NULL), and then\n\"INSERT INTO bug SELECT * FROM mgd.bib_refs\", amcheck stops\ncomplaining about the index on \"bug.title\" is missing. Even though the\n\"abstract\" field has nothing to do with the index.\n\nThe source of the inconsistency here must be within\nheap_prepare_insert() -- the external datum handling:\n\n /*\n * If the new tuple is too big for storage or contains already toasted\n * out-of-line attributes from some other relation, invoke the toaster.\n */\n if (relation->rd_rel->relkind != RELKIND_RELATION &&\n relation->rd_rel->relkind != RELKIND_MATVIEW)\n {\n /* toast table entries should never be recursively toasted */\n Assert(!HeapTupleHasExternal(tup));\n return tup;\n }\n else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD)\n return toast_insert_or_update(relation, tup, NULL, options);\n else\n return tup;\n\nEven leaving that aside, I really should have spotted that\nTOAST_TUPLE_THRESHOLD is a different thing to TOAST_INDEX_TARGET. The\ntwo things are always controlled independently. Mea culpa.\n\nThe fix here must be to normalize index tuples that are compressed\nwithin amcheck, both during initial fingerprinting, and during\nsubsequent probes of the Bloom filter in bt_tuple_present_callback().\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 14 Jan 2019 14:37:23 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 2:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The fix here must be to normalize index tuples that are compressed\n> within amcheck, both during initial fingerprinting, and during\n> subsequent probes of the Bloom filter in bt_tuple_present_callback().\n\nI happened to talk to Andres about this in person yesterday. He\nthought that there was reason to be concerned about the need for\nlogical normalization beyond TOAST issues. Expression indexes were a\nparticular concern, because they could in principle have a change in\nthe on-disk representation without a change of logical values -- false\npositives could result. He suggested that the long term solution was\nto bring hash operator class hash functions into Bloom filter hashing,\nat least where available.\n\nI wasn't very enthused about this idea, because it will be expensive\nand complicated for an uncertain benefit. There are hardly any btree\noperator classes that can ever have bitwise distinct datums that are\nequal, anyway (leaving aside issues with TOAST). For the cases that do\nexist (e.g. numeric_ops display scale), we may not really want to\nnormalize the differences away. Having an index tuple with a\nnumeric_ops datum containing the wrong display scale but with\neverything else correct still counts as corruption.\n\nIt now occurs to me that if we wanted to go further than simply\nnormalizing away TOAST differences, my pending nbtree patch could\nenable a simpler and more flexible way of doing that than bringing\nhash opclasses into it, at least on the master branch. We could simply\ndo an index look-up for the exact tuple of interest in the event of a\nBloom filter probe indicating its apparent absence (corruption) --\neven heap TID can participate in the search. In addition, that would\ncover the whole universe of logical differences, known and unknown\n(e.g. 
it wouldn't matter if somebody initialized alignment padding to\nsomething non-zero, since that doesn't cause wrong answers to\nqueries). We might even want to offer an option where the Bloom filter\nis bypassed (we go straight to probing the indexes) some proportion of\nthe time, or when a big misestimation when sizing the Bloom filter is\ndetected (i.e. almost all bits in the Bloom filter bitset are set at\nthe time we start probing the filter).\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 23 Jan 2019 10:59:55 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "On Wed, Jan 23, 2019 at 10:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The fix here must be to normalize index tuples that are compressed\n> > within amcheck, both during initial fingerprinting, and during\n> > subsequent probes of the Bloom filter in bt_tuple_present_callback().\n>\n> I happened to talk to Andres about this in person yesterday. He\n> thought that there was reason to be concerned about the need for\n> logical normalization beyond TOAST issues. Expression indexes were a\n> particular concern, because they could in principle have a change in\n> the on-disk representation without a change of logical values -- false\n> positives could result. He suggested that the long term solution was\n> to bring hash operator class hash functions into Bloom filter hashing,\n> at least where available.\n\nI think that the best way forward is to normalize to compensate for\ninconsistent input datum TOAST state, and leave it at that. ISTM that\nlogical normalization beyond that (based on hashing, or anything else)\ncreates more problems than it solves. I am concerned about cases like\nINCLUDE indexes (which may have datums that lack even a B-Tree\nopclass), and about the logical-though-semantically-relevant facets of\nsome datatypes such as numeric's display scale. If I can get an\nexample from Andres of a case where further logical normalization is\nnecessary to avoid false positives with expression indexes, that may\nchange things. (BTW, I implemented another amcheck enhancement that\nsearches indexes from the root to find matches -- the code is a\ntrivial addition to the new patch series I'm working on, and seems\nlike a better way to do enhanced logical normalization if that proves\nto be truly necessary.)\n\nAttached draft patch fixes the bug by doing fairly simple\nnormalization. 
I think that TOAST compression of datums in indexes is\nfairly rare in practice, so I'm not very worried about the fact that\nthis won't perform as well as it could with indexes that have a lot of\ncompressed datums. I think that the interface I've added might need to\nbe expanded for other things in the future (e.g., to make amcheck work\nwith nbtree-native duplicate compression), and not worrying about the\nperformance too much helps with that goal.\n\nI'll pick this up next week, and likely commit a fix by Wednesday or\nThursday if there are no objections. I'm not sure if the test case is\nworth including.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 1 Feb 2019 18:27:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
},
{
"msg_contents": "On Fri, Feb 1, 2019 at 6:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached draft patch fixes the bug by doing fairly simple\n> normalization. I think that TOAST compression of datums in indexes is\n> fairly rare in practice, so I'm not very worried about the fact that\n> this won't perform as well as it could with indexes that have a lot of\n> compressed datums. I think that the interface I've added might need to\n> be expanded for other things in the future (e.g., to make amcheck work\n> with nbtree-native duplicate compression), and not worrying about the\n> performance too much helps with that goal.\n>\n> I'll pick this up next week, and likely commit a fix by Wednesday or\n> Thursday if there are no objections. I'm not sure if the test case is\n> worth including.\n\nOn second thought, the test should look like this:\n\n$ psql --no-psqlrc --echo-queries -f bug_repro.sql\ndrop table if exists bug;\nDROP TABLE\ncreate table bug (buggy text);\nCREATE TABLE\nalter table bug alter column buggy set storage plain;\nALTER TABLE\ncreate index toasty on bug(buggy);\nCREATE INDEX\nalter table bug alter column buggy set storage extended;\nALTER TABLE\ninsert into bug select repeat ('a', 2100);\nINSERT 0 1\nselect bt_index_parent_check('toasty', true);\npsql:bug_repro.sql:7: ERROR: heap tuple (0,1) from table \"bug\" lacks\nmatching index tuple within index \"toasty\"\n\nThis relies on the fact that the pg_attribute entry for the index is\nmore or less a straight copy of the corresponding pg_attribute entry\nfor the table at the time of the CREATE INDEX. I arrange to make\nstorage of the index attribute plain and storage for the heap\nattribute extended. TOAST is applied inconsistently between\ntoast_insert_or_update() and index_form_tuple() without really relying\non implementation details that are subject to change.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Tue, 5 Feb 2019 19:49:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Non-deterministic IndexTuple toast compression from\n index_form_tuple() + amcheck false positives"
}
] |
[
{
"msg_contents": "Hi,\n\nI've started observing funny valgrind failures on Fedora 28, possibly\nafter upgrading from 3.14.0-1 to 3.14.0-7 a couple of days ago. This\ntime it does not seem like platform-specific issues, though - the\nfailures all look like this:\n\n==20974== Conditional jump or move depends on uninitialised value(s)\n==20974== at 0xA02088: calc_bucket (dynahash.c:870)\n==20974== by 0xA021BA: hash_search_with_hash_value (dynahash.c:963)\n==20974== by 0xA020EE: hash_search (dynahash.c:909)\n==20974== by 0x88DAB3: smgrclosenode (smgr.c:358)\n==20974== by 0x9D6C01: LocalExecuteInvalidationMessage (inval.c:607)\n==20974== by 0x86C44F: ReceiveSharedInvalidMessages (sinval.c:121)\n==20974== by 0x9D6D83: AcceptInvalidationMessages (inval.c:681)\n==20974== by 0x539B6B: AtStart_Cache (xact.c:980)\n==20974== by 0x53AA6C: StartTransaction (xact.c:1915)\n==20974== by 0x53B6F0: StartTransactionCommand (xact.c:2685)\n==20974== by 0x892EFB: start_xact_command (postgres.c:2475)\n==20974== by 0x89083E: exec_simple_query (postgres.c:923)\n==20974== by 0x894E7B: PostgresMain (postgres.c:4143)\n==20974== by 0x7F553D: BackendRun (postmaster.c:4412)\n==20974== by 0x7F4CA1: BackendStartup (postmaster.c:4084)\n==20974== by 0x7F12A0: ServerLoop (postmaster.c:1757)\n==20974== by 0x7F08CF: PostmasterMain (postmaster.c:1365)\n==20974== by 0x728E33: main (main.c:228)\n==20974== Uninitialised value was created by a stack allocation\n==20974== at 0x9D65D4: AddCatcacheInvalidationMessage (inval.c:339)\n==20974==\n\n==20974== Use of uninitialised value of size 8\n==20974== at 0xA021FD: hash_search_with_hash_value (dynahash.c:968)\n==20974== by 0xA020EE: hash_search (dynahash.c:909)\n==20974== by 0x88DAB3: smgrclosenode (smgr.c:358)\n==20974== by 0x9D6C01: LocalExecuteInvalidationMessage (inval.c:607)\n==20974== by 0x86C44F: ReceiveSharedInvalidMessages (sinval.c:121)\n==20974== by 0x9D6D83: AcceptInvalidationMessages (inval.c:681)\n==20974== by 0x539B6B: AtStart_Cache 
(xact.c:980)\n==20974== by 0x53AA6C: StartTransaction (xact.c:1915)\n==20974== by 0x53B6F0: StartTransactionCommand (xact.c:2685)\n==20974== by 0x892EFB: start_xact_command (postgres.c:2475)\n==20974== by 0x89083E: exec_simple_query (postgres.c:923)\n==20974== by 0x894E7B: PostgresMain (postgres.c:4143)\n==20974== by 0x7F553D: BackendRun (postmaster.c:4412)\n==20974== by 0x7F4CA1: BackendStartup (postmaster.c:4084)\n==20974== by 0x7F12A0: ServerLoop (postmaster.c:1757)\n==20974== by 0x7F08CF: PostmasterMain (postmaster.c:1365)\n==20974== by 0x728E33: main (main.c:228)\n==20974== Uninitialised value was created by a stack allocation\n==20974== at 0x9D65D4: AddCatcacheInvalidationMessage (inval.c:339)\n==20974==\n\nThere are more reports in the attached log, but what they all share is\ndynahash and invalidations. Which might be an arguments against a\npossible valgrind bug, because that would (probably?) affect various\nother places.\n\nIt's reproducible quite far back (a couple thousand commits, at least),\nso it does not seem like caused by a recent commit either.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 15 Jan 2019 03:07:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "strange valgrind failures (again)"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-15 03:07:10 +0100, Tomas Vondra wrote:\n> I've started observing funny valgrind failures on Fedora 28, possibly\n> after upgrading from 3.14.0-1 to 3.14.0-7 a couple of days ago. This\n> time it does not seem like platform-specific issues, though - the\n> failures all look like this:\n\nAny chance you're compiling without USE_VALGRIND defined? IIRC these are\nprecisely what the VALGRIND_MAKE_MEM_DEFINED calls in inval.c are\nintended to fight:\n\t/*\n\t * Define padding bytes in SharedInvalidationMessage structs to be\n\t * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by\n\t * multiple processes, will cause spurious valgrind warnings about\n\t * undefined memory being used. That's because valgrind remembers the\n\t * undefined bytes from the last local process's store, not realizing that\n\t * another process has written since, filling the previously uninitialized\n\t * bytes\n\t */\n\tVALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 14 Jan 2019 18:11:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: strange valgrind failures (again)"
},
{
"msg_contents": "On 1/15/19 3:11 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-01-15 03:07:10 +0100, Tomas Vondra wrote:\n>> I've started observing funny valgrind failures on Fedora 28, possibly\n>> after upgrading from 3.14.0-1 to 3.14.0-7 a couple of days ago. This\n>> time it does not seem like platform-specific issues, though - the\n>> failures all look like this:\n> \n> Any chance you're compiling without USE_VALGRIND defined? IIRC these are\n> precisely what the VALGRIND_MAKE_MEM_DEFINED calls in inval.c are\n> intended to fight:\n> \t/*\n> \t * Define padding bytes in SharedInvalidationMessage structs to be\n> \t * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by\n> \t * multiple processes, will cause spurious valgrind warnings about\n> \t * undefined memory being used. That's because valgrind remembers the\n> \t * undefined bytes from the last local process's store, not realizing that\n> \t * another process has written since, filling the previously uninitialized\n> \t * bytes\n> \t */\n> \tVALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));\n> \n> \n\n... the bang you might have just heard was me facepalming\n\nYes, I've been compiling without USE_VALGRIND, because I've been\nrebuilding using a command from shell history and the command-line grew\na bit too long to notice that.\n\nAnyway, I find it interesting that valgrind notices this particular\nplace and not the other places, and that it only starts happening after\na couple of minutes of running the regression tests (~5 minutes or so).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 15 Jan 2019 03:41:34 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: strange valgrind failures (again)"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-15 03:41:34 +0100, Tomas Vondra wrote:\n> On 1/15/19 3:11 AM, Andres Freund wrote:\n> > On 2019-01-15 03:07:10 +0100, Tomas Vondra wrote:\n> >> I've started observing funny valgrind failures on Fedora 28, possibly\n> >> after upgrading from 3.14.0-1 to 3.14.0-7 a couple of days ago. This\n> >> time it does not seem like platform-specific issues, though - the\n> >> failures all look like this:\n> > \n> > Any chance you're compiling without USE_VALGRIND defined? IIRC these are\n> > precisely what the VALGRIND_MAKE_MEM_DEFINED calls in inval.c are\n> > intended to fight:\n> > \t/*\n> > \t * Define padding bytes in SharedInvalidationMessage structs to be\n> > \t * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by\n> > \t * multiple processes, will cause spurious valgrind warnings about\n> > \t * undefined memory being used. That's because valgrind remembers the\n> > \t * undefined bytes from the last local process's store, not realizing that\n> > \t * another process has written since, filling the previously uninitialized\n> > \t * bytes\n> > \t */\n> > \tVALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));\n> > \n> > \n> \n> ... the bang you might have just heard was me facepalming\n\nHeh ;)\n\n\n> Anyway, I find it interesting that valgrind notices this particular\n> place and not the other places, and that it only starts happening after\n> a couple of minutes of running the regression tests (~5 minutes or so).\n\nIIRC you basically need to fill the space for sinvals for this to\nmatter, and individual backends need to be old enough to have previously\nused the same space. So it's not that easy to trigger. I don't think\nwe needed many other such tricks to make valgrind work / other things\nlike this have been solved via valgrind.supp, so it's not that\nsurprising that you didn't find anything else...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 14 Jan 2019 18:46:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: strange valgrind failures (again)"
}
] |
[
{
"msg_contents": "My nbtree patch [1] needs to call index_getprocinfo() with an\nexclusive buffer lock held during a leaf page split. This has an\nundetectable self-deadlock (LWLock self-deadlock) risk: a syscache\nlookup against pg_proc might have a catcache miss, ending with an\nindex scan that needs to access the very same buffer. That's not\nacceptable.\n\nThere is very similar code to this in SP-GiST: there is a\nindex_getprocinfo() call within doPickSplit(), to get the user-defined\nmethod for a split (SPGIST_PICKSPLIT_PROC). My nbtree patch builds a\nnew insertion scankey to determine how many attributes we can safely\ntruncate away in new pivot tuples -- it would be tricky to do this\nlookup outside of the split function. I suppose that it's okay to do\nthis in SP-GiST without special care because there cannot be an\nSP-GiST index on a system catalog. I'll need to do something else\nabout it given that I'm doing this within nbtree, though -- I don't\nwant to complicate the code that deals with insertion scankeys to make\nthis work.\n\nHere is a strategy that fixes the problem without complicating matters\nfor nbtree: It should be safe if I make a point of using a special\ncomparator (the bitwise one that we already use in other contexts in\nthe patch) with system catalog indexes. We know that they cannot be of\ntypes that have a varlena header + typstorage != 'p', which ensures\nthat there are no cases where bitwise equality fails to be a reliable\nindicator of opclass equality (e.g. there are no cases like numeric\ndisplay scale). We could make sure that this requirement isn't\nviolated in the future by adding a pg_index test to opr_sanity.sql,\nlimiting system catalog indexes to opclasses that are known-safe for\nthe bitwise comparator.\n\nDoes that seem sane?\n\n[1] https://commitfest.postgresql.org/21/1787/\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 14 Jan 2019 18:59:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Safely calling index_getprocinfo() while holding an nbtree exclusive\n buffer lock"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> My nbtree patch [1] needs to call index_getprocinfo() with an\n> exclusive buffer lock held during a leaf page split.\n\nI think you should stop right there and ask why. Surely that info\ncan be obtained before starting the operation? Quite aside from the\ndeadlock hazard, I do not think holding an exclusive buffer lock\nfor long enough to go consult a system catalog will be acceptable\nfrom a performance/concurrency standpoint.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 14 Jan 2019 22:12:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Safely calling index_getprocinfo() while holding an nbtree\n exclusive buffer lock"
},
{
"msg_contents": "On Mon, Jan 14, 2019 at 7:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think you should stop right there and ask why. Surely that info\n> can be obtained before starting the operation?\n\n*Thinks some more*\n\nUh, I'm already telling the same _bt_truncate() code path that it is\nbeing called from a CREATE INDEX, allowing it to avoid accessing the\nmetapage. I now think that it would be perfectly acceptable to just\npass down the insertion scan key for the tuple that caused the split,\ninstead of that build bool, and handle both deadlock issues\n(index_getprocinfo() hazard and metapage hazard) that way instead.\nHeikki expressed some concerns about the way the patch accesses the\nmetapage already.\n\nI jumped the gun with this catalog index business. Clearly I'd be much\nbetter off avoiding *all* new buffer lock protocol stuff by getting\nboth pieces of information up-front -- for some reason I thought that\nthat would be harder than it now appears.\n\nThanks\n--\nPeter Geoghegan\n\n",
"msg_date": "Mon, 14 Jan 2019 20:02:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Safely calling index_getprocinfo() while holding an nbtree\n exclusive buffer lock"
}
] |
[
{
"msg_contents": "current_logfiles is a meta data file, that stores the current log writing\nfile, and this file\npresents in the data directory. This file doesn't follow the group access\nmode set at\nthe initdb time, but it follows the log_file_mode permissions.\n\nwithout group access permissions, backup with group access can lead to\nfailure.\nAttached patch fix the problem.\n\ncomments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 15 Jan 2019 15:08:41 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Jan 15, 2019 at 03:08:41PM +1100, Haribabu Kommi wrote:\n> current_logfiles is a meta data file, that stores the current log writing\n> file, and this file presents in the data directory. This file\n> doesn't follow the group access mode set at the initdb time, but it\n> follows the log_file_mode permissions.\n> \n> Without group access permissions, backup with group access can lead to\n> failure. Attached patch fix the problem.\n\ninitdb enforces log_file_mode to 0640 when using the group mode, still\nif one enforces the parameter value then current_logfiles would just\nstick with it. This is not really user-friendly. This impacts also\nnormal log files as these get included in base backups if the log path\nis within the data folder (not everybody uses an absolute path out of\nthe data folder for the logs).\n\nOne way to think about this is that we may want to worry also about\nnormal log files and document that one had better be careful with the\nsetting of log_file_mode? Still, as we are talking about a file\naiming at storing meta-data for log files, something like what you\nsuggest can make sense.\n\nWhen discussing about pg_current_logfile(), I raised the point about\nnot including as well in base backups which would also address the\nproblem reported here. However we decided to keep it because it can\nbe helpful to know what's the last log file associated to a base\nbackup for debugging purposes:\nhttps://www.postgresql.org/message-id/50b58f25-ab07-f6bd-7a68-68f29f214ce9@dalibo.com\n\nInstead of what you are proposing, why not revisiting that and just\nexclude the file from base backups. I would be in favor of just doing\nthat instead of switching the file's permission from log_file_mode to\npg_file_create_mode.\n--\nMichael",
"msg_date": "Tue, 15 Jan 2019 14:15:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Jan 15, 2019 at 4:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jan 15, 2019 at 03:08:41PM +1100, Haribabu Kommi wrote:\n> > current_logfiles is a meta data file, that stores the current log writing\n> > file, and this file presents in the data directory. This file\n> > doesn't follow the group access mode set at the initdb time, but it\n> > follows the log_file_mode permissions.\n> >\n> > Without group access permissions, backup with group access can lead to\n> > failure. Attached patch fix the problem.\n>\n> initdb enforces log_file_mode to 0640 when using the group mode, still\n> if one enforces the parameter value then current_logfiles would just\n> stick with it. This is not really user-friendly. This impacts also\n> normal log files as these get included in base backups if the log path\n> is within the data folder (not everybody uses an absolute path out of\n> the data folder for the logs).\n>\n\nwe got this problem when the log_file_mode is set 0600 but the database\nfile are with group access permissions. In our scenario, the log files are\noutside the data folder, so we faced the problem with current_logfiles\nfile.\n\n\n> One way to think about this is that we may want to worry also about\n> normal log files and document that one had better be careful with the\n> setting of log_file_mode? Still, as we are talking about a file\n> aiming at storing meta-data for log files, something like what you\n> suggest can make sense.\n>\n\nYes, with log_file_mode less than 0640 containing the log files inside\nthe data directory can leads to backup failure. 
Yes, providing extra\ninformation about group access when log_file_mode is getting chosen.\n\nAnother option is how about not letting user to choose less than 0640\nwhen the group access mode is enabled?\n\n\n\n> When discussing about pg_current_logfile(), I raised the point about\n> not including as well in base backups which would also address the\n> problem reported here. However we decided to keep it because it can\n> be helpful to know what's the last log file associated to a base\n> backup for debugging purposes:\n>\n> https://www.postgresql.org/message-id/50b58f25-ab07-f6bd-7a68-68f29f214ce9@dalibo.com\n>\n> Instead of what you are proposing, why not revisiting that and just\n> exclude the file from base backups. I would be in favor of just doing\n> that instead of switching the file's permission from log_file_mode to\n> pg_file_create_mode.\n>\n\nI am not sure how much useful having the details of the log file in the\nbackup.\nIt may be useful when there is any problem with backup.\n\nExcluding the file in the backup can solve the problem of backup by an\nunprivileged user. Is there any scenarios it can cause problems if it\ndoesn't follow the group access mode?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Tue, Jan 15, 2019 at 4:15 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Jan 15, 2019 at 03:08:41PM +1100, Haribabu Kommi wrote:\n> current_logfiles is a meta data file, that stores the current log writing\n> file, and this file presents in the data directory. This file\n> doesn't follow the group access mode set at the initdb time, but it\n> follows the log_file_mode permissions.\n> \n> Without group access permissions, backup with group access can lead to\n> failure. Attached patch fix the problem.\n\ninitdb enforces log_file_mode to 0640 when using the group mode, still\nif one enforces the parameter value then current_logfiles would just\nstick with it. This is not really user-friendly. 
This impacts also\nnormal log files as these get included in base backups if the log path\nis within the data folder (not everybody uses an absolute path out of\nthe data folder for the logs).we got this problem when the log_file_mode is set 0600 but the databasefile are with group access permissions. In our scenario, the log files areoutside the data folder, so we faced the problem with current_logfilesfile. \nOne way to think about this is that we may want to worry also about\nnormal log files and document that one had better be careful with the\nsetting of log_file_mode? Still, as we are talking about a file\naiming at storing meta-data for log files, something like what you\nsuggest can make sense.Yes, with log_file_mode less than 0640 containing the log files insidethe data directory can leads to backup failure. Yes, providing extrainformation about group access when log_file_mode is getting chosen.Another option is how about not letting user to choose less than 0640when the group access mode is enabled? \nWhen discussing about pg_current_logfile(), I raised the point about\nnot including as well in base backups which would also address the\nproblem reported here. However we decided to keep it because it can\nbe helpful to know what's the last log file associated to a base\nbackup for debugging purposes:\nhttps://www.postgresql.org/message-id/50b58f25-ab07-f6bd-7a68-68f29f214ce9@dalibo.com\n\nInstead of what you are proposing, why not revisiting that and just\nexclude the file from base backups. I would be in favor of just doing\nthat instead of switching the file's permission from log_file_mode to\npg_file_create_mode.I am not sure how much useful having the details of the log file in the backup.It may be useful when there is any problem with backup.Excluding the file in the backup can solve the problem of backup by anunprivileged user. 
Are there any scenarios where it can cause problems if it\ndoesn't follow the group access mode?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 15 Jan 2019 19:55:35 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
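For context on the mismatch reported above: a cluster initialized with initdb --allow-group-access creates data files as 0640, while the syslogger writes current_logfiles under log_file_mode (default 0600). A minimal sketch of why a group-read backup then fails (illustrative Python, not PostgreSQL code; only the file names and the 0640/0600 modes are taken from the thread):

```python
import os
import stat
import tempfile

# Illustrative stand-in for a data directory of a group-access cluster.
datadir = tempfile.mkdtemp()

pg_file_create_mode = 0o640   # mode used for data files with --allow-group-access
log_file_mode = 0o600         # GUC default, applied to current_logfiles

for name, mode in [("PG_VERSION", pg_file_create_mode),
                   ("current_logfiles", log_file_mode)]:
    path = os.path.join(datadir, name)
    with open(path, "w") as f:
        f.write("...\n")
    os.chmod(path, mode)

def group_readable(path):
    """True if the group-read bit is set on the file."""
    return bool(os.stat(path).st_mode & stat.S_IRGRP)

# A group-only backup user can read the data files but not current_logfiles,
# which is what makes a group-read base backup fail on that one file.
print(group_readable(os.path.join(datadir, "PG_VERSION")))        # True
print(group_readable(os.path.join(datadir, "current_logfiles")))  # False
```

The single file without the group-read bit is enough to abort a file-by-file copy running as a group-only user, which is the failure mode the patch addresses.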
{
"msg_contents": "Haribabu Kommi <kommi.haribabu@gmail.com> writes:\n> Excluding the file in the backup can solve the problem of backup by an\n> unprivileged user. Is there any scenarios it can cause problems if it\n> doesn't follow the group access mode?\n\nThe point of this file, as I understood it, was to allow someone who's\nallowed to read the log files to find out which one is the latest. It\nmakes zero sense for it to have different permissions from the log files,\nbecause doing that would break its only use-case.\n\nI am wondering what is the use-case for a backup arrangement that's so\nfragile it can't cope with varying permissions in the data directory.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 15 Jan 2019 09:47:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "I wrote:\n> Haribabu Kommi <kommi.haribabu@gmail.com> writes:\n>> Excluding the file in the backup can solve the problem of backup by an\n>> unprivileged user. Is there any scenarios it can cause problems if it\n>> doesn't follow the group access mode?\n\n> The point of this file, as I understood it, was to allow someone who's\n> allowed to read the log files to find out which one is the latest. It\n> makes zero sense for it to have different permissions from the log files,\n> because doing that would break its only use-case.\n\nOn reflection, maybe the problem is not that we're giving the file\nthe wrong permissions, but that we're putting it in the wrong place?\nThat is, seems like it should be in the logfile directory not the\ndata directory. That would certainly simplify the intended use-case,\nand it would fix this complaint too.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 15 Jan 2019 10:53:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Jan 15, 2019 at 10:53:30AM -0500, Tom Lane wrote:\n> On reflection, maybe the problem is not that we're giving the file\n> the wrong permissions, but that we're putting it in the wrong place?\n> That is, seems like it should be in the logfile directory not the\n> data directory. That would certainly simplify the intended use-case,\n> and it would fix this complaint too.\n\nYeah, thinking more on this one, using different permissions for this file\nthan for the log files makes little sense, so what you propose\nhere seems like a sensible way to do things. Even if we exclude the\nfile from native BASE_BACKUP this would not solve the case of custom\nbackup solutions doing their own copy of things, when they rely on\ngroup-read permissions. This would not completely solve the problem\nanyway if log files are in the data folder, but it would address the\ncase where the log files are in an absolute path out of the data\nfolder.\n\nI am adding Gilles, who implemented current_logfiles, in CC for his\ninput.\n--\nMichael",
"msg_date": "Wed, 16 Jan 2019 11:08:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
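The "exclude it from base backups" alternative discussed in the messages above amounts to adding current_logfiles to the server's exclusion list, the same mechanism that already skips files such as postmaster.pid and postmaster.opts. A rough sketch of the effect, with an illustrative filter rather than the actual basebackup.c code:

```python
import os

# postmaster.pid and postmaster.opts really are skipped by base backups;
# adding current_logfiles here is the alternative under discussion, not
# current server behavior.
EXCLUDED_FILES = {"postmaster.pid", "postmaster.opts", "current_logfiles"}

def files_to_copy(paths):
    """Return the relative paths a base backup would include."""
    return [p for p in paths if os.path.basename(p) not in EXCLUDED_FILES]

cluster = ["PG_VERSION", "global/pg_control", "current_logfiles",
           "postmaster.pid", "base/1/1259"]
print(files_to_copy(cluster))  # ['PG_VERSION', 'global/pg_control', 'base/1/1259']
```

As the thread notes, this only helps the native BASE_BACKUP path; custom backup tools copying the data directory themselves would still trip over the file's permissions.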
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Tue, Jan 15, 2019 at 10:53:30AM -0500, Tom Lane wrote:\n> > On reflection, maybe the problem is not that we're giving the file\n> > the wrong permissions, but that we're putting it in the wrong place?\n> > That is, seems like it should be in the logfile directory not the\n> > data directory. That would certainly simplify the intended use-case,\n> > and it would fix this complaint too.\n> \n> Yeah, thinking more on this one using for this file different\n> permissions than the log files makes little sense, so what you propose\n> here seems like a sensible thing to do things. Even if we exclude the\n> file from native BASE_BACKUP this would not solve the case of custom\n> backup solutions doing their own copy of things, when they rely on\n> group-read permissions. This would not solve completely the problem\n> anyway if log files are in the data folder, but it would address the\n> case where the log files are in an absolute path out of the data\n> folder.\n\nActually, I agree with the initial patch on the basis that this file\nthat's being created (which I'm honestly a bit amazed we're doing\nit this way; certainly seems rather grotty to me) is surely not an actual\n*log* file and therefore using logfile_open() to open it doesn't seem\nquite right. I would have hoped for a way to pass this information that\ndidn't involve a file at all, but I'll assume that was discussed already\nand good reasons put forth as to why we can't avoid it.\n\nI'm not really sure putting it into the logfile directory is such a hot\nidea as users might have set up external log file rotation of files in\nthat directory. Of course, in that case they'd probably signal PG right\nafterwards and PG would go write out a new file, but it still seems\npretty awkward. 
I'm not terribly against solving this issue that way\neither though, but I tend to think the originally proposed patch is more\nsensible.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 16 Jan 2019 13:22:12 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Michael Paquier (michael@paquier.xyz) wrote:\n>> On Tue, Jan 15, 2019 at 10:53:30AM -0500, Tom Lane wrote:\n>>> On reflection, maybe the problem is not that we're giving the file\n>>> the wrong permissions, but that we're putting it in the wrong place?\n\n> I'm not really sure putting it into the logfile directory is such a hot\n> idea as users might have set up external log file rotation of files in\n> that directory. Of course, in that case they'd probably signal PG right\n> afterwards and PG would go write out a new file, but it still seems\n> pretty awkward. I'm not terribly against solving this issue that way\n> either though, but I tend to think the originally proposed patch is more\n> sensible.\n\nI dunno, I think that the current design was made without any thought\nwhatsoever about the log-files-outside-the-data-directory case. If\nyou're trying to set things up that way, it's because you want to give\nlogfile read access to people who shouldn't be able to look into the\ndata directory proper. 
That makes current_logfiles pretty useless\nto such people, as it's now designed.\n\nNow, if the expectation is that current_logfiles is just an internal\nworking file that users shouldn't access directly, then this argument\nis wrong --- but then why is it documented in user-facing docs?\n\nIf we're going to accept the patch as-is, then it logically follows\nthat we should de-document current_logfiles, because we're taking the\nposition that it's an internal temporary file not meant for user access.\n\nI don't really believe your argument about log rotation: a rotator\nwould presumably be configured either to pay attention to file name\npatterns (which current_logfiles wouldn't match) or to file age\n(which current_logfiles shouldn't satisfy either, since it's always\nrewritten when we switch logfiles).\n\nIf we wanted to worry about that case, a possible solution is to make the\ncurrent_logfiles pathname user-configurable so it could be put in some\nthird directory. But I think that adds more complexity than is justified\n--- and not just for us, but for programs trying to find and use\ncurrent_logfiles.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 13:39:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Michael Paquier (michael@paquier.xyz) wrote:\n> >> On Tue, Jan 15, 2019 at 10:53:30AM -0500, Tom Lane wrote:\n> >>> On reflection, maybe the problem is not that we're giving the file\n> >>> the wrong permissions, but that we're putting it in the wrong place?\n> \n> > I'm not really sure putting it into the logfile directory is such a hot\n> > idea as users might have set up external log file rotation of files in\n> > that directory. Of course, in that case they'd probably signal PG right\n> > afterwards and PG would go write out a new file, but it still seems\n> > pretty awkward. I'm not terribly against solving this issue that way\n> > either though, but I tend to think the originally proposed patch is more\n> > sensible.\n> \n> I dunno, I think that the current design was made without any thought\n> whatsoever about the log-files-outside-the-data-directory case. If\n> you're trying to set things up that way, it's because you want to give\n> logfile read access to people who shouldn't be able to look into the\n> data directory proper. That makes current_logfiles pretty useless\n> to such people, as it's now designed.\n\n... or you just want to move the log files to a more sensible location\nthan the data directory. The justification for log_file_mode existing\nis because you might want to have log files with different privileges,\nbut that's quite a different thing.\n\n> Now, if the expectation is that current_logfiles is just an internal\n> working file that users shouldn't access directly, then this argument\n> is wrong --- but then why is it documented in user-facing docs?\n\nI really couldn't say why it's documented in the user-facing docs, and\nfor my 2c I don't really think it should be- there's a function to get\nthat information. 
Sprinkling the data directory with files for users to\naccess directly doesn't exactly fit my view of what a good API looks\nlike.\n\nThe fact that there isn't any discussion about where that file actually\nlives does make me suspect you're right that log files outside the data\ndirectory wasn't really contemplated.\n\n> If we're going to accept the patch as-is, then it logically follows\n> that we should de-document current_logfiles, because we're taking the\n> position that it's an internal temporary file not meant for user access.\n\n... and hopefully we'd get rid of it one day entirely.\n\n> I don't really believe your argument about log rotation: a rotator\n> would presumably be configured either to pay attention to file name\n> patterns (which current_logfiles wouldn't match) or to file age\n> (which current_logfiles shouldn't satisfy either, since it's always\n> rewritten when we switch logfiles).\n\nYes, a good pattern would avoid picking up on this file and most are\nconfigured that way (though they are maybe not as specific as you might\nthink- the default here is just /var/log/postgresql/*.log).\n\n> If we wanted to worry about that case, a possible solution is to make the\n> current_logfiles pathname user-configurable so it could be put in some\n> third directory. But I think that adds more complexity than is justified\n> --- and not just for us, but for programs trying to find and use\n> current_logfiles.\n\nI'd much rather move to get rid of that file than increase its\nvisibility- programs should be using the provided function.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 16 Jan 2019 13:49:54 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
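For reference, the file under debate is tiny: the documentation describes one line per enabled log format, with the format name and the log file path separated by a space. A hedged sketch of how a tool that reads it directly (rather than calling pg_current_logfile(), as recommended above) would parse it:

```python
def parse_current_logfiles(text):
    """Parse current_logfiles content: one 'format path' pair per line.

    The two-column layout matches the documented file format; everything
    else here (function name, sample paths) is illustrative.
    """
    result = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        fmt, _, path = line.partition(" ")
        result[fmt] = path
    return result

sample = ("stderr log/postgresql-2019-01-15_000000.log\n"
          "csvlog log/postgresql-2019-01-15_000000.csv\n")
print(parse_current_logfiles(sample))
```

A client written this way depends on both the file's location and its permissions, which is exactly why the thread keeps circling back to where the file lives and who can read it.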
{
"msg_contents": "On Thu, Jan 17, 2019 at 5:49 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > * Michael Paquier (michael@paquier.xyz) wrote:\n> > >> On Tue, Jan 15, 2019 at 10:53:30AM -0500, Tom Lane wrote:\n> > >>> On reflection, maybe the problem is not that we're giving the file\n> > >>> the wrong permissions, but that we're putting it in the wrong place?\n> >\n> > > I'm not really sure putting it into the logfile directory is such a hot\n> > > idea as users might have set up external log file rotation of files in\n> > > that directory. Of course, in that case they'd probably signal PG\n> right\n> > > afterwards and PG would go write out a new file, but it still seems\n> > > pretty awkward. I'm not terribly against solving this issue that way\n> > > either though, but I tend to think the originally proposed patch is\n> more\n> > > sensible.\n> >\n> > I dunno, I think that the current design was made without any thought\n> > whatsoever about the log-files-outside-the-data-directory case. If\n> > you're trying to set things up that way, it's because you want to give\n> > logfile read access to people who shouldn't be able to look into the\n> > data directory proper. That makes current_logfiles pretty useless\n> > to such people, as it's now designed.\n>\n> ... or you just want to move the log files to a more sensible location\n> than the data directory. 
The justification for log_file_mode existing\n> is because you might want to have log files with different privileges,\n> but that's quite a different thing.\n>\n\nThanks for sharing your opinions.\n\nThe current_logfiles file stores metadata about the log files currently\nbeing written; it is different from the log files themselves, so giving it\nthe log file permissions may not be correct.\n\n> Now, if the expectation is that current_logfiles is just an internal\n> working file that users shouldn't access directly, then this argument\n> is wrong --- but then why is it documented in user-facing docs?\n>\n> I really couldn't say why it's documented in the user-facing docs, and\n> for my 2c I don't really think it should be- there's a function to get\n> that information. Sprinkling the data directory with files for users to\n> access directly doesn't exactly fit my view of what a good API looks\n> like.\n>\n> The fact that there isn't any discussion about where that file actually\n> lives does make me suspect you're right that log files outside the data\n> directory wasn't really contemplated.\n>\n\nI can only think of the user reading this file directly when the server\nis not available, but I can't find any scenario where that is required.\n\n\n\n> > If we're going to accept the patch as-is, then it logically follows\n> > that we should de-document current_logfiles, because we're taking the\n> > position that it's an internal temporary file not meant for user access.\n>\n> ... and hopefully we'd get rid of it one day entirely.\n>\n\nIf there is no use for it when the server is offline, it would be better to\nremove the file and provide an alternative way to get the current log file\nname.\n\nWith group access mode, the default value of log_file_mode changes; the\nattached patch reflects this in the docs.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 18 Jan 2019 15:08:15 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "Greetings,\n\n* Haribabu Kommi (kommi.haribabu@gmail.com) wrote:\n> On Thu, Jan 17, 2019 at 5:49 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Now, if the expectation is that current_logfiles is just an internal\n> > > working file that users shouldn't access directly, then this argument\n> > > is wrong --- but then why is it documented in user-facing docs?\n> >\n> > I really couldn't say why it's documented in the user-facing docs, and\n> > for my 2c I don't really think it should be- there's a function to get\n> > that information. Sprinkling the data directory with files for users to\n> > access directly doesn't exactly fit my view of what a good API looks\n> > like.\n> >\n> > The fact that there isn't any discussion about where that file actually\n> > lives does make me suspect you're right that log files outside the data\n> > directory wasn't really contemplated.\n> \n> I can only think of reading this file by the user directly when the server\n> is not available, but I don't find any scenario where that is required?\n\nYeah, I agree, and if the server isn't running then there really isn't\na \"current\" logfile, as defined, since the server isn't writing to any\nparticular log file.\n\n> > > If we're going to accept the patch as-is, then it logically follows\n> > > that we should de-document current_logfiles, because we're taking the\n> > > position that it's an internal temporary file not meant for user access.\n> >\n> > ... and hopefully we'd get rid of it one day entirely.\n> \n> If there is no use of it when server is offline, it will be better to\n> remove that\n> file with an alternative to provide the current log file name.\n\nIt'd probably be good to give folks an opportunity to voice their\nopinion regarding their use-case for the file existing as it does and\nbeing documented as it is. 
At first blush, to me anyway, it seems like\nmaybe this was a case of \"over-documenting\" of the feature by including\nin user-facing documentation something that was really there for\ninternal reasons, but I could certainly be wrong and maybe there's a\nreason why it's really necessary to have the file around for users.\n\n> With group access mode, the default value of log_file_mode is changed,\n> Attached patch reflects the same in docs.\n\nYes, we should update the documentation in this regard, though it's\nreally an independent thing as that documentation should have been\nupdated in the original group-access patch, so I'll see about fixing\nit and back-patching it.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 18 Jan 2019 09:50:40 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 09:50:40AM -0500, Stephen Frost wrote:\n> It'd probably be good to give folks an opportunity to voice their\n> opinion regarding their use-case for the file existing as it does and\n> being documented as it is. At first blush, to me anyway, it seems like\n> maybe this was a case of \"over-documenting\" of the feature by including\n> in user-facing documentation something that was really there for\n> internal reasons, but I could certainly be wrong and maybe there's a\n> reason why it's really necessary to have the file around for users.\n\nIt's not only that. By keeping the file in its current location, we\ncan prevent base backups from working even if log files are out of the\ndata folder, which is not user-friendly, and I think that advanced\nusers of Postgres are careful enough to split log files and main data\nfolders into different partitions, without symlinks from the data\nfolder to the log location and with log_directory set to an absolute\npath, independent of the rest. So moving current_logfiles out of the\ndata folder to the base location of the log paths makes quite some\nsense in my opinion for consistency.\n\nUsing a new GUC to specify where current_logfiles should be located\ndoes not really justify the code complications in my opinion, and I'd\nthink that we should allow users with log file access to still look at\nit, even manually and connected from the host as this can be useful\nfor debugging purposes (sometimes clocks of systems get changed as\nthey are not all the time going through ntpd).\n--\nMichael",
"msg_date": "Sat, 19 Jan 2019 10:49:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Fri, Jan 18, 2019 at 09:50:40AM -0500, Stephen Frost wrote:\n> > It'd probably be good to give folks an opportunity to voice their\n> > opinion regarding their use-case for the file existing as it does and\n> > being documented as it is. At first blush, to me anyway, it seems like\n> > maybe this was a case of \"over-documenting\" of the feature by including\n> > in user-facing documentation something that was really there for\n> > internal reasons, but I could certainly be wrong and maybe there's a\n> > reason why it's really necessary to have the file around for users.\n> \n> It's not only that. By keeping the file in its current location, we\n> can prevent base backups to work even if logs files are out of the\n> data folder, which is rather user-friendly, and I think that advanced\n> users of Postgres are careful enough to split log files and main data\n> folders into different partitions, without symlinks from the data\n> folder to the log location and with log_directory set to an absolute\n> path, independent of the rest. So moving current_logfiles out of the\n> data folder to the base location of the log paths makes quite some\n> sense in my opinion for consistency.\n\nAs discussed up-thread, if we change current_logfiles to work the way\nthe rest of our data files do, then base backups would work fine with\nthe file in its current location. 
I don't buy how having that file in\nthe logfiles directory is more \"consistent\" with anything either- it's\ncertainly not a log file itself.\n\n> Using a new GUC to specify where current_logfiles should be located\n> does not really justify the code complications in my opinion, and I'd\n> think that we should allow users with log file access to still look at\n> it, even manually and connected from the host as this can be useful\n> for debugging purposes (sometimes clocks of systems get changed as\n> they are not all the time going throuhg ntpd).\n\nI agree that we don't need a new GUC for this. I also don't really see\nthe use-case for this file being directly exposed to users- we have a\nfunction specifically for this information and that's generally how\nusers should expect to get information like this- or like what the log\ndirectory *is* to begin with, or where other files reside... I sure hope\nthat we aren't suggesting that asking users to write a parser for\npostgresql.conf, with include directories and files, able to also handle\npostgresql.auto.conf, is somehow user-friendly.\n\nThanks!\n\nStephen",
"msg_date": "Sat, 19 Jan 2019 10:41:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 09:50:40AM -0500, Stephen Frost wrote:\n> Yes, we should update the documentation in this regard, though it's\n> really an independent thing as that documentation should have been\n> updated in the original group-access patch, so I'll see about fixing\n> it and back-patching it.\n\nStephen, could you apply Hari's patch then? I am not sure what the\nconsensus is, but documenting the restriction is the minimum we can\ndo.\n\n- The default permissions are <literal>0600</literal>, meaning only the\n- server owner can read or write the log files. The other commonly\n- useful setting is <literal>0640</literal>, allowing members of the owner's\n- group to read the files. Note however that to make use of such a\n- setting, you'll need to alter <xref linkend=\"guc-log-directory\"/> to\n- store the files somewhere outside the cluster data directory. In\n- any case, it's unwise to make the log files world-readable, since\n- they might contain sensitive data.\n+ The default permissions are either <literal>0600</literal>, meaning only the\n+ server owner can read or write the log files or <literal>0640</literal>, that\n+ allows any user in the same group can read the log files, based on the new\n+ cluster created with <option>--allow-group-access</option> option of <command>initdb</command>\n+ command. Note however that to make use of any setting other than default,\n+ you'll need to alter <xref linkend=\"guc-log-directory\"/> to store the files\n+ somewhere outside the cluster data directory.\n\nI would formulate that differently, by just adding an extra paragraph\nto mention that using <literal>0640</literal> is recommended to be\ncompatible with initdb's --allow-group-access instead of sticking it\nin the middle of the existing paragraph.\n--\nMichael",
"msg_date": "Fri, 1 Feb 2019 17:22:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Feb 1, 2019 at 7:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jan 18, 2019 at 09:50:40AM -0500, Stephen Frost wrote:\n> > Yes, we should update the documentation in this regard, though it's\n> > really an independent thing as that documentation should have been\n> > updated in the original group-access patch, so I'll see about fixing\n> > it and back-patching it.\n>\n> Stephen, could you apply Hari's patch then? I am not sure what the\n> consensus is, but documenting the restriction is the minimum we can\n> do.\n>\n> - The default permissions are <literal>0600</literal>, meaning only the\n> - server owner can read or write the log files. The other commonly\n> - useful setting is <literal>0640</literal>, allowing members of the\n> owner's\n> - group to read the files. Note however that to make use of such a\n> - setting, you'll need to alter <xref linkend=\"guc-log-directory\"/> to\n> - store the files somewhere outside the cluster data directory. In\n> - any case, it's unwise to make the log files world-readable, since\n> - they might contain sensitive data.\n> + The default permissions are either <literal>0600</literal>, meaning\n> only the\n> + server owner can read or write the log files or\n> <literal>0640</literal>, that\n> + allows any user in the same group can read the log files, based on\n> the new\n> + cluster created with <option>--allow-group-access</option> option of\n> <command>initdb</command>\n> + command. 
Note however that to make use of any setting other than\n> default,\n> + you'll need to alter <xref linkend=\"guc-log-directory\"/> to store the\n> files\n> + somewhere outside the cluster data directory.\n>\n> I would formulate that differently, by just adding an extra paragraph\n> to mention that using <literal>0640</literal> is recommended to be\n> compatible with initdb's --allow-group-access instead of sticking it\n> on the middle of the existing paragraph.\n>\n\nThanks for the review.\nI changed the log_file_mode doc patch as per your comment.\n\nHow about the attached?\n\nAnd regarding current_logfiles permissions, I feel this file should have\nthe permissions of the data directory files, as it is present in the data\ndirectory even though it stores information about the log files, until this\nfile is completely removed in favor of another approach to store the log\nfile details.\n\nI am not sure whether this has already been discussed. How about\nusing shared memory to store the log file names, so that we don't need\nthis file?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Mon, 4 Feb 2019 12:16:56 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Mon, Feb 4, 2019 at 12:16 PM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> And regarding current_logfiles permissions, I feel this file should have\n> permissions of data directory files as it is present in the data directory\n> whether it stores the information of log file, until this file is\n> completely\n> removed with another approach to store the log file details.\n>\n> I am not sure whether this has been already discussed or not? How about\n> using shared memory to store the log file names? So that we don't need\n> of this file?\n>\n\nI checked the code to see why current_logfiles is not implemented in shared\nmemory, and found that the syslogger doesn't attach to the shared memory of\nthe postmaster. To support storing current_logfiles in shared memory, the\nsyslogger process would also need to attach to the shared memory; this seems\nto be a new infrastructure change.\n\nIn case we are not going to change the permissions of the file to group\naccess mode and instead stick with log_file_mode, I tried the attached patch\nthat moves current_logfiles to the log_directory. The only drawback of this\napproach is that if the user changes log_directory, current_logfiles remains\nin the old log_directory. I don't see that as a problem.\n\nComments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 26 Feb 2019 12:22:53 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 12:22:53PM +1100, Haribabu Kommi wrote:\n> I checked the code why the current_logfiles is not implemented as\n> shared memory and found that the current syslogger doesn't attach to\n> the shared memory of the postmaster. To support storing the\n> current_logfiles in shared memory, the syslogger process also needs\n> to attach to the shared memory, this seems to be a new\n> infrastructure change.\n\nI don't think you can do that anyway and we should not do it. Shared\nmemory can be reset after a backend exits abnormally, but the\nsyslogger lives across that. What you sent upthread to improve the\ndocumentation is in my opinion sufficient:\nhttps://www.postgresql.org/message-id/CAJrrPGe-v2_LMFD9nHrBEjJy3vVOKJwY3w_h+Fs2nxCJg3PbaA@mail.gmail.com\n\nI would not have split the paragraph you broke into two, but instead\njust add this part in-between:\n+ <para>\n+ Permissions <literal>0640</literal> are recommended to be compatible with\n+ <application>initdb</application> option <option>--allow-group-access</option>.\n+ </para>\nAny objections in doing that?\n--\nMichael",
"msg_date": "Tue, 12 Mar 2019 15:03:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 2:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Feb 26, 2019 at 12:22:53PM +1100, Haribabu Kommi wrote:\n> > I checked the code why the current_logfiles is not implemented as\n> > shared memory and found that the current syslogger doesn't attach to\n> > the shared memory of the postmaster. To support storing the\n> > current_logfiles in shared memory, the syslogger process also needs\n> > to attach to the shared memory, this seems to be a new\n> > infrastructure change.\n>\n> I don't think you can do that anyway and we should not do it. Shared\n> memory can be reset after a backend exits abnormally, but the\n> syslogger lives across that.\n\nI think we should do what Haribabu proposed originally. Moving\ncurrent_logfiles out of the data directory doesn't make sense,\nbecause:\n\n(1) If you're trying to find the log files, having a file that\ncontains their pathnames in the place where those files are does not\nhelp you. Having such a file in the known location, namely the data\ndirectory, does.\n\n(2) Someone might have logs from multiple PostgreSQL clusters in the\nsame external log directory, but there can only be one file named\ncurrent_logfiles.\n\n(3) Someone might store PostgreSQL log files in the same directory as\nnon-PostgreSQL log files, and having a file called current_logfiles\nfloating around will be confusingly ambiguous.\n\nOn the other hand, changing the file to have the same permissions as\neverything else in the data directory has basically no disadvantages.\nI agree with Stephen's analysis that a file containing the names of\nthe current log files is not itself a log file. 
Tom's idea that\nmaking the permissions consistent with everything else in the data\ndirectory would \"break its only use-case\" seems completely wrong.\nAnybody who has permission to read the log files but not the data\ndirectory will presumably hit the directory-level permissions on\n$PGDATA before the issue of the permissions on current_logfiles() per\nse become relevant, except in corner cases that I don't care about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 12 Mar 2019 16:08:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 04:08:53PM -0400, Robert Haas wrote:\n> Anybody who has permission to read the log files but not the data\n> directory will presumably hit the directory-level permissions on\n> $PGDATA before the issue of the permissions on current_logfiles() per\n> se become relevant, except in corner cases that I don't care about.\n\nSane deployments normally split the log directory and the main data\nfolder into separate partitions, and use an absolute path for\nlog_directory. So, FWIW, I can live with the original proposal as\nwell.\n--\nMichael",
"msg_date": "Thu, 14 Mar 2019 13:54:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 5:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Feb 26, 2019 at 12:22:53PM +1100, Haribabu Kommi wrote:\n> > I checked the code why the current_logfiles is not implemented as\n> > shared memory and found that the current syslogger doesn't attach to\n> > the shared memory of the postmaster. To support storing the\n> > current_logfiles in shared memory, the syslogger process also needs\n> > to attach to the shared memory, this seems to be a new\n> > infrastructure change.\n>\n> I don't think you can do that anyway and we should not do it. Shared\n> memory can be reset after a backend exits abnormally, but the\n> syslogger lives across that. What you sent upthread to improve the\n> documentation is in my opinion sufficient:\n>\n> https://www.postgresql.org/message-id/CAJrrPGe-v2_LMFD9nHrBEjJy3vVOKJwY3w_h+Fs2nxCJg3PbaA@mail.gmail.com\n>\n> I would not have split the paragraph you broke into two, but instead\n> just add this part in-between:\n> + <para>\n> + Permissions <literal>0640</literal> are recommended to be\n> compatible with\n> + <application>initdb</application> option\n> <option>--allow-group-access</option>.\n> + </para>\n> Any objections in doing that?\n>\n\nIf I remember correctly, in one of the mails, you mentioned that having a\nseparate\npara is better. Attached is the updated patch as per your suggestion.\n\nIMO, this update is just a recommendation to the user, and sometimes it is\nstill\npossible that there may be strict permissions for the log file even the\ndata directory\nis allowed for the group access. So I feel it is still better to update the\npermissions\nof the current_logfiles to the database files permissions than log file\npermissions.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 15 Mar 2019 18:51:37 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 06:51:37PM +1100, Haribabu Kommi wrote:\n> IMO, this update is just a recommendation to the user, and sometimes it is\n> still possible that there may be strict permissions for the log file\n> even the data directory is allowed for the group access. So I feel\n> it is still better to update the permissions of the current_logfiles\n> to the database files permissions than log file permissions.\n\nI was just reading again this thread, and the suggestions that\ncurrent_logfiles is itself not a log file is also a sensible\nposition. I was just looking at the patch that you sent at the top of\nthe thread here:\nhttps://www.postgresql.org/message-id/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com\n\nAnd actually it seems to me that you have a race condition in that\nstuff. I think that you had better use umask(), then fopen, and then\nonce again umask() to put back the previous permissions, removing the\nextra chmod() call.\n--\nMichael",
"msg_date": "Wed, 20 Mar 2019 14:33:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 4:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Mar 15, 2019 at 06:51:37PM +1100, Haribabu Kommi wrote:\n> > IMO, this update is just a recommendation to the user, and sometimes it\n> is\n> > still possible that there may be strict permissions for the log file\n> > even the data directory is allowed for the group access. So I feel\n> > it is still better to update the permissions of the current_logfiles\n> > to the database files permissions than log file permissions.\n>\n> I was just reading again this thread, and the suggestions that\n> current_logfiles is itself not a log file is also a sensible\n> position. I was just looking at the patch that you sent at the top of\n> the thread here:\n>\n> https://www.postgresql.org/message-id/CAJrrPGcEotF1P7AWoeQyD3Pqr-0xkQg_Herv98DjbaMj+naozw@mail.gmail.com\n>\n\nThanks for the review.\n\n\n> And actually it seems to me that you have a race condition in that\n> stuff. I think that you had better use umask(), then fopen, and then\n> once again umask() to put back the previous permissions, removing the\n> extra chmod() call.\n>\n\nChanged the patch to use umask() instead of chmod() according to\nyour suggestion.\n\nupdated patch attached.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 21 Mar 2019 12:41:16 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 12:41 PM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> On Wed, Mar 20, 2019 at 4:33 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> And actually it seems to me that you have a race condition in that\n>> stuff. I think that you had better use umask(), then fopen, and then\n>> once again umask() to put back the previous permissions, removing the\n>> extra chmod() call.\n>>\n>\n> Changed the patch to use umask() instead of chmod() according to\n> your suggestion.\n>\n> updated patch attached.\n>\n\nEarlier attached patch is wrong.\nCorrect patch attached. Sorry for the inconvenience.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 21 Mar 2019 12:52:14 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 12:52:14PM +1100, Haribabu Kommi wrote:\n> Earlier attached patch is wrong.\n\n- oumask = umask(pg_file_create_mode);\n+ oumask = umask(pg_mode_mask);\nIndeed that was wrong.\n\n> Correct patch attached. Sorry for the inconvenience.\n\nThis looks better for the umask setting, still it could be more\nsimple.\n\n #include <sys/time.h>\n-\n+#include \"common/file_perm.h\"\n #include \"lib/stringinfo.h\"\nNit: it is better for readability to keep an empty line between the\nsystem includes and the Postgres ones.\n\nA second thing, more important, is that you can reset umask just after\nopening the file, as attached. This way there is no need to reset the\numask in all the code paths leaving update_metainfo_datafile(). Does\nthat look fine to you?\n--\nMichael",
"msg_date": "Fri, 22 Mar 2019 10:23:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 12:24 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Mar 21, 2019 at 12:52:14PM +1100, Haribabu Kommi wrote:\n> > Earlier attached patch is wrong.\n>\n> - oumask = umask(pg_file_create_mode);\n> + oumask = umask(pg_mode_mask);\n> Indeed that was wrong.\n>\n> > Correct patch attached. Sorry for the inconvenience.\n>\n> This looks better for the umask setting, still it could be more\n> simple.\n>\n> #include <sys/time.h>\n> -\n> +#include \"common/file_perm.h\"\n> #include \"lib/stringinfo.h\"\n> Nit: it is better for readability to keep an empty line between the\n> system includes and the Postgres ones.\n>\n> A second thing, more important, is that you can reset umask just after\n> opening the file, as attached. This way there is no need to reset the\n> umask in all the code paths leaving update_metainfo_datafile(). Does\n> that look fine to you?\n>\n\nThanks for the correction, Yes, that is correct and it works fine.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Fri, Mar 22, 2019 at 12:24 PM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Mar 21, 2019 at 12:52:14PM +1100, Haribabu Kommi wrote:\n> Earlier attached patch is wrong.\n\n- oumask = umask(pg_file_create_mode);\n+ oumask = umask(pg_mode_mask);\nIndeed that was wrong.\n\n> Correct patch attached. Sorry for the inconvenience.\n\nThis looks better for the umask setting, still it could be more\nsimple.\n\n #include <sys/time.h>\n-\n+#include \"common/file_perm.h\"\n #include \"lib/stringinfo.h\"\nNit: it is better for readability to keep an empty line between the\nsystem includes and the Postgres ones.\n\nA second thing, more important, is that you can reset umask just after\nopening the file, as attached. This way there is no need to reset the\numask in all the code paths leaving update_metainfo_datafile(). Does\nthat look fine to you?Thanks for the correction, Yes, that is correct and it works fine.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Fri, 22 Mar 2019 14:35:41 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 02:35:41PM +1100, Haribabu Kommi wrote:\n> Thanks for the correction. Yes, that is correct and it works fine.\n\nThanks for double-checking. Are there any objections with this patch?\n--\nMichael",
"msg_date": "Fri, 22 Mar 2019 13:01:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 01:01:44PM +0900, Michael Paquier wrote:\n> On Fri, Mar 22, 2019 at 02:35:41PM +1100, Haribabu Kommi wrote:\n> > Thanks for the correction. Yes, that is correct and it works fine.\n> \n> Thanks for double-checking. Are there any objections with this patch?\n\nDone and committed down to v11 where group access has been added.\nThere could be an argument to do the same in v10, but as the root of\nthe problem is the interaction between a data folder using 0640 as\nbase mode for files and log_file_mode being more restrictive, then it\ncannot apply to v10.\n\nAfter testing and reviewing the patch, I noticed that all versions\nsent up to now missed two things done by logfile_open():\n- Bufferring is line-buffered. For current_logfiles it may not matter\nmuch as the contents are first written into a temporary file and then\nthe file is renamed, but for debugging having the insurance of\nconsistent contents is nice even for the temporary file.\n- current_logfiles uses \\r\\n. While it does not have a consequence\nfor the parsing of the file by pg_current_logfile, it breaks the\nreadability of the file on Windows, which is not nice.\nSo I have kept the patch with the previous defaults for consistency.\nPerhaps they could be changed, but the current set is a good set.\n--\nMichael",
"msg_date": "Sun, 24 Mar 2019 21:16:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 09:16:44PM +0900, Michael Paquier wrote:\n> After testing and reviewing the patch, I noticed that all versions\n> sent up to now missed two things done by logfile_open():\n> - Bufferring is line-buffered. For current_logfiles it may not matter\n> much as the contents are first written into a temporary file and then\n> the file is renamed, but for debugging having the insurance of\n> consistent contents is nice even for the temporary file.\n> - current_logfiles uses \\r\\n. While it does not have a consequence\n> for the parsing of the file by pg_current_logfile, it breaks the\n> readability of the file on Windows, which is not nice.\n> So I have kept the patch with the previous defaults for consistency.\n> Perhaps they could be changed, but the current set is a good set.\n\nBy the way, this also fixes a cosmetic issue with a failure in\ncreating current_logfiles: when update_metainfo_datafile() fails to\ncreate the file, it logs a LOG message, but logfile_open() does the\nsame thing, so this finishes with two log entries for the same\nfailure. v10 still has that issue, I don't think that it is worth\nfixing as it has no actual consequence except perhaps bringing some\nconfusion.\n--\nMichael",
"msg_date": "Sun, 24 Mar 2019 21:26:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 11:16 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Fri, Mar 22, 2019 at 01:01:44PM +0900, Michael Paquier wrote:\n> > On Fri, Mar 22, 2019 at 02:35:41PM +1100, Haribabu Kommi wrote:\n> > > Thanks for the correction. Yes, that is correct and it works fine.\n> >\n> > Thanks for double-checking. Are there any objections with this patch?\n>\n> Done and committed down to v11 where group access has been added.\n> There could be an argument to do the same in v10, but as the root of\n> the problem is the interaction between a data folder using 0640 as\n> base mode for files and log_file_mode being more restrictive, then it\n> cannot apply to v10.\n>\n> After testing and reviewing the patch, I noticed that all versions\n> sent up to now missed two things done by logfile_open():\n> - Bufferring is line-buffered. For current_logfiles it may not matter\n> much as the contents are first written into a temporary file and then\n> the file is renamed, but for debugging having the insurance of\n> consistent contents is nice even for the temporary file.\n> - current_logfiles uses \\r\\n. While it does not have a consequence\n> for the parsing of the file by pg_current_logfile, it breaks the\n> readability of the file on Windows, which is not nice.\n> So I have kept the patch with the previous defaults for consistency.\n> Perhaps they could be changed, but the current set is a good set.\n>\n\nThanks Micheal and others.\nThis really helps to choose the restrictive log file permissions even when\nthe data directory is enabled with group access.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Sun, Mar 24, 2019 at 11:16 PM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Mar 22, 2019 at 01:01:44PM +0900, Michael Paquier wrote:\n> On Fri, Mar 22, 2019 at 02:35:41PM +1100, Haribabu Kommi wrote:\n> > Thanks for the correction. Yes, that is correct and it works fine.\n> \n> Thanks for double-checking. 
Are there any objections with this patch?\n\nDone and committed down to v11 where group access has been added.\nThere could be an argument to do the same in v10, but as the root of\nthe problem is the interaction between a data folder using 0640 as\nbase mode for files and log_file_mode being more restrictive, then it\ncannot apply to v10.\n\nAfter testing and reviewing the patch, I noticed that all versions\nsent up to now missed two things done by logfile_open():\n- Bufferring is line-buffered. For current_logfiles it may not matter\nmuch as the contents are first written into a temporary file and then\nthe file is renamed, but for debugging having the insurance of\nconsistent contents is nice even for the temporary file.\n- current_logfiles uses \\r\\n. While it does not have a consequence\nfor the parsing of the file by pg_current_logfile, it breaks the\nreadability of the file on Windows, which is not nice.\nSo I have kept the patch with the previous defaults for consistency.\nPerhaps they could be changed, but the current set is a good set.\nThanks Micheal and others.This really helps to choose the restrictive log file permissions even whenthe data directory is enabled with group access.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Mon, 25 Mar 2019 18:19:08 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: current_logfiles not following group access and instead follows\n log_file_mode permissions"
}
] |
[
{
"msg_contents": "Hi\n\nmake check-world fails\n\nt/008_diff_schema.pl .. ok\nt/009_matviews.pl ..... ok\nt/010_truncate.pl ..... ok\nt/011_generated.pl .... # Looks like your test exited with 29 before it\ncould output anything.\nt/011_generated.pl .... Dubious, test returned 29 (wstat 7424, 0x1d00)\nFailed 2/2 subtests\n\nTest Summary Report\n-------------------\nt/011_generated.pl (Wstat: 7424 Tests: 0 Failed: 0)\n Non-zero exit status: 29\n Parse errors: Bad plan. You planned 2 tests but ran 0.\nFiles=11, Tests=50, 62 wallclock secs ( 0.09 usr 0.03 sys + 21.82 cusr\n7.63 csys = 29.57 CPU)\nResult: FAIL\nmake[2]: *** [Makefile:19: check] Chyba 1\nmake[2]: Opouští se adresář\n„/home/pavel/src/postgresql.master/src/test/subscription“\nmake[1]: *** [Makefile:48: check-subscription-recurse] Chyba 2\nmake[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“\nmake: *** [GNUmakefile:70: check-world-src/test-recurse] Chyba 2\n\nRegards\n\nPavel\n\nHimake check-world failst/008_diff_schema.pl .. ok t/009_matviews.pl ..... ok t/010_truncate.pl ..... ok t/011_generated.pl .... # Looks like your test exited with 29 before it could output anything.t/011_generated.pl .... Dubious, test returned 29 (wstat 7424, 0x1d00)Failed 2/2 subtests Test Summary Report-------------------t/011_generated.pl (Wstat: 7424 Tests: 0 Failed: 0) Non-zero exit status: 29 Parse errors: Bad plan. You planned 2 tests but ran 0.Files=11, Tests=50, 62 wallclock secs ( 0.09 usr 0.03 sys + 21.82 cusr 7.63 csys = 29.57 CPU)Result: FAILmake[2]: *** [Makefile:19: check] Chyba 1make[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test/subscription“make[1]: *** [Makefile:48: check-subscription-recurse] Chyba 2make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“make: *** [GNUmakefile:70: check-world-src/test-recurse] Chyba 2RegardsPavel",
"msg_date": "Tue, 15 Jan 2019 10:47:25 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "regress tests fails"
},
{
"msg_contents": "út 15. 1. 2019 v 10:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> make check-world fails\n>\n> t/008_diff_schema.pl .. ok\n> t/009_matviews.pl ..... ok\n> t/010_truncate.pl ..... ok\n> t/011_generated.pl .... # Looks like your test exited with 29 before it\n> could output anything.\n> t/011_generated.pl .... Dubious, test returned 29 (wstat 7424, 0x1d00)\n> Failed 2/2 subtests\n>\n> Test Summary Report\n> -------------------\n> t/011_generated.pl (Wstat: 7424 Tests: 0 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: Bad plan. You planned 2 tests but ran 0.\n> Files=11, Tests=50, 62 wallclock secs ( 0.09 usr 0.03 sys + 21.82 cusr\n> 7.63 csys = 29.57 CPU)\n> Result: FAIL\n> make[2]: *** [Makefile:19: check] Chyba 1\n> make[2]: Opouští se adresář\n> „/home/pavel/src/postgresql.master/src/test/subscription“\n> make[1]: *** [Makefile:48: check-subscription-recurse] Chyba 2\n> make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“\n> make: *** [GNUmakefile:70: check-world-src/test-recurse] Chyba 2\n>\n>\nI am sorry for noise. looks like garbage in my repository\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n\nút 15. 1. 2019 v 10:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:Himake check-world failst/008_diff_schema.pl .. ok t/009_matviews.pl ..... ok t/010_truncate.pl ..... ok t/011_generated.pl .... # Looks like your test exited with 29 before it could output anything.t/011_generated.pl .... Dubious, test returned 29 (wstat 7424, 0x1d00)Failed 2/2 subtests Test Summary Report-------------------t/011_generated.pl (Wstat: 7424 Tests: 0 Failed: 0) Non-zero exit status: 29 Parse errors: Bad plan. 
You planned 2 tests but ran 0.Files=11, Tests=50, 62 wallclock secs ( 0.09 usr 0.03 sys + 21.82 cusr 7.63 csys = 29.57 CPU)Result: FAILmake[2]: *** [Makefile:19: check] Chyba 1make[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test/subscription“make[1]: *** [Makefile:48: check-subscription-recurse] Chyba 2make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“make: *** [GNUmakefile:70: check-world-src/test-recurse] Chyba 2I am sorry for noise. looks like garbage in my repositoryPavel RegardsPavel",
"msg_date": "Tue, 15 Jan 2019 10:50:25 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: regress tests fails"
}
] |
[
{
"msg_contents": "I recently purchased a copy of \"The Benchmark Handbook\", a book from\nthe early 1990s that was edited by Jim Gray. It features analysis of\nthe Wisconsin Benchmark in chapter 3 -- that's a single client\nbenchmark that famously showed real limitations in the optimizers that\nwere current in the early to mid 1980s. The book describes various\nlimitations of Wisconsin as a general purpose benchmark, but it's\nstill interesting in other ways, then and now. The book goes on to say\nthat it is still often used in regression tests.\n\nI see that we even had a full copy of the benchmark until it was torn\nout by commit a05a4b47 in 2009. I don't think that anybody will be\ninterested in the Benchmark itself, but the design of the benchmark\nmay provide useful context. I could imagine somebody with an interest\nin the optimizer finding the book useful. I paid about $5 for a second\nhand copy of the first edition, so it isn't a hard purchase for me to\njustify.\n--\nPeter Geoghegan\n\n",
"msg_date": "Tue, 15 Jan 2019 13:52:11 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "PSA: \"tenk1\" and other similar regression test tables are from the\n Wisconsin Benchmark"
}
] |
[
{
"msg_contents": "Hi all,\n\nf3db7f16 has proved that it can be a bad idea to run pg_resetwal on a\ndata folder which does not match the version it has been compiled\nwith.\n\nAs of HEAD, PG_CONTROL_VERSION is still 1100:\n$ pg_controldata | grep \"pg_control version\"\npg_control version number: 1100\n\nWouldn't it be better to bump it up to 1200?\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 16 Jan 2019 11:02:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Bump up PG_CONTROL_VERSION on HEAD"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-16 11:02:08 +0900, Michael Paquier wrote:\n> f3db7f16 has proved that it can be a bad idea to run pg_resetwal on a\n> data folder which does not match the version it has been compiled\n> with.\n> \n> As of HEAD, PG_CONTROL_VERSION is still 1100:\n> $ pg_controldata | grep \"pg_control version\"\n> pg_control version number: 1100\n> \n> Wouldn't it be better to bump it up to 1200?\n\nWe don't commonly bump that without corresponding control version\nchanges. I don't see what we'd gain by the bump?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 15 Jan 2019 18:07:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bump up PG_CONTROL_VERSION on HEAD"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-01-16 11:02:08 +0900, Michael Paquier wrote:\n>> f3db7f16 has proved that it can be a bad idea to run pg_resetwal on a\n>> data folder which does not match the version it has been compiled\n>> with.\n>> \n>> As of HEAD, PG_CONTROL_VERSION is still 1100:\n>> $ pg_controldata | grep \"pg_control version\"\n>> pg_control version number: 1100\n>> \n>> Wouldn't it be better to bump it up to 1200?\n\n> We don't commonly bump that without corresponding control version\n> changes. I don't see what we'd gain by the bump?\n\nYeah, it has not been our practice to bump PG_CONTROL_VERSION\nunless the contents of pg_control actually change. The whole\npoint of f3db7f16 was to ensure that we didn't have to do that\njust because of a major version change.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 15 Jan 2019 23:51:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bump up PG_CONTROL_VERSION on HEAD"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nConsider this query plan:\n\ncreate table t (i int, b bool);\ncreate index on t(i, b);\nset enable_bitmapscan to off;\nexplain select * from t where i = 300 and b;\n\n QUERY PLAN\n-------------------------------------------------------------------------\n Index Only Scan using t_i_b_idx on t (cost=0.15..24.27 rows=6 width=5)\n Index Cond: ((i = 300) AND (b = true))\n Filter: b\n\n\nThe filter is not needed, why is it there? Turns out we can't recognize \nthat the restriction clause 'b' and the index clause 'b = true' are \nequivalent. My first reaction was to patch operator_predicate_proof to \nhandle this case, but there is a more straightforward way: mark the \nexpanded index clause as potentially redundant when it is generated in \nexpand_indexqual_conditions. There is already RestrictInfo.parent_ec \nthat is used to mark redundant EC-derived join clauses. The patch \nrenames it to rinfo_parent and uses it to mark the expanded index \nclauses. Namely, for both the expanded and the original clause, \nrinfo_parent points to the original clause or its generating EC, if set. \nThe name is no good -- the original clause is not a parent of itself, \nafter all. I considered something like redundancy_tag, but some places \nactually use the fact that it points to the generating EC, so I don't \nlike this name either.\n\nWhat do you think?\n\n-- \nAlexander Kuzmenkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 16 Jan 2019 14:39:53 +0300",
"msg_from": "Alexander Kuzmenkov <a.kuzmenkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Redundant filter in index scan with a bool column"
},
{
"msg_contents": "Alexander Kuzmenkov <a.kuzmenkov@postgrespro.ru> writes:\n> The filter is not needed, why is it there? Turns out we can't recognize \n> that the restriction clause 'b' and the index clause 'b = true' are \n> equivalent.\n\nYeah, it's intentional that we don't get rid of the extra clause;\nit doesn't really seem worth the expense and complexity to do so.\nIndexes on bool columns are a pretty niche case in the first place.\nUsually, if you are interested in just the rows where b = true,\nyou're better off using \"where b\" as an index predicate. In your\nexample, we can do this instead:\n\nregression=# create index on t(i) where b;\nCREATE INDEX\nregression=# explain select * from t where i = 300 and b;\n QUERY PLAN \n------------------------------------------------------------------\n Index Scan using t_i_idx on t (cost=0.12..24.19 rows=6 width=5)\n Index Cond: (i = 300)\n(2 rows)\n\nresulting in a much smaller index, if the b=true condition is selective\nenough to be worth indexing. Even in the case you showed, how much is\nthe redundant filter clause really costing?\n\n> My first reaction was to patch operator_predicate_proof to \n> handle this case, but there is a more straightforward way: mark the \n> expanded index clause as potentially redundant when it is generated in \n> expand_indexqual_conditions. There is already RestrictInfo.parent_ec \n> that is used to mark redundant EC-derived join clauses. The patch \n> renames it to rinfo_parent and uses it to mark the expanded index \n> clauses.\n\nThat's an unbelievable hack that almost certainly breaks existing uses.\n\nThe approach of teaching predtest.c that \"b = true\" implies \"b\" would\nwork, but it seems a bit brute-force because ordinarily such cases\nwould never be seen there, thanks to simplify_boolean_equality having\ncanonicalized the former into the latter. The problem we have is that\nindxpath.c re-generates \"b = true\" in indexscan conditions. 
Thinking\nabout it now, I wonder if we could postpone that conversion till later,\nsay do it in create_indexscan_plan after having checked for redundant\nclauses. Not sure how messy that'd be.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 10:05:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Redundant filter in index scan with a bool column"
}
] |
[
{
"msg_contents": "Hi,\n\nDuring the discussion in [1] an idea about refactoring ArchiveEntry was\nsuggested. The reason is that currently this function has significant number of\narguments that are \"optional\", and every change that has to deal with it\nintroduces a lot of useless diffs. In the thread, mentioned above, such an\nexample is tracking current table access method, and I guess \"Remove WITH OIDS\"\ncommit 578b229718e is also similar.\n\nProposed idea is to refactor out all/optional arguments into a separate data\nstructure, so that adding/removing a new argument wouldn't change that much of\nunrelated code. Then for every invocation of ArchiveEntry this structure needs\nto be prepared before the call, or as Andres suggested:\n\n ArchiveEntry((ArchiveArgs){.tablespace = 3,\n .dumpFn = somefunc,\n ...});\n\nAnother suggestion from Amit is to have an ArchiveEntry() function with limited\nnumber of parameters, and an ArchiveEntryEx() with those extra parameters which\nare not needed in usual cases.\n\nI want to prepare a patch for that, and I'm inclined to go with the first\noption, but since there are two solutions to choose from, I would love to hear\nmore opinion about this topic. Any pros/cons we don't see yet?\n\n[1]: https://www.postgresql.org/message-id/flat/20180703070645.wchpu5muyto5n647%40alap3.anarazel.de\n\n",
"msg_date": "Wed, 16 Jan 2019 13:16:40 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "> On Wed, Jan 16, 2019 at 1:16 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> Hi,\n>\n> During the discussion in [1] an idea about refactoring ArchiveEntry was\n> suggested. The reason is that currently this function has significant number of\n> arguments that are \"optional\", and every change that has to deal with it\n> introduces a lot of useless diffs. In the thread, mentioned above, such an\n> example is tracking current table access method, and I guess \"Remove WITH OIDS\"\n> commit 578b229718e is also similar.\n>\n> Proposed idea is to refactor out all/optional arguments into a separate data\n> structure, so that adding/removing a new argument wouldn't change that much of\n> unrelated code. Then for every invocation of ArchiveEntry this structure needs\n> to be prepared before the call, or as Andres suggested:\n>\n> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n> .dumpFn = somefunc,\n> ...});\n>\n> Another suggestion from Amit is to have an ArchiveEntry() function with limited\n> number of parameters, and an ArchiveEntryEx() with those extra parameters which\n> are not needed in usual cases.\n>\n> I want to prepare a patch for that, and I'm inclined to go with the first\n> option, but since there are two solutions to choose from, I would love to hear\n> more opinion about this topic. Any pros/cons we don't see yet?\n>\n> [1]: https://www.postgresql.org/message-id/flat/20180703070645.wchpu5muyto5n647%40alap3.anarazel.de\n\n[CC Andres and Amit]\n\n",
"msg_date": "Wed, 16 Jan 2019 13:18:06 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-Jan-16, Dmitry Dolgov wrote:\n\n\n> Proposed idea is to refactor out all/optional arguments into a separate data\n> structure, so that adding/removing a new argument wouldn't change that much of\n> unrelated code. Then for every invocation of ArchiveEntry this structure needs\n> to be prepared before the call, or as Andres suggested:\n> \n> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n> .dumpFn = somefunc,\n> ...});\n\nPrepping the struct before the call would be our natural style, I think.\nThis one where the struct is embedded in the function call does not look\n*too* horrible, but I'm curious as to what pgindent does with it.\n\n> Another suggestion from Amit is to have an ArchiveEntry() function with limited\n> number of parameters, and an ArchiveEntryEx() with those extra parameters which\n> are not needed in usual cases.\n\nIs there real savings to be had by doing this? What would be the\narguments to each function? Off-hand, I'm not liking this idea too\nmuch. But maybe we can combine both ideas and have one \"normal\"\nfunction with only the most common args, and create ArchiveEntryExtended\nto use the struct as proposed by Andres.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 17 Jan 2019 12:02:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jan-16, Dmitry Dolgov wrote:\n>> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n>> .dumpFn = somefunc,\n>> ...});\n\n> Is there real savings to be had by doing this? What would be the\n> arguments to each function? Off-hand, I'm not liking this idea too\n> much.\n\nI'm not either. What this looks like it will mainly do is create\na back-patching barrier, with little if any readability improvement.\n\nI don't buy the argument that this would move the goalposts in terms\nof how much work it is to add a new argument. You'd still end up\ntouching every call site.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 10:23:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-01-17 10:23:39 -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jan-16, Dmitry Dolgov wrote:\n> >> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n> >> .dumpFn = somefunc,\n> >> ...});\n>\n> > Is there real savings to be had by doing this? What would be the\n> > arguments to each function? Off-hand, I'm not liking this idea too\n> > much.\n>\n> I'm not either. What this looks like it will mainly do is create\n> a back-patching barrier, with little if any readability improvement.\n\nI don't really buy this. We've whacked around the arguments to\nArchiveEntry() repeatedly over the last few releases, so there's already\na hindrance to backpatching. And given the current setup we have to whack\naround all 70+ callsites whenever a single argument is added. With the\nsetup I'd suggested you don't, because the designated initializer syntax\nallows you to omit items that ought to be zero-initialized.\n\nAnd given the number of arguments to ArchiveEntry() having a name for\neach argument would help for readability too. It's currently not exactly\nobvious what is an argument for what:\n\tArchiveEntry(AH, nilCatalogId, createDumpId(),\n\t\t\t\t \"ENCODING\", NULL, NULL, \"\",\n\t\t\t\t \"ENCODING\", SECTION_PRE_DATA,\n\t\t\t\t qry->data, \"\", NULL,\n\t\t\t\t NULL, 0,\n\t\t\t\t NULL, NULL);\n\nIf you compare that with\n\n\tArchiveEntry(AH,\n                 (ArchiveEntry){.catalogId = nilCatalogId,\n                                .dumpId = createDumpId(),\n                                .tag = \"ENCODING\",\n                                .desc = \"ENCODING\",\n                                .section = SECTION_PRE_DATA,\n                                .defn = qry->data});\n\nit's definitely easier to see what argument is what.\n\n\n> I don't buy the argument that this would move the goalposts in terms\n> of how much work it is to add a new argument. You'd still end up\n> touching every call site.\n\nWhy? A lot of arguments that'd be potentially added or removed would not\nbe set by each callsite.\n\nIf you e.g. look at\n\nyou can see that a lot of changes were like\n     ArchiveEntry(fout, nilCatalogId, createDumpId(),\n                  \"pg_largeobject\", NULL, NULL, \"\",\n-                 false, \"pg_largeobject\", SECTION_PRE_DATA,\n+                 \"pg_largeobject\", SECTION_PRE_DATA,\n                  loOutQry->data, \"\", NULL,\n                  NULL, 0,\n                  NULL, NULL);\n\ni.e. just removing a 'false' argument. In like 70+ callsites. With the\nabove scheme, we'd instead just have removed a single .withoids = true,\nfrom dumpTableSchema()'s ArchiveEntry() call.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 17 Jan 2019 09:29:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-01-17 10:23:39 -0500, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > On 2019-Jan-16, Dmitry Dolgov wrote:\n> > >> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n> > >> .dumpFn = somefunc,\n> > >> ...});\n> >\n> > > Is there real savings to be had by doing this? What would be the\n> > > arguments to each function? Off-hand, I'm not liking this idea too\n> > > much.\n> >\n> > I'm not either. What this looks like it will mainly do is create\n> > a back-patching barrier, with little if any readability improvement.\n> \n> I don't really buy this. We've whacked around the arguments to\n> ArchiveEntry() repeatedly over the last few releases, so there's already\n> a hindrance to backpatching. And given the current setup we've to whack\n> around all 70+ callsites whenever a single argument is added. With the\n> setup I'd suggested you don't, because the designated initializer syntax\n> allows you to omit items that ought to be zero-initialized.\n> \n> And given the number of arguments to ArchiveEntry() having a name for\n> each argument would help for readability too. It's currently not exactly\n> obvious what is an argument for what:\n> \tArchiveEntry(AH, nilCatalogId, createDumpId(),\n> \t\t\t\t \"ENCODING\", NULL, NULL, \"\",\n> \t\t\t\t \"ENCODING\", SECTION_PRE_DATA,\n> \t\t\t\t qry->data, \"\", NULL,\n> \t\t\t\t NULL, 0,\n> \t\t\t\t NULL, NULL);\n> \n> If you compare that with\n> \n> \tArchiveEntry(AH,\n>                  (ArchiveEntry){.catalogId = nilCatalogId,\n>                                 .dumpId = createDumpId(),\n>                                 .tag = \"ENCODING\",\n>                                 .desc = \"ENCODING\",\n>                                 .section = SECTION_PRE_DATA,\n>                                 .defn = qry->data});\n> \n> it's definitely easier to see what argument is what.\n\n+1. I was on the fence about this approach when David started using it\nin pgBackRest but I've come to find that it's actually pretty nice and\nbeing able to omit things which should be zero/default is very nice. I\nfeel like it's quite similar to what we do in other places too- just\nlook for things like:\n\nutils/adt/jsonfuncs.c:600\n\n    sem = palloc0(sizeof(JsonSemAction));\n\n...\n\n    sem->semstate = (void *) state;\n    sem->array_start = okeys_array_start;\n    sem->scalar = okeys_scalar;\n    sem->object_field_start = okeys_object_field_start;\n    /* remainder are all NULL, courtesy of palloc0 above */\n\n    pg_parse_json(lex, sem);\n\n...\n\n    pfree(sem);\n\n> > I don't buy the argument that this would move the goalposts in terms\n> > of how much work it is to add a new argument. You'd still end up\n> > touching every call site.\n> \n> Why? A lot of arguments that'd be potentially added or removed would not\n> be set by each callsites.\n> \n> If you e.g. look at\n> \n> you can see that a lot of changes where like\n>      ArchiveEntry(fout, nilCatalogId, createDumpId(),\n>                   \"pg_largeobject\", NULL, NULL, \"\",\n> -                 false, \"pg_largeobject\", SECTION_PRE_DATA,\n> +                 \"pg_largeobject\", SECTION_PRE_DATA,\n>                   loOutQry->data, \"\", NULL,\n>                   NULL, 0,\n>                   NULL, NULL);\n> \n> i.e. just removing a 'false' argument. In like 70+ callsites. With the\n> above scheme, we'd instead just have removed a single .withoids = true,\n> from dumpTableSchema()'s ArchiveEntry() call.\n\nAgreed. Using this approach in more places, when appropriate and\nsensible, seems like a good direction to go in. To be clear, I don't\nthink we should go rewrite pieces of code just for the sake of it as\nthat would make back-patching more difficult, but when we're making\nchanges anyway, or where it wouldn't really change the landscape for\nback-patching, then it seems like a good change.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 17 Jan 2019 13:20:30 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-17 09:29:04 -0800, Andres Freund wrote:\n> On 2019-01-17 10:23:39 -0500, Tom Lane wrote:\n> > I don't buy the argument that this would move the goalposts in terms\n> > of how much work it is to add a new argument. You'd still end up\n> > touching every call site.\n> \n> Why? A lot of arguments that'd be potentially added or removed would not\n> be set by each callsites.\n> \n> If you e.g. look at\n> \n> you can see that a lot of changes where like\n> ArchiveEntry(fout, nilCatalogId, createDumpId(),\n> \"pg_largeobject\", NULL, NULL, \"\",\n> - false, \"pg_largeobject\", SECTION_PRE_DATA,\n> + \"pg_largeobject\", SECTION_PRE_DATA,\n> loOutQry->data, \"\", NULL,\n> NULL, 0,\n> NULL, NULL);\n> \n> i.e. just removing a 'false' argument. In like 70+ callsites. With the\n> above scheme, we'd instead just have removed a single .withoids = true,\n> from dumpTableSchema()'s ArchiveEntry() call.\n\nthe \"at\" I was trying to reference above is\n578b229718e8f15fa779e20f086c4b6bb3776106 / the WITH OID removal, and\ntherein specifically the pg_dump changes.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 17 Jan 2019 10:26:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On Wed, 16 Jan 2019 at 17:45, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> Hi,\n>\n> During the discussion in [1] an idea about refactoring ArchiveEntry was\n> suggested. The reason is that currently this function has significant number of\n> arguments that are \"optional\", and every change that has to deal with it\n> introduces a lot of useless diffs. In the thread, mentioned above, such an\n> example is tracking current table access method, and I guess \"Remove WITH OIDS\"\n> commit 578b229718e is also similar.\n>\n> Proposed idea is to refactor out all/optional arguments into a separate data\n> structure, so that adding/removing a new argument wouldn't change that much of\n> unrelated code. Then for every invocation of ArchiveEntry this structure needs\n> to be prepared before the call, or as Andres suggested:\n>\n> ArchiveEntry((ArchiveArgs){.tablespace = 3,\n> .dumpFn = somefunc,\n> ...});\n\nI didn't know we could do it this way. I thought we would have to\ndeclare a variable and have to initialize fields with non-const values\nseparately. This looks nice. We could even initialize fields with\nnon-const values. +1 from me.\n\nI think, we could use the same TocEntry structure as parameter, rather\nthan a new structure. Most of the arguments already resemble fields of\nthis structure. Also, we could pass pointer to that structure :\n\n ArchiveEntry( &(TocEntry){.tablespace = 3,\n .dumpFn = somefunc,\n ...});\n\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n",
"msg_date": "Fri, 18 Jan 2019 10:06:40 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "> On 2019-01-17 09:29:04 -0800, Andres Freund wrote:\n> On 2019-01-17 10:23:39 -0500, Tom Lane wrote:\n> > I don't buy the argument that this would move the goalposts in terms\n> > of how much work it is to add a new argument. You'd still end up\n> > touching every call site.\n>\n> Why? A lot of arguments that'd be potentially added or removed would not\n> be set by each callsites.\n>\n> If you e.g. look at\n>\n> you can see that a lot of changes where like\n> ArchiveEntry(fout, nilCatalogId, createDumpId(),\n> \"pg_largeobject\", NULL, NULL, \"\",\n> - false, \"pg_largeobject\", SECTION_PRE_DATA,\n> + \"pg_largeobject\", SECTION_PRE_DATA,\n> loOutQry->data, \"\", NULL,\n> NULL, 0,\n> NULL, NULL);\n>\n> i.e. just removing a 'false' argument. In like 70+ callsites. With the\n> above scheme, we'd instead just have removed a single .withoids = true,\n> from dumpTableSchema()'s ArchiveEntry() call.\n\nTo make this discussion a bit more specific, I've created a patch of how it can\nlook like. All the arguments, except Archive, CatalogId and DumpId I've moved\ninto the ArchiveOpts structure. Not all of them could be empty before, but\nanyway it seems better for consistency and readability. Some of the arguments\nhad empty string as a default value, I haven't changed anything here yet\n(although this mixture of NULL and \"\" in ArchiveEntry looks a bit confusing).\n\nAs Andres mentioned above, for 578b229718e / the WITH OID removal and pg_dump\nmodification from pluggable storage thread, this patch reduces number of\nchanges, related to ArchiveEntry, from 70+ to just one.",
"msg_date": "Wed, 23 Jan 2019 16:12:15 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-23 16:12:15 +0100, Dmitry Dolgov wrote:\n> To make this discussion a bit more specific, I've created a patch of how it can\n> look like.\n\nThanks.\n\n> All the arguments, except Archive, CatalogId and DumpId I've moved\n> into the ArchiveOpts structure. Not all of them could be empty before, but\n> anyway it seems better for consistency and readability. Some of the arguments\n> had empty string as a default value, I haven't changed anything here yet\n> (although this mixture of NULL and \"\" in ArchiveEntry looks a bit confusing).\n\nProbably worth changing at the same time, if we decide to go for it.\n\nTo me this does look like it'd be more maintainable going forward.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 08:47:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Hello\n\nOn 2019-Jan-23, Andres Freund wrote:\n\n> > All the arguments, except Archive, CatalogId and DumpId I've moved\n> > into the ArchiveOpts structure. Not all of them could be empty before, but\n> > anyway it seems better for consistency and readability. Some of the arguments\n> > had empty string as a default value, I haven't changed anything here yet\n> > (although this mixture of NULL and \"\" in ArchiveEntry looks a bit confusing).\n> \n> Probably worth changing at the same time, if we decide to go for it.\n> \n> To me this does look like it'd be more maintainable going forward.\n\nIt does. How does pgindent behave with it?\n\nI'd use ArchiveEntryOpts as struct name; ArchiveOpts sounds wrong. Also,\nthe struct members could use better names -- \"defn\" for example could\nperhaps be \"createStmt\" (to match dropStmt/copyStmt), and expand \"desc\"\nto \"description\".\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 23 Jan 2019 13:58:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 1/23/19 10:12 AM, Dmitry Dolgov wrote:\n> To make this discussion a bit more specific, I've created a patch of how\n> it can look like.\nA little bit of vararg-macro action can make such a design look\neven tidier, cf. [1].\n\nOr are compilers without vararg macros still in the supported mix?\n\n-Chap\n\n\n\n[1] https://github.com/NetBSD/src/blob/trunk/sys/sys/midiio.h#L709\n\nThe macros in [1] are not defined to create a function call, but only\nthe argument structure because there might be several functions to pass\nit to, so a call would be written like func(&SEQ_MK_CHN(NOTEON, ...)).\n\nIn ArchiveEntry's case, if there's only one function involved, there'd\nbe no reason not to have a macro produce the whole call.\n\n",
"msg_date": "Wed, 23 Jan 2019 12:05:10 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Or are compilers without vararg macros still in the supported mix?\n\nNo.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 23 Jan 2019 12:05:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-23 13:58:07 -0300, Alvaro Herrera wrote:\n> I'd use ArchiveEntryOpts as struct name; ArchiveOpts sounds wrong.\n\nBrevity would be of some advantage IMO, because it'll probably determine\nhow pgindent indents the arguments, because the struct name will be in\nthe arguments.\n\n\n> Also, the struct members could use better names -- \"defn\" for example\n> could perhaps be \"createStmt\" (to match dropStmt/copyStmt), and expand\n> \"desc\" to \"description\".\n\nTrue.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 09:07:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2019-01-23 12:05:10 -0500, Chapman Flack wrote:\n> On 1/23/19 10:12 AM, Dmitry Dolgov wrote:\n> > To make this discussion a bit more specific, I've created a patch of how\n> > it can look like.\n\n> A little bit of vararg-macro action can make such a design look\n> even tidier, cf. [1].\n> [1] https://github.com/NetBSD/src/blob/trunk/sys/sys/midiio.h#L709\n> \n> The macros in [1] are not defined to create a function call, but only\n> the argument structure because there might be several functions to pass\n> it to, so a call would be written like func(&SEQ_MK_CHN(NOTEON, ...)).\n> \n> In ArchiveEntry's case, if there's only one function involved, there'd\n> be no reason not to have a macro produce the whole call.\n\nI'm not really seeing this being more than obfuscation in this case. The\nonly point of the macro is to set the .tag and .op elements to something\nwithout adding redundancies due to the struct name. Which we'd not have.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 09:10:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 1/23/19 12:10 PM, Andres Freund wrote:\n> On 2019-01-23 12:05:10 -0500, Chapman Flack wrote:\n>> [1] https://github.com/NetBSD/src/blob/trunk/sys/sys/midiio.h#L709\n\n> I'm not really seeing this being more than obfuscation in this case. The\n> only point of the macro is to set the .tag and .op elements to something\n> without adding redundancies due to the struct name. Which we'd not have.\n\nGranted, that example is more elaborate than this case, but writing\n\n\nArchiveEntry(fout, dbCatId, dbDumpId, .tag = datname, .owner = dba,\n .desc = \"DATABASE\", .section = SECTION_PRE_DATA,\n .defn = creaQry->data, .dropStmt = delQry->data);\n\ninstead of\n\nArchiveEntry(fout, dbCatId, dbDumpId, &(ArchiveOpts){.tag = datname,\n .owner = dba, .desc = \"DATABASE\",\n .section = SECTION_PRE_DATA, .defn = creaQry->data,\n .dropStmt = delQry->data});\n\nwould be easy, and still save a bit of visual noise.\n\nRegards,\n-Chap\n\n",
"msg_date": "Wed, 23 Jan 2019 12:22:23 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-01-23 13:58:07 -0300, Alvaro Herrera wrote:\n> Hello\n> \n> On 2019-Jan-23, Andres Freund wrote:\n> \n> > > All the arguments, except Archive, CatalogId and DumpId I've moved\n> > > into the ArchiveOpts structure. Not all of them could be empty before, but\n> > > anyway it seems better for consistency and readability. Some of the arguments\n> > > had empty string as a default value, I haven't changed anything here yet\n> > > (although this mixture of NULL and \"\" in ArchiveEntry looks a bit confusing).\n> > \n> > Probably worth changing at the same time, if we decide to go for it.\n> > \n> > To me this does look like it'd be more maintainable going forward.\n> \n> It does. How does pgindent behave with it?\n\nIt craps out:\nError@3649: Unbalanced parens\nWarning@3657: Extra )\n\nBut that can be worked around with something like\n\n te = ArchiveEntry(fout, tdinfo->dobj.catId, tdinfo->dobj.dumpId,\n ARCHIVE_ARGS(.tag = tbinfo->dobj.name,\n .namespace = tbinfo->dobj.namespace->dobj.name,\n .owner = tbinfo->rolname,\n .desc = \"TABLE DATA\",\n .section = SECTION_DATA,\n .copyStmt = copyStmt,\n .deps = &(tbinfo->dobj.dumpId),\n .nDeps = 1,\n .dumpFn = dumpFn,\n .dumpArg = tdinfo,\n ));\nwhich looks mildly simpler too.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 09:23:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-01-23 12:22:23 -0500, Chapman Flack wrote:\n> On 1/23/19 12:10 PM, Andres Freund wrote:\n> > On 2019-01-23 12:05:10 -0500, Chapman Flack wrote:\n> >> [1] https://github.com/NetBSD/src/blob/trunk/sys/sys/midiio.h#L709\n> \n> > I'm not really seeing this being more than obfuscation in this case. The\n> > only point of the macro is to set the .tag and .op elements to something\n> > without adding redundancies due to the struct name. Which we'd not have.\n> \n> Granted, that example is more elaborate than this case, but writing\n> \n> \n> ArchiveEntry(fout, dbCatId, dbDumpId, .tag = datname, .owner = dba,\n> .desc = \"DATABASE\", .section = SECTION_PRE_DATA,\n> .defn = creaQry->data, .dropStmt = delQry->data);\n> \n> instead of\n> \n> ArchiveEntry(fout, dbCatId, dbDumpId, &(ArchiveOpts){.tag = datname,\n> .owner = dba, .desc = \"DATABASE\",\n> .section = SECTION_PRE_DATA, .defn = creaQry->data,\n> .dropStmt = delQry->data});\n> \n> would be easy, and still save a bit of visual noise.\n\nIDK, it'd be harder to parse correctly as a C programmer though. I'm up\nwith a wrapper macro like\n#define ARCHIVE_ARGS(...) &(ArchiveOpts){__VA_ARGS__}\nbut weirdly mixing struct arguments and normal function arguments seems\nquite confusing.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 09:25:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-01-23 13:58:07 -0300, Alvaro Herrera wrote:\n>> It does. How does pgindent behave with it?\n\n> It craps out:\n> Error@3649: Unbalanced parens\n> Warning@3657: Extra )\n\n> But that can be worked around with something like\n\n> te = ArchiveEntry(fout, tdinfo->dobj.catId, tdinfo->dobj.dumpId,\n> ARCHIVE_ARGS(.tag = tbinfo->dobj.name,\n> .namespace = tbinfo->dobj.namespace->dobj.name,\n> .owner = tbinfo->rolname,\n> .desc = \"TABLE DATA\",\n> .section = SECTION_DATA,\n> .copyStmt = copyStmt,\n> .deps = &(tbinfo->dobj.dumpId),\n> .nDeps = 1,\n> .dumpFn = dumpFn,\n> .dumpArg = tdinfo,\n> ));\n> which looks mildly simpler too.\n\nThat looks fairly reasonable from here, but I'd suggest\nARCHIVE_OPTS rather than ARCHIVE_ARGS.\n\nCan we omit the initial dots if we use a wrapper macro? Would it be\na good idea to do so (I'm not really sure)?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 23 Jan 2019 12:32:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-01-23 12:32:06 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-01-23 13:58:07 -0300, Alvaro Herrera wrote:\n> >> It does. How does pgindent behave with it?\n> \n> > It craps out:\n> > Error@3649: Unbalanced parens\n> > Warning@3657: Extra )\n> \n> > But that can be worked around with something like\n> \n> > te = ArchiveEntry(fout, tdinfo->dobj.catId, tdinfo->dobj.dumpId,\n> >                   ARCHIVE_ARGS(.tag = tbinfo->dobj.name,\n> >                                .namespace = tbinfo->dobj.namespace->dobj.name,\n> >                                .owner = tbinfo->rolname,\n> >                                .desc = \"TABLE DATA\",\n> >                                .section = SECTION_DATA,\n> >                                .copyStmt = copyStmt,\n> >                                .deps = &(tbinfo->dobj.dumpId),\n> >                                .nDeps = 1,\n> >                                .dumpFn = dumpFn,\n> >                                .dumpArg = tdinfo,\n> >                                ));\n> > which looks mildly simpler too.\n> \n> That looks fairly reasonable from here, but I'd suggest\n> ARCHIVE_OPTS rather than ARCHIVE_ARGS.\n\nWFM. Seems quite possible that we'd grow a few more of these over time,\nso establishing some common naming seems good.\n\nBtw, do you have an opinion on keeping catId / dumpId outside/inside\nthe argument struct?\n\n\n> Can we omit the initial dots if we use a wrapper macro? Would it be\n> a good idea to do so (I'm not really sure)?\n\nNot easily, if at all, I think. We'd have to do a fair bit of weird\nmacro magic (and then still end up with limitations) to \"process\" each\nargument individually. And even if it were easy, I don't think it's\nparticularly advantageous.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 23 Jan 2019 09:36:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Btw, do you have an opionion on keeping catId / dumpId outside/inside\n> the argument struct?\n\nI'd go for outside, since they're not optional. Not dead set on that\nthough.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 23 Jan 2019 13:33:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 1/23/19 12:25 PM, Andres Freund wrote:\n> On 2019-01-23 12:22:23 -0500, Chapman Flack wrote:\n\n>> ArchiveEntry(fout, dbCatId, dbDumpId, .tag = datname, .owner = dba,\n>> .desc = \"DATABASE\", .section = SECTION_PRE_DATA,\n>> .defn = creaQry->data, .dropStmt = delQry->data);\n\n> IDK, it'd be harder to parse correctly as a C programmer though. ...\n> weirdly mixing struct arguments and normal function arguments seems\n> quite confusing.\n\nHmm, I guess the rubric I think with goes something like \"is a C\nprogrammer who encounters this in a source file for the first time\nlikely to guess wrong about what it means?\", and in the case above,\nI can scarcely imagine it.\n\nISTM that these days, many people are familiar with several languages\nthat allow a few mandatory, positional parameters followed by optional\nnamed ones, and so a likely reaction would be \"hey look, somebody used\na macro here to make C look more like <insert other language I know>.\"\n\nOn 1/23/19 12:32 PM, Tom Lane wrote:\n> Can we omit the initial dots if we use a wrapper macro?\n\nThat, I think, is hard.\n\nGetting to the form above is downright easy; making the dots go away,\neven if achievable, seems way further down the path of diminishing\nreturns.\n\nRegards,\n-Chap\n\n",
"msg_date": "Wed, 23 Jan 2019 13:36:36 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "Here is another version, where I accumulated all the suggestions:\n\n* Use NULL as a default value where it was an empty string before (this\n required few minor changes for some part of the code outside ArchiveEntry)\n\n* Rename defn, descr to createStmt, description\n\n* Use a macro to avoid pgindent errors\n\nAbout the last one. I'm also inclined to use the simpler version of\nARCHIVE_OPTS macro, mostly because the difference between \"optional\" and\n\"positional\" arguments in the alternative proposal is not that visible. So\n\n> mixing struct arguments and normal function arguments seems\n> quite confusing\n\ncould probably affect not only readability, but also would be bit more\nproblematic for updating this code (which was the goal in the first place).",
"msg_date": "Thu, 24 Jan 2019 13:12:40 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "> On 24 Jan 2019, at 13:12, Dmitry Dolgov <9erthalion6@gmail.com> wrote: \n\n> Here is another version, where I accumulated all the suggestions:\n\nNothing sticks out during review and AFAICT all comments have been addressed.\nEverything works as expected during (light) testing between master and an older\nversion.\n\n+1 on committing this, having spent a lot of time in this code I really\nappreciate the improved readability.\n\ncheers ./daniel\n",
"msg_date": "Fri, 1 Feb 2019 11:05:34 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "pgindent didn't like your layout with two-space indents for the struct\nmembers :-( I thought it was nice, but oh well. This means we can do\naway with the newline at each callsite, and I didn't like the trailing\ncomma (and I have vague recollections that some old compilers might\ncomplain about them too, though maybe we retired them already.)\n\n> * Use NULL as a default value where it was an empty string before (this\n> required few minor changes for some part of the code outside ArchiveEntry)\n\nAh, so this is why you changed replace_line_endings. So the comment on\nthat function now is wrong -- it fails to indicate that it returns a\nmalloc'ed \"\" on NULL input. But about half the callers want to have a\nmalloc'ed \"-\" on NULL input ... I think it'd make the code a little bit\nsimpler if we did that in replace_line_endings itself, maybe add a\n\"want_dash\" bool argument, so this code\n\n\t\tif (!ropt->noOwner)\n\t\t\tsanitized_owner = replace_line_endings(te->owner);\n\t\telse\n\t\t\tsanitized_owner = pg_strdup(\"-\");\n\ncan become\n\t\tsanitized_owner = replace_line_endings(te->owner, true);\n\nI don't quite understand why the comments about line sanitization were\nadded in the callsites rather than in replace_line_endings itself. I\nwould rename the function to sanitize_line() and put those comments\nthere (removing them from the callsites), then the new argument I\nsuggest would not be completely out of place.\n\nWhat do you think?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 1 Feb 2019 08:33:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-Feb-01, Alvaro Herrera wrote:\n\n> ... so this code\n> \n> \t\tif (!ropt->noOwner)\n> \t\t\tsanitized_owner = replace_line_endings(te->owner);\n> \t\telse\n> \t\t\tsanitized_owner = pg_strdup(\"-\");\n> \n> can become\n> \t\tsanitized_owner = replace_line_endings(te->owner, true);\n\nSorry, there's a silly bug here because I picked the wrong example to\nhand-type. The proposed pattern works fine for the schema cases, not\nfor this owner case. The owner case is correctly handled (AFAICT) in\nthe patch I posted. (Also, for some reason I decided to go with \"hyphen\"\ninstead of \"dash\" in the argument name. Not sure if anybody cares\nstrongly about using the right terminology there -- I don't know which it\nis.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 1 Feb 2019 08:43:23 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "> On Fri, Feb 1, 2019 at 12:33 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> pgindent didn't like your layout with two-space indents for the struct\n> members :-( I thought it was nice, but oh well. This means we can do\n> away with the newline at each callsite, and I didn't like the trailing\n> comma (and I have vague recollections that some old compilers might\n> complain about them too, though maybe we retired them already.)\n\nOh, ok. In fact I did this almost automatically without thinking too much (a\nformatting habit from other languages), so if pgindent doesn't like it, then\nfine.\n\n> > * Use NULL as a default value where it was an empty string before (this\n> > required few minor changes for some part of the code outside ArchiveEntry)\n>\n> I would rename the function to sanitize_line() and put those comments there\n> (removing them from the callsites), then the new argument I suggest would not\n> be completely out of place.\n\nYes, sounds pretty reasonable for me.\n\n> (Also for some reason I decided to go with \"hyphen\" instead of \"dash\" in the\n> argument name. Not sure if anybody cares strongly about using the right\n> terminology there (I don't know which it is).\n\nJust out of curiosity I did some search and could find few examples of using\nboth \"dash\" and \"hyphen\" across the code, but I guess indeed it doesn't really\nmatter.\n\n",
"msg_date": "Fri, 1 Feb 2019 15:25:50 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
},
{
"msg_contents": "On 2019-Feb-01, Dmitry Dolgov wrote:\n\n> > On Fri, Feb 1, 2019 at 12:33 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > > * Use NULL as a default value where it was an empty string before (this\n> > > required few minor changes for some part of the code outside ArchiveEntry)\n> >\n> > I would rename the function to sanitize_line() and put those comments there\n> > (removing them from the callsites), then the new argument I suggest would not\n> > be completely out of place.\n> \n> Yes, sounds pretty reasonable for me.\n\nThanks for looking -- pushed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 1 Feb 2019 11:30:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ArchiveEntry optional arguments refactoring"
}
] |
[
{
"msg_contents": "Hi,\n\nPer discussion in [1] about generic type subscripting pathc Pavel had\ninteresting commentary. So far jsonb_set, if is invoked for a jsonb array with\nan index outside of the array boundaries, will implicitely add a value:\n\n=# insert into table values('[\"a\", \"b\", \"c\"]');\n=# update table set data = jsonb_set(data, '{1000}', '\"d\"');\n=# table test;\n=# table test;\ndata\n----------------------\n[\"a\", \"b\", \"c\", \"d\"]\n\nThis is perfectly documented feature, there are no questions here. But for\ngeneric type subscripting infrastructure I'm introducing another, more\nconvenient, syntax:\n\n=# update table test set data['selector'] = 'value';\n\nSince the idea behind generic type subsripting patch is just to introduce\nextendable subscripting operation for different data types, here I'm relying on\nalready existing functionality for jsonb. But then this feature of jsonb_set\nindeed became more confusing with the new syntax.\n\n=# update table test set data[1000] = 'd';\n=# table test;\ndata\n----------------------\n[\"a\", \"b\", \"c\", \"d\"]\n\nUnfortunately, the only alternative option here would be to return an error and\nreject such a value, which differs from jsonb_set. I would like to ask , what\nwould be the best solution here - to keep this confusing behaviour, or to have\ntwo different implementation of updating jsonb functionality (one for\njsonb_set, another for subscripting)?\n\n[1]: https://www.postgresql.org/message-id/CA%2Bq6zcXmwR9BDrcf188Mcz5%2BjU8DaqrrOat2mzizKf-nYgDXkg%40mail.gmail.com\n\n",
"msg_date": "Wed, 16 Jan 2019 13:47:06 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "jsonb_set for an array with an index outside of boundaries"
}
] |
[
{
"msg_contents": "Hello,\n\nI already sent the following message to postgres-xl-developers list but\nthey don't seem so active. Therefore, I hope you accept this kind of\nmessages to this mail list.\n\nI'm planning to run PostgreSQL-XL data nodes on CephFS as shared folder. My\ngoal is to provide scalability (auto/manual) in terms of CPU and RAM for\nthose who have large amount of data and operate on clouds. When data gets\nbigger it's hard to make copy of every node instantly we increase number of\nnodes. That's why I want CEPH to do disk management, distribution of data\n(disk) and replication of data (disk). I thought of adding SHARED mode\nalongside with REPLICADTED and DISTRIBUTED modes since PostgreSQL-XL does\nglobal transaction and session management.\n\nWhat do you think of this?\n\nWhat do you think I should be careful about this development if you think\nit's possible?\n\nBests.\n\n-- \nAtıf Ceylan\nCTO\nAppstoniA OÜ\n\nHello,I already sent the following message to postgres-xl-developers list but they don't seem so active. Therefore, I hope you accept this kind of messages to this mail list.I'm planning to run PostgreSQL-XL data nodes on CephFS as shared folder. My goal is to provide scalability (auto/manual) in terms of CPU and RAM for those who have large amount of data and operate on clouds. When data gets bigger it's hard to make copy of every node instantly we increase number of nodes. That's why I want CEPH to do disk management, distribution of data (disk) and replication of data (disk). I thought of adding SHARED mode alongside with REPLICADTED and DISTRIBUTED modes since PostgreSQL-XL does global transaction and session management. What do you think of this?What do you think I should be careful about this development if you think it's possible?Bests.-- Atıf CeylanCTOAppstoniA OÜ",
"msg_date": "Wed, 16 Jan 2019 17:11:08 +0300",
"msg_from": "=?UTF-8?B?TS5BdMSxZiBDRVlMQU4=?= <mehmet@atifceylan.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL-XL shared disk development question"
}
] |
[
{
"msg_contents": "As discussed in the Ryu thread, herewith a draft of a patch to use\nstrtof() for float4 input (rather than using strtod() with its\ndouble-rounding issue).\n\nAn exhaustive search shows that this does not change the resulting\nbit-pattern for any input string that could have been generated by PG\nwith extra_float_digits=3 set. The risk is that values generated by\nother software, especially code that uses shortest-exact float output\n(as a number of languages seem to do, and which PG will do if the Ryu\npatch goes in) will be incorrectly input; though it appears that only\none value (7.038531e-26) is both a possible shortest-exact\nrepresentation and a rounding error (though a number of other values\nround incorrectly, they are not shortest representations).\n\nThis includes a fallback to use strtod() the old way if the platform\nlacks strtof(). A variant file for the new regression tests is needed\nfor such platforms; I've taken a stab at setting this up for the one\nplatform we know will need it (if there are others, the buildfarm will\nlet us know in due course).\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Wed, 16 Jan 2019 14:17:52 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "draft patch for strtof()"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> As discussed in the Ryu thread, herewith a draft of a patch to use\n> strtof() for float4 input (rather than using strtod() with its\n> double-rounding issue).\n\nThe errno handling in strtof seems bizarrely overcomplex; why do\nyou need the separate caller_errno variable?\n\n> This includes a fallback to use strtod() the old way if the platform\n> lacks strtof(). A variant file for the new regression tests is needed\n> for such platforms; I've taken a stab at setting this up for the one\n> platform we know will need it (if there are others, the buildfarm will\n> let us know in due course).\n\nI'm not that much on board with introducing test cases that we know,\nbeyond question, are going to be portability headaches. What will\nwe actually gain with this, compared to just not having the test case?\nI can't see that it's worth either the buildfarm cycles or the human\nmaintenance effort just to prove that, yes, some platforms have\nportability corner cases. I also don't like the prospect that we\nship releases that will fail basic regression tests on platforms\nwe haven't tested. Coping with such failures is a large burden\nfor affected packagers or end users, especially when the only useful\n\"coping\" mechanism is to ignore the regression test failure. Might\nas well not have it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 10:20:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> As discussed in the Ryu thread, herewith a draft of a patch to use\n >> strtof() for float4 input (rather than using strtod() with its\n >> double-rounding issue).\n\n Tom> The errno handling in strtof seems bizarrely overcomplex; why do\n Tom> you need the separate caller_errno variable?\n\nEh. I was preserving the conventional behaviour of setting errno only on\nerrors and not resetting it to 0, but I suppose you could argue that\nthat is overkill given that the function is called only in one place\nthat's supposed to have already set errno to 0.\n\n(And yes, I missed a couple of files and the windows build breaks,\nworking on those)\n\n >> This includes a fallback to use strtod() the old way if the platform\n >> lacks strtof(). A variant file for the new regression tests is needed\n >> for such platforms; I've taken a stab at setting this up for the one\n >> platform we know will need it (if there are others, the buildfarm will\n >> let us know in due course).\n\n Tom> I'm not that much on board with introducing test cases that we\n Tom> know, beyond question, are going to be portability headaches. What\n Tom> will we actually gain with this, compared to just not having the\n Tom> test case?\n\nThe purpose of the test case is to ensure that we're getting the right\nvalues on input. If these values fail on any platform that anyone\nactually cares about, then I think we need to know about it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 16 Jan 2019 15:46:23 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> The errno handling in strtof seems bizarrely overcomplex; why do\n> Tom> you need the separate caller_errno variable?\n\n> Eh. I was preserving the conventional behaviour of setting errno only on\n> errors and not resetting it to 0,\n\nOh, I see. -ENOCAFFEINE.\n\n> but I suppose you could argue that\n> that is overkill given that the function is called only in one place\n> that's supposed to have already set errno to 0.\n\nWell, we probably oughtta endeavor to maintain compatibility with\nthe function's standard behavior, because other calls to it are\nlikely to creep in over time. Objection withdrawn.\n\n> Tom> I'm not that much on board with introducing test cases that we\n> Tom> know, beyond question, are going to be portability headaches. What\n> Tom> will we actually gain with this, compared to just not having the\n> Tom> test case?\n\n> The purpose of the test case is to ensure that we're getting the right\n> values on input. If these values fail on any platform that anyone\n> actually cares about, then I think we need to know about it.\n\nMeh. I think the actual outcome will be that we define any platform\nthat gets the wrong answer as one that we don't care about, mainly\nbecause we won't have any practical way to fix it. That being the\nsituation, trying to maintain a test case seems like pointless\nmake-work.\n\n(FWIW, I'm running the patch on gaur's host, just to confirm it\ndoes what you expect. Should have an answer in an hour or so ...)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 11:07:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "See if Windows likes this one any better.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Wed, 16 Jan 2019 16:09:26 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> See if Windows likes this one any better.\n\nDoubtful, because AFAICT it's the same patch.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 11:14:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n >> See if Windows likes this one any better.\n\n Tom> Doubtful, because AFAICT it's the same patch.\n\nSigh. copied wrong file. Let's try that again.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Wed, 16 Jan 2019 16:32:20 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "I wrote:\n> (FWIW, I'm running the patch on gaur's host, just to confirm it\n> does what you expect. Should have an answer in an hour or so ...)\n\nIt does --- it compiles cleanly, and the float4 output matches\nfloat4-misrounded-input.out.\n\n(This is the v1 patch not v2.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 12:36:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> (FWIW, I'm running the patch on gaur's host, just to confirm it\n >> does what you expect. Should have an answer in an hour or so ...)\n\n Tom> It does --- it compiles cleanly, and the float4 output matches\n Tom> float4-misrounded-input.out.\n\nWell I'm glad _something_ works.\n\nBecause it turns out that Windows (at least the version running on\nAppveyor) completely fucks this up; strtof() is apparently returning\ninfinity or zero _without setting errno_ for values out of range for\nfloat: input of \"10e70\" returns +inf with no error, input of \"10e-70\"\nreturns (exactly) 0.0 with no error.\n\n*facepalm*\n\nAny windows-users have any idea about this?\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 17 Jan 2019 02:12:50 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> Because it turns out that Windows (at least the version running\n Andrew> on Appveyor) completely fucks this up; strtof() is apparently\n Andrew> returning infinity or zero _without setting errno_ for values\n Andrew> out of range for float: input of \"10e70\" returns +inf with no\n Andrew> error, input of \"10e-70\" returns (exactly) 0.0 with no error.\n\nThis bug turns out to be dependent on compiler/SDK versions, not\nsurprisingly. So far I have figured out how to invoke these combinations\non appveyor:\n\nVS2013 / SDK 7.1 (as per cfbot): fails\nVS2015 / SDK 8.1: works\n\nTrying to figure out how to get other combinations to test.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 17 Jan 2019 23:50:13 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "This one builds ok on appveyor with at least three different VS\nversions. Though I've not tried the exact combination of commands\nused by cfbot... yet.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Fri, 18 Jan 2019 04:34:03 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
},
{
"msg_contents": "Merge back in some code changes made in the Ryu patch that really belong\nhere, in preparation for rebasing Ryu on top of this (since this is\nreally a separate functional change). Posting this mainly to let cfbot\ntake a look at it.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Mon, 04 Feb 2019 12:07:21 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: draft patch for strtof()"
}
] |
[
{
"msg_contents": "Looking into a bug report on the -general list about grouping sets,\nwhich turns out to be an issue of collation assignment: if the query has\n\n CASE GROUPING(expr) WHEN 1 ...\n\nthen the expression is rejected as not matching the one in the GROUP BY\nclause, because CASE already assigned collations to the expression (as a\nspecial case in its transform function) while the rest of the query\nhasn't yet had them assigned, because parseCheckAggregates gets run\nbefore assign_query_collations.\n\nI'll be looking into this in detail later, but right now, cam anyone\nthink of any reason why parseCheckAggregates couldn't be moved to after\nassign_query_collations?\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 16 Jan 2019 17:02:07 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "parseCheckAggregates vs. assign_query_collations"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> Looking into a bug report on the -general list about grouping sets,\n> which turns out to be an issue of collation assignment: if the query has\n> CASE GROUPING(expr) WHEN 1 ...\n> then the expression is rejected as not matching the one in the GROUP BY\n> clause, because CASE already assigned collations to the expression (as a\n> special case in its transform function) while the rest of the query\n> hasn't yet had them assigned, because parseCheckAggregates gets run\n> before assign_query_collations.\n\nBleah.\n\n> I'll be looking into this in detail later, but right now, cam anyone\n> think of any reason why parseCheckAggregates couldn't be moved to after\n> assign_query_collations?\n\nI never particularly liked assign_query_collations, as a matter of overall\nsystem design. I'd prefer to nuke that and instead require collation\nassignment to be done per-expression, ie at the end of transformExpr and\nsimilar places. Now that we've seen this example, it's fairly clear why\ncollation assignment really should be considered an integral part of\nexpression parsing. Who's to say there aren't more gotchas of this sort\nwaiting to bite us? Also, if it were integrated into transformExpr as\nit should have been to begin with, we would not have the need for quite\nso many random calls to assign_expr_collations, with consequent bugs of\nomission, cf 7a28e9aa0.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 16 Jan 2019 12:49:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: parseCheckAggregates vs. assign_query_collations"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> I'll be looking into this in detail later, but right now, cam anyone\n >> think of any reason why parseCheckAggregates couldn't be moved to\n >> after assign_query_collations?\n\n Tom> I never particularly liked assign_query_collations, as a matter of\n Tom> overall system design. I'd prefer to nuke that and instead require\n Tom> collation assignment to be done per-expression, ie at the end of\n Tom> transformExpr and similar places. Now that we've seen this\n Tom> example, it's fairly clear why collation assignment really should\n Tom> be considered an integral part of expression parsing. Who's to say\n Tom> there aren't more gotchas of this sort waiting to bite us? Also,\n Tom> if it were integrated into transformExpr as it should have been to\n Tom> begin with, we would not have the need for quite so many random\n Tom> calls to assign_expr_collations, with consequent bugs of omission,\n Tom> cf 7a28e9aa0.\n\nSure, this might be the right approach going forward. But right now we\nneed a back-patchable fix, and the above sounds a bit too intrusive for\nthat.\n\nTurns out the issue can be reproduced without grouping sets too:\n\nselect case a||'' when '1' then 0 else 1 end\n from (select (select random()::text) as a) s group by a||''; \nERROR: column \"s.a\" must appear in the GROUP BY clause or be used in an aggregate function\n\nselect case when a||'' = '1' then 0 else 1 end\n from (select (select random()::text) as a) s group by a||''; -- works\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 17 Jan 2019 02:48:31 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: parseCheckAggregates vs. assign_query_collations"
}
] |
[
{
"msg_contents": "Although we've got a few NetBSD and OpenBSD buildfarm critters,\nnone of them are doing --enable-tap-tests. If they were, we'd\nhave noticed the pgbench regression tests falling over:\n\nnot ok 3 - pgbench option error: bad option stderr /(?^:(unrecognized|illegal) option)/\n# Failed test 'pgbench option error: bad option stderr /(?^:(unrecognized|illegal) option)/'\n# at t/002_pgbench_no_server.pl line 190.\n# 'pgbench: unknown option -- bad-option\n# Try \"pgbench --help\" for more information.\n# '\n# doesn't match '(?^:(unrecognized|illegal) option)'\n\nSure enough, manual testing confirms that on these platforms\nthat error message is spelled differently:\n\n$ pgbench --bad-option\npgbench: unknown option -- bad-option\nTry \"pgbench --help\" for more information.\n\n\nI am, TBH, inclined to fix this by removing that test case rather\nthan teaching it another spelling to accept. I think it's very\nhard to make the case that tests like this one are anything but\na waste of developer and buildfarm time. When they are also a\nportability hazard, it's time to cut our losses. (I also note\nthat this test has caused us problems before, cf 869aa40a2 and\n933851033.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 00:04:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nHello Tom,\n\n> Although we've got a few NetBSD and OpenBSD buildfarm critters,\n> none of them are doing --enable-tap-tests. If they were, we'd\n> have noticed the pgbench regression tests falling over:\n>\n> [...]\n>\n> I am, TBH, inclined to fix this by removing that test case rather\n> than teaching it another spelling to accept. I think it's very\n> hard to make the case that tests like this one are anything but\n> a waste of developer and buildfarm time. When they are also a\n> portability hazard, it's time to cut our losses. (I also note\n> that this test has caused us problems before, cf 869aa40a2 and\n> 933851033.)\n\nI'd rather keep it by simply adding the \"|unknown\" alternative. 30 years \nof programming have taught me that testing limit & error cases is useful, \nalthough you never know when it will be proven so.\n\nClient application coverage is currently abysmal, especially \"psql\" \ndespite the many script used for testing (39% of lines, 42% of \nfunctions!), pgbench is under 90%. Generally we really need more tests, \nnot less. TAP tests are a good compromise because they are not always \nrun, and ISTM sometimes (i.e. you asked for it) is enough.\n\nI agree that some tests can be useless, but I do not think that it applies \nto this one. This test also checks that under a bad option pgbench stops \nwith an appropriate 1 exit status. Recently a patch updated the exit \nstatus of pgbench in many cases to distinguish between different kind \nerrors, thus having non-regression in this area was shown to be a good \nidea. Moreover, knowing that the exit status on getopt errors is \nconsistent across platform has value, and knowing that there is some \nvariability is not uninteresting.\n\n-- \nFabien.\n\n",
"msg_date": "Thu, 17 Jan 2019 10:46:34 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "On 2019-01-17 06:04, Tom Lane wrote:\n> Although we've got a few NetBSD and OpenBSD buildfarm critters,\n> none of them are doing --enable-tap-tests. If they were, we'd\n> have noticed the pgbench regression tests falling over:\n\nFor what it's worth I've enabled tap-tests for my OpenBSD 5.9 (curculio) \nand NetBSD 7 (sidewinder) animals now.\n\n/Mikael\n\n",
"msg_date": "Thu, 17 Jan 2019 22:12:57 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> On 2019-01-17 06:04, Tom Lane wrote:\n>> Although we've got a few NetBSD and OpenBSD buildfarm critters,\n>> none of them are doing --enable-tap-tests. If they were, we'd\n>> have noticed the pgbench regression tests falling over:\n\n> For what it's worth I've enabled tap-tests for my OpenBSD 5.9 (curculio) \n> and NetBSD 7 (sidewinder) animals now.\n\nOh, thanks! I'm guessing they'll fail their next runs, but I'll\nwait to see confirmation of that before I do anything about the\ntest bug.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 16:16:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-17 22:16, Tom Lane wrote:\n\n>> For what it's worth I've enabled tap-tests for my OpenBSD 5.9 (curculio)\n>> and NetBSD 7 (sidewinder) animals now.\n> \n> Oh, thanks! I'm guessing they'll fail their next runs, but I'll\n> wait to see confirmation of that before I do anything about the\n> test bug.\n\nThey should run the next time within the hour or hour and a half so I \nguess we will find out soon enough.\n\n/Mikael\n\n\n",
"msg_date": "Thu, 17 Jan 2019 22:19:25 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-17 22:19, Mikael Kjellström wrote:\n\n> On 2019-01-17 22:16, Tom Lane wrote:\n> \n>>> For what it's worth I've enabled tap-tests for my OpenBSD 5.9 (curculio)\n>>> and NetBSD 7 (sidewinder) animals now.\n>>\n>> Oh, thanks! I'm guessing they'll fail their next runs, but I'll\n>> wait to see confirmation of that before I do anything about the\n>> test bug.\n> \n> They should run the next time within the hour or hour and a half so I \n> guess we will find out soon enough.\n\nHm, that didn't go so well.\n\nIt says:\n\nconfigure: error: Additional Perl modules are required to run TAP tests\n\nso how do I find out with Perl modules that are required?\n\n/Mikael\n\n",
"msg_date": "Thu, 17 Jan 2019 22:38:10 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> It says:\n> configure: error: Additional Perl modules are required to run TAP tests\n> so how do I find out with Perl modules that are required?\n\nIf you look into the configure log it should say just above that,\nbut I'm betting you just need p5-IPC-Run.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 16:42:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\n\nOn 2019-01-17 22:42, Tom Lane wrote:\n\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> It says:\n>> configure: error: Additional Perl modules are required to run TAP tests\n>> so how do I find out with Perl modules that are required?\n> \n> If you look into the configure log it should say just above that,\n> but I'm betting you just need p5-IPC-Run.\n\nYes it seems to be IPC::Run that is missing.\n\nI've installed it manually through CPAN.\n\nLet's see if it works better this time.\n\n/Mikael\n\n\n",
"msg_date": "Thu, 17 Jan 2019 22:47:33 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "On 2019-01-17 22:47, Mikael Kjellström wrote:\n> \n> \n> On 2019-01-17 22:42, Tom Lane wrote:\n> \n>> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>>> It says:\n>>> configure: error: Additional Perl modules are required to run TAP tests\n>>> so how do I find out with Perl modules that are required?\n>>\n>> If you look into the configure log it should say just above that,\n>> but I'm betting you just need p5-IPC-Run.\n> \n> Yes it seems to be IPC::Run that is missing.\n> \n> I've installed it manually through CPAN.\n> \n> Let's see if it works better this time.\n\nHmmm, nope:\n\n================== \npgsql.build/src/bin/pg_ctl/tmp_check/log/003_promote_standby.log \n===================\n2019-01-17 23:09:20.343 CET [9129] LOG: listening on Unix socket \n\"/tmp/g66P1fpMFK/.s.PGSQL.64980\"\n2019-01-17 23:09:20.343 CET [9129] FATAL: could not create semaphores: \nNo space left on device\n2019-01-17 23:09:20.343 CET [9129] DETAIL: Failed system call was \nsemget(64980002, 17, 03600).\n2019-01-17 23:09:20.343 CET [9129] HINT: This error does *not* mean \nthat you have run out of disk space. It occurs when either the system \nlimit for the maximum number of semaphore sets (SEMMNI), or the system \nwide maximum number of semaphores (SEMMNS), would be exceeded. You need \nto raise the respective kernel parameter. Alternatively, reduce \nPostgreSQL's consumption of semaphores by reducing its max_connections \nparameter.\n\tThe PostgreSQL documentation contains more information about \nconfiguring your system for PostgreSQL.\n2019-01-17 23:09:20.345 CET [9129] LOG: database system is shut down\n\nwill try and increase SEMMNI and see if that helps.\n\n/Mikael\n\n\n",
"msg_date": "Thu, 17 Jan 2019 23:12:43 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> Let's see if it works better this time.\n\n> Hmmm, nope:\n\n> 2019-01-17 23:09:20.343 CET [9129] FATAL: could not create semaphores: \n> No space left on device\n\nYeah, you might've been able to get by with OpenBSD/NetBSD's default\nsemaphore settings before, but they really only let one postmaster\nrun at a time; and the TAP tests want to start more than one.\nFor me it seems to work to append this to /etc/sysctl.conf:\n\nkern.seminfo.semmni=100\nkern.seminfo.semmns=2000\n\nand either reboot, or install those settings manually with sysctl.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 17:23:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-17 23:23, Tom Lane wrote:\n\n> Yeah, you might've been able to get by with OpenBSD/NetBSD's default\n> semaphore settings before, but they really only let one postmaster\n> run at a time; and the TAP tests want to start more than one.\n> For me it seems to work to append this to /etc/sysctl.conf:\n> \n> kern.seminfo.semmni=100\n> kern.seminfo.semmns=2000\n> \n> and either reboot, or install those settings manually with sysctl.\n\nLooks that way.\n\nI've increased the values and rebooted the machines.\n\nLet's hope 5th time is the charm :-)\n\n/Mikael\n\n",
"msg_date": "Thu, 17 Jan 2019 23:37:34 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-17 23:37, Mikael Kjellström wrote:\n> \n> On 2019-01-17 23:23, Tom Lane wrote:\n> \n>> Yeah, you might've been able to get by with OpenBSD/NetBSD's default\n>> semaphore settings before, but they really only let one postmaster\n>> run at a time; and the TAP tests want to start more than one.\n>> For me it seems to work to append this to /etc/sysctl.conf:\n>>\n>> kern.seminfo.semmni=100\n>> kern.seminfo.semmns=2000\n>>\n>> and either reboot, or install those settings manually with sysctl.\n> \n> Looks that way.\n> \n> I've increased the values and rebooted the machines.\n> \n> Let's hope 5th time is the charm :-)\n\nNope!\n\nBut it looks like in NetBSD the options are called:\n\nnetbsd7-pgbf# sysctl -a | grep semmn\nkern.ipc.semmni = 10\nkern.ipc.semmns = 60\nkern.ipc.semmnu = 30\n\nso I will try and set that in /etc/sysctl.conf and reboot and see what \nhappens.\n\n/Mikael\n\n\n",
"msg_date": "Thu, 17 Jan 2019 23:54:46 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\n\nOn 2019-01-17 23:54, Mikael Kjellström wrote:\n\n> But it looks like in NetBSD the options are called:\n> \n> netbsd7-pgbf# sysctl -a | grep semmn\n> kern.ipc.semmni = 10\n> kern.ipc.semmns = 60\n> kern.ipc.semmnu = 30\n> \n> so I will try and set that in /etc/sysctl.conf and reboot and see what \n> happens.\n\nThat seems to have done the trick:\n\nnetbsd7-pgbf# sysctl -a | grep semmn\nkern.ipc.semmni = 100\nkern.ipc.semmns = 2000\nkern.ipc.semmnu = 30\n\nI just started another run on sidewinder (NetBSD 7), let's see how that \ngoes.\n\nbut the OpenBSD machine went further and now fails on:\n\npgbenchCheck instead.\n\nIs that the failure you expected to get?\n\n/Mikael\n\n",
"msg_date": "Fri, 18 Jan 2019 00:00:49 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-18 00:00, Mikael Kjellström wrote:\n\n\n> I just started another run on sidewinder (NetBSD 7), let's see how that \n> goes.\n> \n> but the OpenBSD machine went further and now fails on:\n> \n> pgbenchCheck instead.\n> \n> Is that the failure you expected to get?\n\nAnd now also the NetBSD machine failed on pgbenchCheck.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-01-17%2022%3A57%3A14\n\nshould I leave it as it is for now?\n\n/Mikael\n\n",
"msg_date": "Fri, 18 Jan 2019 00:10:01 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> But it looks like in NetBSD the options are called:\n\nSorry about that, I copied-and-pasted from the openbsd machine I was\nlooking at without remembering that netbsd is just a shade different.\n\n> but the OpenBSD machine went further and now fails on:\n> pgbenchCheck instead.\n> Is that the failure you expected to get?\n\nYup, sure is. Thanks!\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 18:11:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> And now also the NetBSD machine failed on pgbenchCheck.\n\nIndeed, as expected.\n\n> should I leave it as it is for now?\n\nPlease. I'll push a fix for the broken test case in a bit --- I\njust wanted to confirm that somebody else's machines agreed that\nit's broken.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 18:31:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nOn 2019-01-18 00:31, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> And now also the NetBSD machine failed on pgbenchCheck.\n> \n> Indeed, as expected.\n\nOk.\n\n\n>> should I leave it as it is for now?\n> \n> Please. I'll push a fix for the broken test case in a bit --- I\n> just wanted to confirm that somebody else's machines agreed that\n> it's broken.\n\nOk, I will leave it on then.\n\n/Mikael\n\n\n",
"msg_date": "Fri, 18 Jan 2019 00:46:45 +0100",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I am, TBH, inclined to fix this by removing that test case rather\n>> than teaching it another spelling to accept. I think it's very\n>> hard to make the case that tests like this one are anything but\n>> a waste of developer and buildfarm time. When they are also a\n>> portability hazard, it's time to cut our losses. (I also note\n>> that this test has caused us problems before, cf 869aa40a2 and\n>> 933851033.)\n\n> I'd rather keep it by simply adding the \"|unknown\" alternative. 30 years \n> of programming have taught me that testing limit & error cases is useful, \n> although you never know when it will be proven so.\n\nSorry, I don't buy this line of argument. Reasonable test design requires\nmaking cost/benefit tradeoffs: the cost to run the test over and over,\nand the cost to maintain the test itself (e.g. fix portability issues in\nit) have to be balanced against the probability of it finding something\nuseful. I judge that the chance of this particular test finding something\nis small, and I've had quite enough of the maintenance costs.\n\nJust to point up that we're still not clearly done with the maintenance\ncosts of supposing that we know how every version of getopt_long will\nspell this error message, I note that my Linux box seems to have two\nvariants of it:\n\n$ pgbench -z \npgbench: invalid option -- 'z'\nTry \"pgbench --help\" for more information.\n$ pgbench --z\npgbench: unrecognized option '--z'\nTry \"pgbench --help\" for more information.\n\nof which the \"invalid\" alternative is also not in our list right now.\nWho's to say how many more versions of getopt_long are out there,\nor what the maintainers thereof might do in the future?\n\n> I agree that some tests can be useless, but I do not think that it applies \n> to this one. 
This test also checks that under a bad option pgbench stops \n> with an appropriate 1 exit status.\n\nIt's possible that it's worth the trouble to check for exit status 1,\nbut I entirely fail to see the point of checking exactly what is the\nspelling of a message that is issued by code not under our control.\n\nLooking closer at the test case:\n\n [\n 'bad option',\n '-h home -p 5432 -U calvin -d --bad-option',\n [ qr{(unrecognized|illegal) option}, qr{--help.*more information} ]\n ],\n\nISTM that just removing the first qr{} pattern, and checking only that\nwe get the text that *is* under our control, is a reasonable compromise\nhere.\n\n regards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 19:21:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "On Thu, Jan 17, 2019 at 07:21:08PM -0500, Tom Lane wrote:\n> Sorry, I don't buy this line of argument. Reasonable test design requires\n> making cost/benefit tradeoffs: the cost to run the test over and over,\n> and the cost to maintain the test itself (e.g. fix portability issues in\n> it) have to be balanced against the probability of it finding something\n> useful. I judge that the chance of this particular test finding something\n> is small, and I've had quite enough of the maintenance costs.\n\nYes, I agree with Tom's line of thoughts here. It seems to me that\njust dropping this part of the test is just but fine.\n--\nMichael",
"msg_date": "Fri, 18 Jan 2019 09:43:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "BTW, if you're wondering why curculio is still failing the pgbench\ntest, all is explained here:\n\nhttps://man.openbsd.org/srandom\n\nOr at least most is explained there. While curculio is unsurprisingly\nfailing all four seeded_random tests, when I try it locally on an\nOpenBSD 6.4 installation, only the uniform, exponential, and gaussian\ncases reliably \"fail\". zipfian usually doesn't. It looks like the\nzipfian code almost always produces 4000 regardless of the seed value,\nthough occasionally it produces 4001. Bad parameters for that\nalgorithm, perhaps?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 Jan 2019 01:18:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\n>> I'd rather keep it by simply adding the \"|unknown\" alternative. 30 years\n>> of programming have taught me that testing limit & error cases is useful,\n>> although you never know when it will be proven so.\n>\n> Sorry, I don't buy this line of argument.\n\n> Reasonable test design requires making cost/benefit tradeoffs: the cost \n> to run the test over and over, and the cost to maintain the test itself \n> (e.g. fix portability issues in it) have to be balanced against the \n> probability of it finding something useful. I judge that the chance of \n> this particular test finding something is small, and I've had quite \n> enough of the maintenance costs.\n>\n> Just to point up that we're still not clearly done with the maintenance\n> costs of supposing that we know how every version of getopt_long will\n> spell this error message, I note that my Linux box seems to have two\n> variants of it:\n>\n> $ pgbench -z\n> pgbench: invalid option -- 'z'\n> Try \"pgbench --help\" for more information.\n> $ pgbench --z\n> pgbench: unrecognized option '--z'\n> Try \"pgbench --help\" for more information.\n>\n> of which the \"invalid\" alternative is also not in our list right now.\n> Who's to say how many more versions of getopt_long are out there,\n> or what the maintainers thereof might do in the future?\n\nISTM that the getopt implementers imagination should run out in the end:-) \ninvalid, unknown, unrecognized, unexpected, incorrect... Ok English has \nmany words:-)\n\n>> I agree that some tests can be useless, but I do not think that it applies\n>> to this one. 
This test also checks that under a bad option pgbench stops\n>> with an appropriate 1 exit status.\n>\n> It's possible that it's worth the trouble to check for exit status 1,\n> but I entirely fail to see the point of checking exactly what is the\n> spelling of a message that is issued by code not under our control.\n>\n> Looking closer at the test case:\n>\n> [\n> 'bad option',\n> '-h home -p 5432 -U calvin -d --bad-option',\n> [ qr{(unrecognized|illegal) option}, qr{--help.*more information} ]\n> ],\n>\n> ISTM that just removing the first qr{} pattern, and checking only that\n> we get the text that *is* under our control, is a reasonable compromise\n> here.\n\nPossibly. I'd be a little happier if it checks for a non-empty error \nmessage, eg qr{...} or qr{option} (the message should say something about \nthe option).\n\n-- \nFabien.\n\n",
"msg_date": "Fri, 18 Jan 2019 09:26:49 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\n> BTW, if you're wondering why curculio is still failing the pgbench\n> test,\n\nHmmm, that is interesting! It shows that at least some TAP tests are \nuseful.\n\n> all is explained here:\n>\n> https://man.openbsd.org/srandom\n>\n> Or at least most is explained there.\n\nYep. They try to be more serious than other systems about PRNG, which is \nnot bad in itself.\n\n> While curculio is unsurprisingly failing all four seeded_random tests, \n> when I try it locally on an OpenBSD 6.4 installation, only the uniform, \n> exponential, and gaussian cases reliably \"fail\". zipfian usually \n> doesn't.\n\n> It looks like the zipfian code almost always produces 4000 regardless of \n> the seed value, though occasionally it produces 4001. Bad parameters \n> for that algorithm, perhaps?\n\nWelcome to the zipfian highly skewed distribution! I'll check the \nparameters used in the test, maybe it should use something less extreme.\n\nsrandom is only used for initializing the state of various internal rand48 \nLCG PRNG for pgbench.\n\nMaybe on OpenBSD pg should switch srandom to srandom_deterministic?\n\n-- \nFabien.\n\n",
"msg_date": "Fri, 18 Jan 2019 09:37:26 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> all is explained here:\n>> https://man.openbsd.org/srandom\n>> Or at least most is explained there.\n\n> Yep. They try to be more serious than other systems about PRNG, which is \n> not bad in itself.\n\n> Maybe on OpenBSD pg should switch srandom to srandom_deterministic?\n\nDunno. I'm fairly annoyed by their idea that they're smarter than POSIX.\nHowever, for most of our uses of srandom, this behavior isn't awful;\nit's only pgbench that has an expectation that the platform random()\ncan be made to behave deterministically. And TBH I think that's just\nan expectation that's going to bite us.\n\nI'd suggest that maybe we should get rid of the use of both random()\nand srandom() in pgbench, and go over to letting set_random_seed()\nfill the pg_erand48 state directly. In the integer-seed case you\ncould use something equivalent to pg_srand48. (In the other cases\nprobably you could do better, certainly the strong-random case could\njust fill all 6 bytes directly.) That would get us to a place where\nthe behavior of --random-seed=N is not only deterministic but\nplatform-independent, which seems like an improvement.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 Jan 2019 12:56:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\n>> Maybe on OpenBSD pg should switch srandom to srandom_deterministic?\n>\n> Dunno. I'm fairly annoyed by their idea that they're smarter than POSIX.\n> However, for most of our uses of srandom, this behavior isn't awful;\n> it's only pgbench that has an expectation that the platform random()\n> can be made to behave deterministically. And TBH I think that's just\n> an expectation that's going to bite us.\n>\n> I'd suggest that maybe we should get rid of the use of both random()\n> and srandom() in pgbench, and go over to letting set_random_seed()\n> fill the pg_erand48 state directly. In the integer-seed case you\n> could use something equivalent to pg_srand48. (In the other cases\n> probably you could do better, certainly the strong-random case could\n> just fill all 6 bytes directly.) That would get us to a place where\n> the behavior of --random-seed=N is not only deterministic but\n> platform-independent, which seems like an improvement.\n\nThat's a point. Althought I'm not found of round48, indeed having \nsomething platform independent for testing makes definite sense.\n\nI'll look into it.\n\n-- \nFabien.\n\n",
"msg_date": "Fri, 18 Jan 2019 23:01:07 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Hello Tom,\n\n>>> Maybe on OpenBSD pg should switch srandom to srandom_deterministic?\n>> \n>> Dunno. I'm fairly annoyed by their idea that they're smarter than POSIX.\n\nHmmm. I'm afraid that is not that hard.\n\n>> However, for most of our uses of srandom, this behavior isn't awful;\n>> it's only pgbench that has an expectation that the platform random()\n>> can be made to behave deterministically. And TBH I think that's just\n>> an expectation that's going to bite us.\n>> \n>> I'd suggest that maybe we should get rid of the use of both random()\n>> and srandom() in pgbench, and go over to letting set_random_seed()\n>> fill the pg_erand48 state directly. In the integer-seed case you\n>> could use something equivalent to pg_srand48. (In the other cases\n>> probably you could do better, certainly the strong-random case could\n>> just fill all 6 bytes directly.) That would get us to a place where\n>> the behavior of --random-seed=N is not only deterministic but\n>> platform-independent, which seems like an improvement.\n>\n> That's a point. Althought I'm not found of round48, indeed having something \n> platform independent for testing makes definite sense.\n>\n> I'll look into it.\n\nHere is a POC which defines an internal interface for a PRNG, and use it \nwithin pgbench, with several possible implementations which default to \nrand48.\n\nI must admit that I have a grudge against standard rand48:\n\n - it is a known poor PRNG which was designed at a time when LCG where\n basically the only low cost PRNG available. Newer designs were very\n recent when the standard was set.\n - it is a LCG, i.e. 
its low bits cycle quickly, so should not be used.\n - so the 48 bit state size is relevant for generating 32 bits ints\n and floats.\n - however it eis used to generate more bits...\n - the double function uses all 48 bits, whereas 52 need to be filled...\n - and it is used to generate integers, which means that for large range\n some values are inaccessible.\n - 3 * 16 bits integers state looks silly on 32/64 bit architectures.\n - ...\n\nGiven that postgres needs doubles (52 bits mantissa) and possibly 64 bits \nintegers, IMO the internal state should be 64 bits as a bare minimum, \nwhich anyway is also the minimal bite on 64 bit architectures, which is \nwhat is encoutered in practice.\n\n-- \nFabien.",
"msg_date": "Sun, 20 Jan 2019 11:07:46 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> I'd suggest that maybe we should get rid of the use of both random()\n>>> and srandom() in pgbench, and go over to letting set_random_seed()\n>>> fill the pg_erand48 state directly.\n\n> Here is a POC which defines an internal interface for a PRNG, and use it \n> within pgbench, with several possible implementations which default to \n> rand48.\n\nI seriously dislike this patch. pgbench's random support is quite\noverengineered already IMO, and this proposes to add a whole batch of\nnew code and new APIs to fix a very small bug.\n\n> I must admit that I have a grudge against standard rand48:\n\nI think this is nonsense, particularly the claim that anything in PG\ncares about the lowest-order bits of random doubles. I'm aware that\nthere are applications where that does matter, but people aren't doing\nhigh-precision weather simulations in pgbench.\n\nBTW, did you look at the question of the range of zipfian? I confirmed\nhere that as used in the test case, it's generating a range way smaller\nthan the other ones: repeating the insertion snippet 1000x produces stats\nlike this:\n\nregression=# select seed,rand,min(val),max(val),count(distinct val) from seeded_random group by 1,2 order by 2,1;\n seed | rand | min | max | count \n------------+-------------+------+------+-------\n 1957482663 | exponential | 2000 | 2993 | 586\n 1958556409 | exponential | 2000 | 2995 | 569\n 1959867462 | exponential | 2000 | 2997 | 569\n 1957482663 | gaussian | 3009 | 3997 | 493\n 1958556409 | gaussian | 3027 | 3956 | 501\n 1959867462 | gaussian | 3018 | 3960 | 511\n 1957482663 | uniform | 1001 | 1999 | 625\n 1958556409 | uniform | 1000 | 1999 | 642\n 1959867462 | uniform | 1001 | 1999 | 630\n 1957482663 | zipfian | 4000 | 4081 | 19\n 1958556409 | zipfian | 4000 | 4022 | 18\n 1959867462 | zipfian | 4000 | 4156 | 23\n\nI have no idea whether that indicates an actual bug, or just poor\nchoice of parameter in the test's call. 
But the very small number\nof distinct outputs is disheartening at least.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 20 Jan 2019 15:26:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "\nHello Tom,\n\n>> Here is a POC which defines an internal interface for a PRNG, and use it\n>> within pgbench, with several possible implementations which default to\n>> rand48.\n>\n> I seriously dislike this patch. pgbench's random support is quite\n> overengineered already IMO, and this proposes to add a whole batch of\n> new code and new APIs to fix a very small bug.\n\nMy intention is rather to discuss postgres' PRNG, in passing. Full success \non this point:-)\n\n>> I must admit that I have a grudge against standard rand48:\n>\n> I think this is nonsense, particularly the claim that anything in PG\n> cares about the lowest-order bits of random doubles. I'm aware that\n> there are applications where that does matter, but people aren't doing\n> high-precision weather simulations in pgbench.\n\nSure. My point is not that it is an actual issue for pgbench, but as the \nsame PRNG is used more or less everywhere in postgres, I think that it \nshould be a good one rather than a known bad one.\n\nEg, about double:\n\n \\set i debug(random(1, POWER(2,49)) % 2)\n\nAlways return 1 because of the 48 bit precision, i.e. the output is never \neven.\n\n \\set i debug(random(1, POWER(2,48)) % 2)\n\nReturn 0 1 0 1 0 1 0 1 0 1 0 1 0 1 ... because it is a LCG.\n\n \\set i debug(random(1, POWER(2,48)) % 4)\n\nCycles over (3 2 1 0)*\n\n \\set i debug(random(1, power(2, 47)) % 4)\n\nCycles over (0 0 1 1 2 2 3 3)*, and so on.\n\n> BTW, did you look at the question of the range of zipfian?\n\nYep.\n\n> I confirmed here that as used in the test case, it's generating a range \n> way smaller than the other ones: repeating the insertion snippet 1000x \n> produces stats like this: [...]\n\n> I have no idea whether that indicates an actual bug, or just poor\n> choice of parameter in the test's call. But the very small number\n> of distinct outputs is disheartening at least.\n\nZipf distribution is highly skewed, somehow close to an exponential. 
To \nreduce the decreasing probability the parameter must be closer to 1, eg \n1.05 or something. However as far as the test is concerned I do not see \nthis as a significant issue. I was rather planning to submit a \ndocumentation improvement to provide more precise hints about how the \ndistribution behaves depending on the parameter, and possibly reduce the \nparameter used in the test in passing, but I see this as not very urgent.\n\n-- \nFabien.\n\n",
"msg_date": "Sun, 20 Jan 2019 22:54:41 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Hello Tom,\n\n>> BTW, did you look at the question of the range of zipfian?\n>\n> Yep.\n>\n>> I confirmed here that as used in the test case, it's generating a range way \n>> smaller than the other ones: repeating the insertion snippet 1000x produces \n>> stats like this: [...]\n>\n>> I have no idea whether that indicates an actual bug, or just poor\n>> choice of parameter in the test's call. But the very small number\n>> of distinct outputs is disheartening at least.\n>\n> Zipf distribution is highly skewed, somehow close to an exponential. To \n> reduce the decreasing probability the parameter must be closer to 1, eg 1.05 \n> or something. However as far as the test is concerned I do not see this as a \n> significant issue. I was rather planning to submit a documentation \n> improvement to provide more precise hints about how the distribution behaves \n> depending on the parameter, and possibly reduce the parameter used in the \n> test in passing, but I see this as not very urgent.\n\nAttached a documentation patch and a scripts to check the distribution \n(here for N = 10 & s = 2.5), the kind of thing I used when checking the \ninitial patch:\n\n sh> psql < zipf_init.sql\n sh> pgbench -t 500000 -c 2 -M prepared -f zipf_test.sql -P 1\n -- close to 29000 tps on my laptop\n sh> psql < zipf_end.sql\n ┌────┬────────┬────────────────────┬────────────────────────┐\n │ i │ cnt │ ratio │ expected │\n ├────┼────────┼────────────────────┼────────────────────────┤\n │ 1 │ 756371 │ • │ • │\n │ 2 │ 133431 │ 5.6686302283577280 │ 5.65685424949238019521 │\n │ 3 │ 48661 │ 2.7420521567579787 │ 2.7556759606310754 │\n │ 4 │ 23677 │ 2.0552012501583816 │ 2.0528009571186693 │\n │ 5 │ 13534 │ 1.7494458401063987 │ 1.7469281074217107 │\n │ 6 │ 8773 │ 1.5426877920893651 │ 1.5774409656148784 │\n │ 7 │ 5709 │ 1.5366964442108951 │ 1.4701680288054869 │\n │ 8 │ 4247 │ 1.3442429950553332 │ 1.3963036312159316 │\n │ 9 │ 3147 │ 1.3495392437241818 │ 1.3423980299088363 │\n │ 10 │ 
2450 │ 1.2844897959183673 │ 1.3013488313450120 │\n └────┴────────┴────────────────────┴────────────────────────┘\n sh> psql < zipf_clean.sql\n\nGiven these results, I do not think that it is useful to change \nrandom_zipfian TAP test parameter from 2.5 to something else.\n\n-- \nFabien.",
"msg_date": "Tue, 22 Jan 2019 11:16:02 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Given these results, I do not think that it is useful to change \n> random_zipfian TAP test parameter from 2.5 to something else.\n\nI'm not following this argument. The test case is basically useless\nfor its intended purpose with that parameter, because it's highly\nlikely that the failure mode it's supposedly checking for will be\nmasked by the \"random\" function's tendency to spit out the same\nvalue all the time. We might as well drop zipfian from the test\naltogether and save ourselves some buildfarm cycles.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 22 Jan 2019 10:46:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Hello Tom,\n\n>> Given these results, I do not think that it is useful to change\n>> random_zipfian TAP test parameter from 2.5 to something else.\n>\n> I'm not following this argument. The test case is basically useless\n> for its intended purpose with that parameter, because it's highly\n> likely that the failure mode it's supposedly checking for will be\n> masked by the \"random\" function's tendency to spit out the same\n> value all the time.\n\nThe first value is taken about 75% of the time for N=1000 and s=2.5, which \nmeans that a non deterministic implementation would succeed about 0.75ᅵ ~ \n56% of the time on that one. Then there is other lower probability random \nsuccesses. ISTM that if a test fails every three run it would be detected, \nso the purpose of testing random_zipfian determinism is somehow served.\n\nAlso, the drawing procedure is less efficient when the parameter is close \nto 1 because it is more likely to loop, and there are other values tested, \n0.5 and 1.3 (note that the code has two methods, depending on whether the \nparameter is below or above 1), so I think that having something different \nis better.\n\nIf you want something more drastic, using 1.5 instead of 2.5 would reduce \nthe probability of accidentaly passing the test by chance to about 20%, so \nit would fail 80% of the time.\n\n> We might as well drop zipfian from the test altogether and save \n> ourselves some buildfarm cycles.\n\nAll 4 random functions are tested together on the same run, removing a \nparticular one does not seem desirable to me.\n\n-- \nFabien.",
"msg_date": "Tue, 22 Jan 2019 17:19:11 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> Here is a POC which defines an internal interface for a PRNG, and use it\n>>> within pgbench, with several possible implementations which default to\n>>> rand48.\n\n>> I seriously dislike this patch. pgbench's random support is quite\n>> overengineered already IMO, and this proposes to add a whole batch of\n>> new code and new APIs to fix a very small bug.\n\n> My intention is rather to discuss postgres' PRNG, in passing. Full success \n> on this point:-)\n\nOur immediate problem is to fix a portability failure, which we need to\nback-patch into at least one released branch, ergo conservatism is\nwarranted. I had in mind something more like the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 22 Jan 2019 11:44:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I'm not following this argument. The test case is basically useless\n>> for its intended purpose with that parameter, because it's highly\n>> likely that the failure mode it's supposedly checking for will be\n>> masked by the \"random\" function's tendency to spit out the same\n>> value all the time.\n\n> The first value is taken about 75% of the time for N=1000 and s=2.5, which \n> means that a non deterministic implementation would succeed about 0.75² ~ \n> 56% of the time on that one.\n\nRight, that's about what we've been seeing on OpenBSD.\n\n> Also, the drawing procedure is less efficient when the parameter is close \n> to 1 because it is more likely to loop,\n\nThat might be something to fix, but I agree it's a reason not to go\noverboard trying to flatten the test case's distribution right now.\n\n> If you want something more drastic, using 1.5 instead of 2.5 would reduce \n> the probability of accidentaly passing the test by chance to about 20%, so \n> it would fail 80% of the time.\n\nI think your math is off; 1.5 works quite well here. I saw one failure\nto produce distinct values in 20 attempts. It's not demonstrably slower\nthan 2.5 either. (1.1 is measurably slower; probably not by enough for\nanyone to care, but 1.5 is good enough for me.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 22 Jan 2019 12:12:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": ">> The first value is taken about 75% of the time for N=1000 and s=2.5, which\n>> means that a non deterministic implementation would succeed about 0.75² ~\n>> 56% of the time on that one.\n>\n> Right, that's about what we've been seeing on OpenBSD.\n>\n>> Also, the drawing procedure is less efficient when the parameter is close\n>> to 1 because it is more likely to loop,\n>\n> That might be something to fix, but I agree it's a reason not to go\n> overboard trying to flatten the test case's distribution right now.\n\nProbably you would have to invent a new method to draw a zipfian \ndistribution for that, which would be nice.\n\n>> If you want something more drastic, using 1.5 instead of 2.5 would reduce\n>> the probability of accidentaly passing the test by chance to about 20%, so\n>> it would fail 80% of the time.\n>\n> I think your math is off;\n\nArgh. Although I confirm my computation, ISTM that with 1.5 the first \nvalue has 39% chance of getting out so collision on 15% of cases, second \nvalue 14% so collision on 2%, ... total cumulated probability about 18%.\n\n> 1.5 works quite well here. I saw one failure to produce distinct values \n> in 20 attempts.\n\nFor 3 failure expected, that is possible.\n\n> It's not demonstrably slower than 2.5 either. (1.1 is measurably \n> slower; probably not by enough for anyone to care, but 1.5 is good \n> enough for me.)\n\nGood if it fails quick enough for you.\n\n-- \nFabien.",
"msg_date": "Tue, 22 Jan 2019 20:07:05 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": ">> It's not demonstrably slower than 2.5 either. (1.1 is measurably slower; \n>> probably not by enough for anyone to care, but 1.5 is good enough for me.)\n>\n> Good if it fails quick enough for you.\n\nAttached a patch with the zipf doc update & the TAP test parameter change.\n\n-- \nFabien.",
"msg_date": "Tue, 22 Jan 2019 20:58:50 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Hello Tom,\n\n>>>> Here is a POC which defines an internal interface for a PRNG, and use it\n>>>> within pgbench, with several possible implementations which default to\n>>>> rand48.\n>\n>>> I seriously dislike this patch. pgbench's random support is quite\n>>> overengineered already IMO, and this proposes to add a whole batch of\n>>> new code and new APIs to fix a very small bug.\n>\n>> My intention is rather to discuss postgres' PRNG, in passing. Full success\n>> on this point:-)\n>\n> Our immediate problem is to fix a portability failure, which we need to\n> back-patch into at least one released branch, ergo conservatism is\n> warranted.\n\nSure, the patch I sent is definitely not for backpatching, it is for \ndiscussion.\n\n> I had in mind something more like the attached.\n\nYep.\n\nI'm not too happy that it mixes API levels, and about the int/double/int \npath.\n\nAttached an updated version which relies on pg_jrand48 instead. Also, as \nthe pseudo-random state is fully controlled, seeded test results are \ndeterministic so the expected value can be fully checked.\n\nI did a few sanity tests which were all ok.\n\nI think that this version is appropriate for backpatching. I also think \nthat it would be appropriate to consider having a better PRNG to replace \nrand48 in a future release.\n\n-- \nFabien.",
"msg_date": "Tue, 22 Jan 2019 21:35:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I had in mind something more like the attached.\n\n> Yep.\n> I'm not too happy that it mixes API levels, and about the int/double/int \n> path.\n> Attached an updated version which relies on pg_jrand48 instead.\n\nHm, I'm not sure that's really an improvement, but I pushed it like that\n(and the other change along with it).\n\n> Also, as \n> the pseudo-random state is fully controlled, seeded test results are \n> deterministic so the expected value can be fully checked.\n\nI found that the \"expected value\" was different in v11 than HEAD,\nwhich surprised me. It looks like the reason is that HEAD sets up\nmore/different RandomStates from the same seed than v11 did. Not\nsure if it's a good thing for this behavior to change across versions.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 24 Jan 2019 11:35:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
},
{
"msg_contents": "On 2019-Jan-24, Tom Lane wrote:\n\n> > Also, as \n> > the pseudo-random state is fully controlled, seeded test results are \n> > deterministic so the expected value can be fully checked.\n> \n> I found that the \"expected value\" was different in v11 than HEAD,\n> which surprised me. It looks like the reason is that HEAD sets up\n> more/different RandomStates from the same seed than v11 did. Not\n> sure if it's a good thing for this behavior to change across versions.\n\nThe rationale behind this was that some internal uses of random numbers\nmessed up the determinism of user-invoked random functions; 409231919443\ncommit message says\n\n While at it, use separate random state for thread administratrivia such\n as deciding which script to run, how long to delay for throttling, or\n whether to log a message when sampling; this not only makes these tasks\n independent of each other, but makes the actual thread run\n deterministic.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 24 Jan 2019 13:45:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: we lack TAP test coverage on NetBSD and OpenBSD"
}
] |
[
{
"msg_contents": "Remove references to Majordomo\n\nLists are not handled by Majordomo anymore and haven't been for a while,\nso remove the reference and instead direct people to the list server.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0e10040e19db02a797a2597d2fecbaa094f04866\n\nModified Files\n--------------\ndoc/src/sgml/problems.sgml | 12 ++++--------\n1 file changed, 4 insertions(+), 8 deletions(-)\n\n",
"msg_date": "Thu, 17 Jan 2019 13:04:44 +0000",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Thu, Jan 17, 2019 at 01:04:44PM +0000, Magnus Hagander wrote:\n> Remove references to Majordomo\n> \n> Lists are not handled by Majordomo anymore and haven't been for a while,\n> so remove the reference and instead direct people to the list server.\n\nWouldn't it be better to also switch the references to pgsql-bugs in\nall the C code for the different --help outputs?\n--\nMichael",
"msg_date": "Fri, 18 Jan 2019 09:26:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jan 17, 2019 at 01:04:44PM +0000, Magnus Hagander wrote:\n> > Remove references to Majordomo\n> >\n> > Lists are not handled by Majordomo anymore and haven't been for a while,\n> > so remove the reference and instead direct people to the list server.\n>\n> Wouldn't it be better to also switch the references to pgsql-bugs in\n> all the C code for the different --help outputs?\n>\n>\nYou are right, we definitely should. I'll go ahead and fix that. I can't\nquite make up my mind on if it's a good idea to backpatch that though --\nit's certainly safe enough to do, but it might cause issues for translators?\n\n//Magnus\n\nOn Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Jan 17, 2019 at 01:04:44PM +0000, Magnus Hagander wrote:\n> Remove references to Majordomo\n> \n> Lists are not handled by Majordomo anymore and haven't been for a while,\n> so remove the reference and instead direct people to the list server.\n\nWouldn't it be better to also switch the references to pgsql-bugs in\nall the C code for the different --help outputs?You are right, we definitely should. I'll go ahead and fix that. I can't quite make up my mind on if it's a good idea to backpatch that though -- it's certainly safe enough to do, but it might cause issues for translators?//Magnus",
"msg_date": "Fri, 18 Jan 2019 11:38:51 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Wouldn't it be better to also switch the references to pgsql-bugs in\n>> all the C code for the different --help outputs?\n\n> You are right, we definitely should. I'll go ahead and fix that. I can't\n> quite make up my mind on if it's a good idea to backpatch that though --\n> it's certainly safe enough to do, but it might cause issues for translators?\n\nYeah, weak -1 for back-patching. We don't usually like to thrash\ntranslatable messages in the back branches.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 Jan 2019 10:02:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 10:02:47AM -0500, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> You are right, we definitely should. I'll go ahead and fix that. I can't\n>> quite make up my mind on if it's a good idea to backpatch that though --\n>> it's certainly safe enough to do, but it might cause issues for translators?\n> \n> Yeah, weak -1 for back-patching. We don't usually like to thrash\n> translatable messages in the back branches.\n\nYes, I think that it is better to not bother about back-branches and\njust do that on HEAD.\n--\nMichael",
"msg_date": "Sat, 19 Jan 2019 09:04:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> Wouldn't it be better to also switch the references to pgsql-bugs in\n> >> all the C code for the different --help outputs?\n>\n> > You are right, we definitely should. I'll go ahead and fix that. I can't\n> > quite make up my mind on if it's a good idea to backpatch that though --\n> > it's certainly safe enough to do, but it might cause issues for\n> translators?\n>\n> Yeah, weak -1 for back-patching. We don't usually like to thrash\n> translatable messages in the back branches.\n>\n\nPushed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Wouldn't it be better to also switch the references to pgsql-bugs in\n>> all the C code for the different --help outputs?\n\n> You are right, we definitely should. I'll go ahead and fix that. I can't\n> quite make up my mind on if it's a good idea to backpatch that though --\n> it's certainly safe enough to do, but it might cause issues for translators?\n\nYeah, weak -1 for back-patching. We don't usually like to thrash\ntranslatable messages in the back branches.Pushed. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 19 Jan 2019 19:13:35 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Magnus Hagander <magnus@hagander.net> writes:\n> > > On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz>\n> > wrote:\n> > >> Wouldn't it be better to also switch the references to pgsql-bugs in\n> > >> all the C code for the different --help outputs?\n> >\n> > > You are right, we definitely should. I'll go ahead and fix that. I can't\n> > > quite make up my mind on if it's a good idea to backpatch that though --\n> > > it's certainly safe enough to do, but it might cause issues for\n> > translators?\n> >\n> > Yeah, weak -1 for back-patching. We don't usually like to thrash\n> > translatable messages in the back branches.\n> \n> Pushed.\n\nDoes this also implicitly mean we've just agreed to push back the\nretirement of the @postgresql.org aliases for the lists until v11 is\nEOL..?\n\nI can understand the concern around translators and back-patching and\nsuch, but I don't think we should be waiting another 5 years before we\nretire those aliases as having them is preventing us from moving forward\nwith other infrastructure improvements to our email systems. I also\ndon't think it'd be ideal to wait until we are ready to retire those\naliases to make the change in the back-branches, so, I really think we\nshould back-patch this.\n\nI could see an argument for waiting until the next round of releases is\nout to give more time to translators, if we think that's necessary, but\ngiven that it's a pretty straight-forward change, I wouldn't think it'd\nbe too bad..\n\nThanks!\n\nStephen",
"msg_date": "Sat, 19 Jan 2019 13:19:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 7:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Magnus Hagander <magnus@hagander.net> writes:\n> > > > On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz\n> >\n> > > wrote:\n> > > >> Wouldn't it be better to also switch the references to pgsql-bugs in\n> > > >> all the C code for the different --help outputs?\n> > >\n> > > > You are right, we definitely should. I'll go ahead and fix that. I\n> can't\n> > > > quite make up my mind on if it's a good idea to backpatch that\n> though --\n> > > > it's certainly safe enough to do, but it might cause issues for\n> > > translators?\n> > >\n> > > Yeah, weak -1 for back-patching. We don't usually like to thrash\n> > > translatable messages in the back branches.\n> >\n> > Pushed.\n>\n> Does this also implicitly mean we've just agreed to push back the\n> retirement of the @postgresql.org aliases for the lists until v11 is\n> EOL..?\n>\n\nSpecifically for pgsql-bugs, yes :) We can special-case that one when the\ntime comes, and retire the other ones properly.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Jan 19, 2019 at 7:19 PM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Magnus Hagander <magnus@hagander.net> writes:\n> > > On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz>\n> > wrote:\n> > >> Wouldn't it be better to also switch the references to pgsql-bugs in\n> > >> all the C code for the different --help outputs?\n> >\n> > > You are right, we definitely should. I can't\n> > > quite make up my mind on if it's a good idea to backpatch that though --\n> > > it's certainly safe enough to do, but it might cause issues for\n> > translators?\n> >\n> > Yeah, weak -1 for back-patching. We don't usually like to thrash\n> > translatable messages in the back branches.\n> \n> Pushed.\n\nDoes this also implicitly mean we've just agreed to push back the\nretirement of the @postgresql.org aliases for the lists until v11 is\nEOL..?Specifically for pgsql-bugs, yes :) We can special-case that one when the time comes, and retire the other ones properly.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 19 Jan 2019 19:25:08 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Sat, Jan 19, 2019 at 7:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Does this also implicitly mean we've just agreed to push back the\n>> retirement of the @postgresql.org aliases for the lists until v11 is\n>> EOL..?\n\n> Specifically for pgsql-bugs, yes :) We can special-case that one when the\n> time comes, and retire the other ones properly.\n\nIf you're hoping to wait till nobody's copy of Postgres mentions the\n@postgresql.org addresses, you're going to be waiting a long time.\nI don't see a reason to suppose that pre-9.4 copies are going to\ndisappear from circulation anytime soon. Heck, they haven't even\ndisappeared from our website, e.g.\n\nhttps://www.postgresql.org/docs/9.3/bug-reporting.html\n\nSo I doubt that back-patching this particular commit would move the\ngoalposts very much in terms of when we think we can desupport the\n@postgresql.org addresses.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 19 Jan 2019 15:53:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Sat, Jan 19, 2019 at 7:19 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >> Does this also implicitly mean we've just agreed to push back the\n> >> retirement of the @postgresql.org aliases for the lists until v11 is\n> >> EOL..?\n> \n> > Specifically for pgsql-bugs, yes :) We can special-case that one when the\n> > time comes, and retire the other ones properly.\n\nThat might possibly work.\n\n> If you're hoping to wait till nobody's copy of Postgres mentions the\n> @postgresql.org addresses, you're going to be waiting a long time.\n\nI was thinking that we would want to make sure that supported versions\nhave the correct address, but if we're fine with special-casing the old\naliases and keeping them working, as Magnus suggests, then I suppose it\ndoesn't matter.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 21 Jan 2019 12:00:31 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 01:19:46PM -0500, Stephen Frost wrote:\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Fri, Jan 18, 2019 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Magnus Hagander <magnus@hagander.net> writes:\n> > > > On Fri, Jan 18, 2019 at 1:26 AM Michael Paquier <michael@paquier.xyz>\n> > > wrote:\n> > > >> Wouldn't it be better to also switch the references to pgsql-bugs in\n> > > >> all the C code for the different --help outputs?\n> > >\n> > > > You are right, we definitely should. I'll go ahead and fix that. I can't\n> > > > quite make up my mind on if it's a good idea to backpatch that though --\n> > > > it's certainly safe enough to do, but it might cause issues for\n> > > translators?\n> > >\n> > > Yeah, weak -1 for back-patching. We don't usually like to thrash\n> > > translatable messages in the back branches.\n> > \n> > Pushed.\n> \n> Does this also implicitly mean we've just agreed to push back the\n> retirement of the @postgresql.org aliases for the lists until v11 is\n> EOL..?\n> \n> I can understand the concern around translators and back-patching and\n> such, but I don't think we should be waiting another 5 years before we\n> retire those aliases as having them is preventing us from moving forward\n> with other infrastructure improvements to our email systems.\n\nWhat are those blocked infrastructure improvements?\n\n",
"msg_date": "Sat, 26 Jan 2019 23:28:01 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > Does this also implicitly mean we've just agreed to push back the\n> > retirement of the @postgresql.org aliases for the lists until v11 is\n> > EOL..?\n> >\n> > I can understand the concern around translators and back-patching and\n> > such, but I don't think we should be waiting another 5 years before we\n> > retire those aliases as having them is preventing us from moving forward\n> > with other infrastructure improvements to our email systems.\n>\n> What are those blocked infrastructure improvements?\n\n+1 for that question. I find myself wondering what infrastructure\nimprovements could possibly be important enough to justify rushing\nthis change (or for that matter, ever making it at all).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 28 Jan 2019 11:40:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > > Does this also implicitly mean we've just agreed to push back the\n> > > retirement of the @postgresql.org aliases for the lists until v11 is\n> > > EOL..?\n> > >\n> > > I can understand the concern around translators and back-patching and\n> > > such, but I don't think we should be waiting another 5 years before we\n> > > retire those aliases as having them is preventing us from moving forward\n> > > with other infrastructure improvements to our email systems.\n> >\n> > What are those blocked infrastructure improvements?\n> \n> +1 for that question. I find myself wondering what infrastructure\n> improvements could possibly be important enough to justify rushing\n> this change (or for that matter, ever making it at all).\n\nThe specific improvements we're talking about are DKIM/DMARC/SPF, which\nis becoming more and more important to making sure that the email from\nour lists can actually get through to the subscribers.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 28 Jan 2019 12:01:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n>> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n>>> What are those blocked infrastructure improvements?\n\n> The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> is becoming more and more important to making sure that the email from\n> our lists can actually get through to the subscribers.\n\nCertainly those are pretty critical. But can you give us a quick\nrefresher on why dropping the @postgresql.org list aliases is\nnecessary for that? I thought we'd already managed to make the\nlists compliant with those specs.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 28 Jan 2019 13:26:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> >> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> >>> What are those blocked infrastructure improvements?\n>\n> > The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> > is becoming more and more important to making sure that the email from\n> > our lists can actually get through to the subscribers.\n>\n> Certainly those are pretty critical. But can you give us a quick\n> refresher on why dropping the @postgresql.org list aliases is\n> necessary for that? I thought we'd already managed to make the\n> lists compliant with those specs.\n>\n\nI believe it doesn't, as Stephen also agreed with upthread.\n\nWe needed to move our *sending* out of the postgresql.org domain in order\nto be able to treat them differently. But there is nothing preventing us\nfrom receiving to e.g. pgsql-bugs@postgresql.org and internally forward it\nto @lists.postgresql.org, where we then deliver from.\n\nI believe we *can* do the same for all lists, but that part is more a\nmatter of cleaning up our infrastructure, which has a fair amount of cruft\nto deal with those things. We have an easy workaround for a couple of lists\nwhich would take only a fairly small amount of traffic over it, but we'd\nlike to get rid of the cruft to deal with the large batch of them.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Stephen Frost <sfrost@snowman.net> writes:\n>> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n>>> What are those blocked infrastructure improvements?\n\n> The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> is becoming more and more important to making sure that the email from\n> our lists can actually get through to the subscribers.\n\nCertainly those are pretty critical. But can you give us a quick\nrefresher on why dropping the @postgresql.org list aliases is\nnecessary for that? I thought we'd already managed to make the\nlists compliant with those specs.I believe it doesn't, as Stephen also agreed with upthread.We needed to move our *sending* out of the postgresql.org domain in order to be able to treat them differently. But there is nothing preventing us from receiving to e.g. pgsql-bugs@postgresql.org and internally forward it to @lists.postgresql.org, where we then deliver from.I believe we *can* do the same for all lists, but that part is more a matter of cleaning up our infrastructure, which has a fair amount of cruft to deal with those things. We have an easy workaround for a couple of lists which would take only a fairly small amount of traffic over it, but we'd like to get rid of the cruft to deal with the large batch of them. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 28 Jan 2019 19:29:39 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Stephen Frost <sfrost@snowman.net> writes:\n> > >> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > >>> What are those blocked infrastructure improvements?\n> >\n> > > The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> > > is becoming more and more important to making sure that the email from\n> > > our lists can actually get through to the subscribers.\n> >\n> > Certainly those are pretty critical. But can you give us a quick\n> > refresher on why dropping the @postgresql.org list aliases is\n> > necessary for that? I thought we'd already managed to make the\n> > lists compliant with those specs.\n> \n> I believe it doesn't, as Stephen also agreed with upthread.\n> \n> We needed to move our *sending* out of the postgresql.org domain in order\n> to be able to treat them differently. But there is nothing preventing us\n> from receiving to e.g. pgsql-bugs@postgresql.org and internally forward it\n> to @lists.postgresql.org, where we then deliver from.\n\nYes, I *think* this will work, as long as we are sending it back out\nfrom pgsql-bugs@lists.postgresql.org then we should be able to have SPF\nrecords for lists.postgresql.org and downstream mail servers should be\nhappy with that, though I still want to actually test it out in our test\ninstance of PGLister.\n\nThis is the main thing- we want to have lists.postgresql.org (and\nfriends) have SPF (and maybe DKIM..) records which basically say that\nmalur is allowed to send mail out from those lists (or with those lists\nin the From: of the email in the case of DKIM), but we don't want to\nmake everyone who is sending email from a @postgresql.org have to relay\nthrough our mail servers (well, at least not today.. we may get to a\npoint in the spam wars where we *have* to do that or their email ends up\nnot going through, but we aren't quite there yet).\n\n> I believe we *can* do the same for all lists, but that part is more a\n> matter of cleaning up our infrastructure, which has a fair amount of cruft\n> to deal with those things. We have an easy workaround for a couple of lists\n> which owuld take only a fairly small amount of traffic over it, but we'd\n> like to get rid of the cruft to deal with the large batch of them.\n\nYes, there's this aspect of it also.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 28 Jan 2019 13:43:42 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Mon, Jan 28, 2019 at 07:29:39PM +0100, Magnus Hagander wrote:\n> On Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > >> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > >>> What are those blocked infrastructure improvements?\n> >\n> > > The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> > > is becoming more and more important to making sure that the email from\n> > > our lists can actually get through to the subscribers.\n> >\n> > Certainly those are pretty critical. But can you give us a quick\n> > refresher on why dropping the @postgresql.org list aliases is\n> > necessary for that? I thought we'd already managed to make the\n> > lists compliant with those specs.\n> \n> I believe it doesn't, as Stephen also agreed with upthread.\n> \n> We needed to move our *sending* out of the postgresql.org domain in order\n> to be able to treat them differently. But there is nothing preventing us\n> from receiving to e.g. pgsql-bugs@postgresql.org and internally forward it\n> to @lists.postgresql.org, where we then deliver from.\n> \n> I believe we *can* do the same for all lists, but that part is more a\n> matter of cleaning up our infrastructure, which has a fair amount of cruft\n> to deal with those things. We have an easy workaround for a couple of lists\n> which owuld take only a fairly small amount of traffic over it, but we'd\n> like to get rid of the cruft to deal with the large batch of them.\n\nCeasing to accept mail at pgsql-FOO@postgresql.org would cause a concrete,\nuser-facing loss in that users replying to old messages would get a bounce.\nAlso, I find pgsql-FOO@lists.postgresql.org uglier, since \"lists\" adds\nnegligible information. (The same is true of \"pgsql\", alas.) If the cost of\nkeeping pgsql-FOO@postgresql.org is limited to \"cruft\", I'd prefer to keep\npgsql-FOO@postgresql.org indefinitely.\n\nnm\n\n",
"msg_date": "Sat, 2 Feb 2019 03:18:33 -0500",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
},
{
"msg_contents": "On Sat, Feb 2, 2019 at 9:18 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Mon, Jan 28, 2019 at 07:29:39PM +0100, Magnus Hagander wrote:\n> > On Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Stephen Frost <sfrost@snowman.net> writes:\n> > > >> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com>\n> wrote:\n> > > >>> What are those blocked infrastructure improvements?\n> > >\n> > > > The specific improvements we're talking about are DKIM/DMARC/SPF,\n> which\n> > > > is becoming more and more important to making sure that the email\n> from\n> > > > our lists can actually get through to the subscribers.\n> > >\n> > > Certainly those are pretty critical. But can you give us a quick\n> > > refresher on why dropping the @postgresql.org list aliases is\n> > > necessary for that? I thought we'd already managed to make the\n> > > lists compliant with those specs.\n> >\n> > I believe it doesn't, as Stephen also agreed with upthread.\n> >\n> > We needed to move our *sending* out of the postgresql.org domain in\n> order\n> > to be able to treat them differently. But there is nothing preventing us\n> > from receiving to e.g. pgsql-bugs@postgresql.org and internally forward\n> it\n> > to @lists.postgresql.org, where we then deliver from.\n> >\n> > I believe we *can* do the same for all lists, but that part is more a\n> > matter of cleaning up our infrastructure, which has a fair amount of\n> cruft\n> > to deal with those things. We have an easy workaround for a couple of\n> lists\n> > which owuld take only a fairly small amount of traffic over it, but we'd\n> > like to get rid of the cruft to deal with the large batch of them.\n>\n> Ceasing to accept mail at pgsql-FOO@postgresql.org would cause a concrete,\n> user-facing loss in that users replying to old messages would get a bounce.\n> Also, I find pgsql-FOO@lists.postgresql.org uglier, since \"lists\" adds\n> negligible information. (The same is true of \"pgsql\", alas.) 
If the cost\n> of\n> keeping pgsql-FOO@postgresql.org is limited to \"cruft\", I'd prefer to keep\n> pgsql-FOO@postgresql.org indefinitely.\n>\n\nIt very specifically *does* convey important information. It may not do so\nto you, but posting to an @lists.<something> domain is something that\nimplies that you understand you are posting to a list, more or less. Thus\nit makes a big difference when it comes to things like GDPR, per the\ninformation we have received from people who know a lot more about that\nthan we do. That part only applies to lists that are being delivered and\narchived publicly.\n\nI had forgotten about that part and went back to my notes.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Feb 2, 2019 at 9:18 AM Noah Misch <noah@leadboat.com> wrote:On Mon, Jan 28, 2019 at 07:29:39PM +0100, Magnus Hagander wrote:\n> On Mon, Jan 28, 2019 at 7:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > >> On Sun, Jan 27, 2019 at 2:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > >>> What are those blocked infrastructure improvements?\n> >\n> > > The specific improvements we're talking about are DKIM/DMARC/SPF, which\n> > > is becoming more and more important to making sure that the email from\n> > > our lists can actually get through to the subscribers.\n> >\n> > Certainly those are pretty critical. But can you give us a quick\n> > refresher on why dropping the @postgresql.org list aliases is\n> > necessary for that? I thought we'd already managed to make the\n> > lists compliant with those specs.\n> \n> I believe it doesn't, as Stephen also agreed with upthread.\n> \n> We needed to move our *sending* out of the postgresql.org domain in order\n> to be able to treat them differently. But there is nothing preventing us\n> from receiving to e.g. 
pgsql-bugs@postgresql.org and internally forward it\n> to @lists.postgresql.org, where we then deliver from.\n> \n> I believe we *can* do the same for all lists, but that part is more a\n> matter of cleaning up our infrastructure, which has a fair amount of cruft\n> to deal with those things. We have an easy workaround for a couple of lists\n> which owuld take only a fairly small amount of traffic over it, but we'd\n> like to get rid of the cruft to deal with the large batch of them.\n\nCeasing to accept mail at pgsql-FOO@postgresql.org would cause a concrete,\nuser-facing loss in that users replying to old messages would get a bounce.\nAlso, I find pgsql-FOO@lists.postgresql.org uglier, since \"lists\" adds\nnegligible information. (The same is true of \"pgsql\", alas.) If the cost of\nkeeping pgsql-FOO@postgresql.org is limited to \"cruft\", I'd prefer to keep\npgsql-FOO@postgresql.org indefinitely.It very specifically *does* convey important information. It may not do so to you, but posting to an @lists.<something> domain is something that implies that you understand you are posting to a list, more or less. Thus it makes a big difference when it comes to things like GDPR, per the information we have received from people who know a lot more about that than we do. That part only applies to lists that are being delivered and archived publicly.I had forgotten about that part and went back to my notes. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 8 Feb 2019 18:29:17 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Remove references to Majordomo"
}
]
[
{
"msg_contents": "Restrict the use of temporary namespace in two-phase transactions\n\nAttempting to use a temporary table within a two-phase transaction is\nforbidden for ages. However, there have been uncovered grounds for\na couple of other object types and commands which work on temporary\nobjects with two-phase commit. In short, trying to create, lock or drop\nan object on a temporary schema should not be authorized within a\ntwo-phase transaction, as it would cause its state to create\ndependencies with other sessions, causing all sorts of side effects with\nthe existing session or other sessions spawned later on trying to use\nthe same temporary schema name.\n\nRegression tests are added to cover all the grounds found, the original\nreport mentioned function creation, but monitoring closer there are many\nother patterns with LOCK, DROP or CREATE EXTENSION which are involved.\nOne of the symptoms resulting in combining both is that the session\nwhich used the temporary schema is not able to shut down completely,\nwaiting for being able to drop the temporary schema, something that it\ncannot complete because of the two-phase transaction involved with\ntemporary objects. In this case the client is able to disconnect but\nthe session remains alive on the backend-side, potentially blocking\nconnection backend slots from being used. 
Other problems reported could\nalso involve server crashes.\n\nThis is back-patched down to v10, which is where 9b013dc has introduced\nMyXactFlags, something that this patch relies on.\n\nReported-by: Alexey Bashtanov\nAuthor: Michael Paquier\nReviewed-by: Masahiko Sawada\nDiscussion: https://postgr.es/m/5d910e2e-0db8-ec06-dd5f-baec420513c3@imap.cc\nBackpatch-through: 10\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c5660e0aa52d5df27accd8e5e97295cf0e64f7d4\n\nModified Files\n--------------\ndoc/src/sgml/ref/prepare_transaction.sgml | 6 +-\nsrc/backend/access/transam/xact.c | 12 ++++\nsrc/backend/catalog/namespace.c | 59 +++++++++++++-----\nsrc/backend/commands/dropcmds.c | 8 +++\nsrc/backend/commands/extension.c | 7 +++\nsrc/backend/commands/lockcmds.c | 10 +++\nsrc/include/access/xact.h | 5 ++\n.../test_extensions/expected/test_extensions.out | 33 ++++++++++\n.../test_extensions/sql/test_extensions.sql | 29 +++++++++\nsrc/test/regress/expected/temp.out | 71 ++++++++++++++++++++++\nsrc/test/regress/sql/temp.sql | 56 +++++++++++++++++\n11 files changed, 278 insertions(+), 18 deletions(-)\n\n",
"msg_date": "Fri, 18 Jan 2019 00:22:52 +0000",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 12:22:52AM +0000, Michael Paquier wrote:\n> Restrict the use of temporary namespace in two-phase transactions\n> \n> Attempting to use a temporary table within a two-phase transaction is\n> forbidden for ages. However, there have been uncovered grounds for\n> a couple of other object types and commands which work on temporary\n> objects with two-phase commit. In short, trying to create, lock or drop\n> an object on a temporary schema should not be authorized within a\n> two-phase transaction, as it would cause its state to create\n> dependencies with other sessions, causing all sorts of side effects with\n> the existing session or other sessions spawned later on trying to use\n> the same temporary schema name.\n\nI have been monitoring the buildfarm and crake is complaining:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=crake&br=HEAD\n\nHere is the problem:\nSET search_path TO 'pg_temp';\nBEGIN;\nSELECT current_schema() ~ 'pg_temp' AS is_temp_schema;\n- is_temp_schema\n-----------------\n- t\n-(1 row)\n-\n+ERROR: cannot create temporary tables during a parallel operation\nPREPARE TRANSACTION 'twophase_search';\n-ERROR: cannot PREPARE a transaction that has operated on temporary namespace\n\nI am actually amazed to see the planner choose a parallel plan for\nthat, and the test can be fixed by enforcing those parameters I think:\nSET max_parallel_workers = 0;\nSET max_parallel_workers_per_gather = 0;\nCould somebody confirm my assumption here by the way? This enforces a\nnon-parallel plan, right?\n\nAnyway, it seems to me that this is pointing out to another issue:\ncurrent_schema() can trigger a namespace creation, hence shouldn't we\nmark it as PARALLEL UNSAFE and make sure that we never run into this\nproblem?\n--\nMichael",
"msg_date": "Fri, 18 Jan 2019 09:59:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have been monitoring the buildfarm and crake is complaining:\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=crake&br=HEAD\n\n> I am actually amazed to see the planner choose a parallel plan for\n> that,\n\nThat's due to force_parallel_mode = regress, I imagine.\n\n> Anyway, it seems to me that this is pointing out to another issue:\n> current_schema() can trigger a namespace creation, hence shouldn't we\n> mark it as PARALLEL UNSAFE and make sure that we never run into this\n> problem?\n\nThat seems a bit annoying, but maybe we have little choice?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 17 Jan 2019 20:08:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Thu, Jan 17, 2019 at 8:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Anyway, it seems to me that this is pointing out to another issue:\n> > current_schema() can trigger a namespace creation, hence shouldn't we\n> > mark it as PARALLEL UNSAFE and make sure that we never run into this\n> > problem?\n>\n> That seems a bit annoying, but maybe we have little choice?\n\nThe only other option I see is to make current_schema() not trigger a\nnamespace creation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 18 Jan 2019 15:05:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 17, 2019 at 8:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Anyway, it seems to me that this is pointing out to another issue:\n>>> current_schema() can trigger a namespace creation, hence shouldn't we\n>>> mark it as PARALLEL UNSAFE and make sure that we never run into this\n>>> problem?\n\n>> That seems a bit annoying, but maybe we have little choice?\n\n> The only other option I see is to make current_schema() not trigger a\n> namespace creation.\n\nSeems hard to avoid. We could conceivably make it return \"pg_temp\"\nfor the temp schema instead of the schema's actual name, but it's\nnot very hard to think of ways whereby that would make use of the\nresult fail in contexts where it previously worked.\n\nAnother idea is to force creation of the temp namespace as soon as\nwe see that search_path references it. I'm not very sure exactly\nwhere would be a convenient place to make that happen, though.\nThere are paths whereby the GUC's value will change outside a\ntransaction, so we couldn't tie it directly to the GUC update.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 Jan 2019 15:34:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Fri, Jan 18, 2019 at 03:34:30PM -0500, Tom Lane wrote:\n> Seems hard to avoid. We could conceivably make it return \"pg_temp\"\n> for the temp schema instead of the schema's actual name, but it's\n> not very hard to think of ways whereby that would make use of the\n> result fail in contexts where it previously worked.\n\nCREATE EXTENSION is one such case. It would not work if referring to\nthe synonym pg_temp, but it can work if using directly the temporary\nnamespace of the session. So I feel that changing such things is\nprone to break more things than to actually fix things.\n\n> Another idea is to force creation of the temp namespace as soon as\n> we see that search_path references it. I'm not very sure exactly\n> where would be a convenient place to make that happen, though.\n> There are paths whereby the GUC's value will change outside a\n> transaction, so we couldn't tie it directly to the GUC update.\n\nThis is documented at the top of namespace.c: \"initial GUC processing\nof search_path happens outside a transaction\".\n--\nMichael",
"msg_date": "Sat, 19 Jan 2019 09:08:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 09:08:27AM +0900, Michael Paquier wrote:\n> On Fri, Jan 18, 2019 at 03:34:30PM -0500, Tom Lane wrote:\n>> Seems hard to avoid. We could conceivably make it return \"pg_temp\"\n>> for the temp schema instead of the schema's actual name, but it's\n>> not very hard to think of ways whereby that would make use of the\n>> result fail in contexts where it previously worked.\n> \n> CREATE EXTENSION is one such case. It would not work if referring to\n> the synonym pg_temp, but it can work if using directly the temporary\n> namespace of the session. So I feel that changing such things is\n> prone to break more things than to actually fix things.\n\nAs long as I don't forget about it.. current_schema() is classified\nas stable, so it's not like we can make it return pg_temp and then the\nreal temporary schema name within the same transaction...\n--\nMichael",
"msg_date": "Sat, 19 Jan 2019 10:12:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Sat, Jan 19, 2019 at 5:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 17, 2019 at 8:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Anyway, it seems to me that this is pointing out to another issue:\n> > > current_schema() can trigger a namespace creation, hence shouldn't we\n> > > mark it as PARALLEL UNSAFE and make sure that we never run into this\n> > > problem?\n> >\n> > That seems a bit annoying, but maybe we have little choice?\n>\n> The only other option I see is to make current_schema() not trigger a\n> namespace creation.\n>\n\nOr can we make the test script set force_parallel_mode = off? Since\nthe failure case is a very rare in real world I think that it might be\nbetter to change the test scripts rather than changing properly of\ncurrent_schema().\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 22 Jan 2019 13:47:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 01:47:05PM +0900, Masahiko Sawada wrote:\n> Or can we make the test script set force_parallel_mode = off? Since\n> the failure case is a very rare in real world I think that it might be\n> better to change the test scripts rather than changing properly of\n> current_schema().\n\nPlease see 396676b, which is in my opinion a quick workaround to the\nproblem. Even if that's a rare case, it would be confusing to the\nuser to see it :(\n--\nMichael",
"msg_date": "Tue, 22 Jan 2019 14:17:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Tue, Jan 22, 2019 at 2:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 22, 2019 at 01:47:05PM +0900, Masahiko Sawada wrote:\n> > Or can we make the test script set force_parallel_mode = off? Since\n> > the failure case is a very rare in real world I think that it might be\n> > better to change the test scripts rather than changing properly of\n> > current_schema().\n>\n> Please see 396676b, which is in my opinion a quick workaround to the\n> problem.\n\nOops, sorry for the too late response. Thank you.\n\n> Even if that's a rare case, it would be confusing to the\n> user to see it :(\n\nIndeed.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 22 Jan 2019 18:09:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On 18/01/2019 01:22, Michael Paquier wrote:\n> Restrict the use of temporary namespace in two-phase transactions\n\nWe usually don't use \"namespace\" in user-facing error messages. Can you\nchange it to say \"temporary schema\"?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Feb 2019 10:41:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Fri, Feb 08, 2019 at 10:41:59AM +0100, Peter Eisentraut wrote:\n> We usually don't use \"namespace\" in user-facing error messages. Can you\n> change it to say \"temporary schema\"?\n\nOr just switch to \"temporary objects\" like it's done on HEAD for the\nsecond message?\n\nPlease note that I have kept the error message for temporary tables\nfor compatibility reasons on stable branches, and I would rather not\ntouch that. The second one is new though.\n--\nMichael",
"msg_date": "Fri, 8 Feb 2019 19:00:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On 08/02/2019 11:00, Michael Paquier wrote:\n> On Fri, Feb 08, 2019 at 10:41:59AM +0100, Peter Eisentraut wrote:\n>> We usually don't use \"namespace\" in user-facing error messages. Can you\n>> change it to say \"temporary schema\"?\n> \n> Or just switch to \"temporary objects\" like it's done on HEAD for the\n> second message?\n\nYeah, even better.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 9 Feb 2019 16:06:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On 09/02/2019 16:06, Peter Eisentraut wrote:\n> On 08/02/2019 11:00, Michael Paquier wrote:\n>> On Fri, Feb 08, 2019 at 10:41:59AM +0100, Peter Eisentraut wrote:\n>>> We usually don't use \"namespace\" in user-facing error messages. Can you\n>>> change it to say \"temporary schema\"?\n>>\n>> Or just switch to \"temporary objects\" like it's done on HEAD for the\n>> second message?\n> \n> Yeah, even better.\n\nCommitted that way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 11 Feb 2019 10:59:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 10:59:20AM +0100, Peter Eisentraut wrote:\n> Committed that way.\n\nThanks Peter for adjusting the message, I was just going to do it.\nThere was a long weekend here and I had zero access to my laptop,\nexplaining the delay in replying.\n--\nMichael",
"msg_date": "Mon, 11 Feb 2019 19:55:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Restrict the use of temporary namespace in two-phase\n transaction"
}
]
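The restriction discussed in the thread above can be sketched as a short psql session. This is an illustrative, hypothetical transcript, not taken from the archive: it assumes a server started with `max_prepared_transactions` above zero, and the error wording follows the later commit that changed "temporary namespace" to "temporary objects".

```sql
-- Locking an object that lives in the temporary namespace is one of the
-- cases newly covered by commit c5660e0: the transaction can no longer
-- be prepared afterwards.
CREATE TEMP TABLE twophase_tab (a int);

BEGIN;
LOCK TABLE twophase_tab IN ACCESS EXCLUSIVE MODE;
PREPARE TRANSACTION 'twophase_lock';
-- ERROR:  cannot PREPARE a transaction that has operated on temporary objects
```

The regression tests added by the commit (src/test/regress/sql/temp.sql) exercise the same pattern for CREATE, DROP, and CREATE EXTENSION against the temporary schema.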