[ { "msg_contents": "Hi,\n\nI've had to do quite a bit of performance investigation work this year\nand it seems that I only too often discover that the same problem is\nrepeating itself... A vacuum_cost_limit that is still set to the 200\ndefault value along with all 3 auto-vacuum workers being flat chat\ntrying and failing to keep up with the demand.\n\nI understand we often keep the default config aimed at low-end\nservers, but I don't believe we should categorise this option the same\nway as we do with shared_buffers and work_mem. What's to say that\nhaving an auto-vacuum that runs too slowly is better than one that\nruns too quickly?\n\nI have in mind that performance problems arising from having\nauto-vacuum run too quickly might be easier to diagnose and fix than\nthe ones that arise from it running too slowly. Certainly, the\naftermath cleanup involved with it running too slowly is quite a bit\nmore tricky to solve.\n\nIdeally, we'd have something smarter than the cost limits we have\ntoday, something that perhaps is adaptive and can make more use of an\nidle server than we do now, but that sounds like a pretty large\nproject to consider having it working this late in the cycle.\n\nIn the meantime, should we consider not having vacuum_cost_limit set\nso low by default?\n\nI have in mind something in the ballpark of a 5x to 10x increase. It\nseems the standard settings only allow for a maximum of ~3.9MB/s dirty\nrate and ~7.8MB/s shared buffer miss rate. That seems pretty slow\neven for the micro SD card that's in my 4-year-old phone. 
I think we\nshould be aiming for setting this to something good for the slightly\nbetter than average case of modern hardware.\n\nThe current default vacuum_cost_limit of 200 seems to be 15 years old\nand was added in f425b605f4e.\n\nAny supporters for raising the default?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Mon, 25 Feb 2019 18:42:12 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> The current default vacuum_cost_limit of 200 seems to be 15 years old\n> and was added in f425b605f4e.\n>\n> Any supporters for raising the default?\n\nI also think that the current default limit is far too conservative.\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Sun, 24 Feb 2019 22:17:09 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "I support raising the default.\n\nFrom the standpoint of a no-clue database admin, it's easier to give more\nresources to Postgres and google what the process called \"autovacuum\" does than\nto learn why reads are slow.\n\nIt's also tricky that index-only scans depend on a working autovacuum, and\nautovacuum never comes to those tables. Since PG11 it's safe to call vacuum\non a table with indexes, since the index is no longer scanned in its\nentirety. 
I would also propose to include \"tuples inserted\" into formula\nfor autovacuum threshold the same way it is done for autoanalyze threshold.\nThis will fix the situation where you delete 50 rows in 100-gigabyte table\nand autovacuum suddenly goes to rewrite and reread hint bits on all of it,\nsince it never touched it before.\n\nOn Mon, Feb 25, 2019 at 8:42 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> I've had to do quite a bit of performance investigation work this year\n> and it seems that I only too often discover that the same problem is\n> repeating itself... A vacuum_cost_limit that is still set to the 200\n> default value along with all 3 auto-vacuum workers being flat chat\n> trying and failing to keep up with the demand.\n>\n> I understand we often keep the default config aimed at low-end\n> servers, but I don't believe we should categorise this option the same\n> way as we do with shared_buffers and work_mem. What's to say that\n> having an auto-vacuum that runs too slowly is better than one that\n> runs too quickly?\n>\n> I have in mind that performance problems arising from having\n> auto-vacuum run too quickly might be easier to diagnose and fix than\n> the ones that arise from it running too slowly. Certainly, the\n> aftermath cleanup involved with it running too slowly is quite a bit\n> more tricky to solve.\n>\n> Ideally, we'd have something smarter than the cost limits we have\n> today, something that perhaps is adaptive and can make more use of an\n> idle server than we do now, but that sounds like a pretty large\n> project to consider having it working this late in the cycle.\n>\n> In the meantime, should we consider not having vacuum_cost_limit set\n> so low by default?\n>\n> I have in mind something in the ballpark of a 5x to 10x increase. It\n> seems the standard settings only allow for a maximum of ~3.9MB/s dirty\n> rate and ~7.8MB/s shared buffer miss rate. 
That seems pretty slow\n> even for the micro SD card that's in my 4-year-old phone. I think we\n> should be aiming for setting this to something good for the slightly\n> better than average case of modern hardware.\n>\n> The current default vacuum_cost_limit of 200 seems to be 15 years old\n> and was added in f425b605f4e.\n>\n> Any supporters for raising the default?\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n>\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n", "msg_date": "Mon, 25 Feb 2019 12:05:40 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "On 2/25/19 1:17 AM, Peter Geoghegan wrote:\n> On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n>> The current default vacuum_cost_limit of 200 seems to be 15 years old\n>> and was added in f425b605f4e.\n>>\n>> Any supporters for raising the default?\n> \n> I also think that the current default limit is far too conservative.\n\nI agree entirely. In my experience you are usually much better off if\nvacuum finishes quickly. Personally I think our default scale factors\nare horrible too, especially when there are tables with large numbers of\nrows.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n", "msg_date": "Mon, 25 Feb 2019 08:06:45 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Tue, 26 Feb 2019 at 02:06, Joe Conway <mail@joeconway.com> wrote:\n>\n> On 2/25/19 1:17 AM, Peter Geoghegan wrote:\n> > On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n> > <david.rowley@2ndquadrant.com> wrote:\n> >> The current default vacuum_cost_limit of 200 seems to be 15 years old\n> >> and was added in f425b605f4e.\n> >>\n> >> Any supporters for raising the default?\n> >\n> > I also think that the current default limit is far too conservative.\n>\n> I agree entirely. In my experience you are usually much better off if\n> vacuum finishes quickly. Personally I think our default scale factors\n> are horrible too, especially when there are tables with large numbers of\n> rows.\n\nAgreed that the scale factors are not perfect, but I don't think\nchanging them is as quite a no-brainer as the vacuum_cost_limit, so\nthe attached patch just does the vacuum_cost_limit.\n\nI decided to do the times by 10 option that I had mentioned.... 
Ensue\ndebate about that...\n\nI'll add this to the March commitfest and set the target version as PG12.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Tue, 26 Feb 2019 02:38:59 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "David Rowley wrote:\n> On Tue, 26 Feb 2019 at 02:06, Joe Conway <mail@joeconway.com> wrote:\n> >\n> > On 2/25/19 1:17 AM, Peter Geoghegan wrote:\n> > > On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n> > > <david.rowley@2ndquadrant.com> wrote:\n> > >> The current default vacuum_cost_limit of 200 seems to be 15 years old\n> > >> and was added in f425b605f4e.\n> > >>\n> > >> Any supporters for raising the default?\n> > >\n> > > I also think that the current default limit is far too conservative.\n> >\n> > I agree entirely. In my experience you are usually much better off if\n> > vacuum finishes quickly. Personally I think our default scale factors\n> > are horrible too, especially when there are tables with large numbers of\n> > rows.\n> \n> Agreed that the scale factors are not perfect, but I don't think\n> changing them is as quite a no-brainer as the vacuum_cost_limit, so\n> the attached patch just does the vacuum_cost_limit.\n> \n> I decided to do the times by 10 option that I had mentioned.... Ensue\n> debate about that...\n> \n> I'll add this to the March commitfest and set the target version as PG12.\n\nI think this is a good move.\n\nIt is way easier to recover from an over-eager autovacuum than from\none that didn't manage to finish...\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 25 Feb 2019 16:43:40 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "On Mon, Feb 25, 2019 at 4:44 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> David Rowley wrote:\n> > On Tue, 26 Feb 2019 at 02:06, Joe Conway <mail@joeconway.com> wrote:\n> > >\n> > > On 2/25/19 1:17 AM, Peter Geoghegan wrote:\n> > > > On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n> > > > <david.rowley@2ndquadrant.com> wrote:\n> > > >> The current default vacuum_cost_limit of 200 seems to be 15 years old\n> > > >> and was added in f425b605f4e.\n> > > >>\n> > > >> Any supporters for raising the default?\n> > [...]\n> > I'll add this to the March commitfest and set the target version as PG12.\n>\n> I think this is a good move.\n>\n> It is way easier to recover from an over-eager autovacuum than from\n> one that didn't manage to finish...\n\n+1\n\n", "msg_date": "Mon, 25 Feb 2019 17:39:29 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Mon, Feb 25, 2019 at 8:39 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> I decided to do the times by 10 option that I had mentioned.... Ensue\n> debate about that...\n\n+1 for raising the default substantially. In my experience, and it\nseems others are in a similar place, nobody ever gets into trouble\nbecause the default is too high, but sometimes people get in trouble\nbecause the default is too low. If we raise it enough that a few\npeople have to reduce it and a few people have to further increase it,\nIMHO that would be about right. Not sure exactly what value would\naccomplish that goal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Mon, 25 Feb 2019 11:48:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "Robert Haas wrote:\n> Not sure exactly what value would accomplish that goal.\n\nI think autovacuum_vacuum_cost_limit = 2000 is a good starting point.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 26 Feb 2019 13:04:56 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Mon, Feb 25, 2019 at 8:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> +1 for raising the default substantially. In my experience, and it\n> seems others are in a similar place, nobody ever gets into trouble\n> because the default is too high, but sometimes people get in trouble\n> because the default is too low.\n\nDoes anyone want to make an argument against the idea of raising the\ndefault? They should speak up now.\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Mon, 4 Mar 2019 16:14:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On 3/5/19 1:14 AM, Peter Geoghegan wrote:\n> On Mon, Feb 25, 2019 at 8:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> +1 for raising the default substantially. In my experience, and it\n>> seems others are in a similar place, nobody ever gets into trouble\n>> because the default is too high, but sometimes people get in trouble\n>> because the default is too low.\n> \n> Does anyone want to make an argument against the idea of raising the\n> default? They should speak up now.\n> \n\nI don't know.\n\nOn the one hand I don't feel very strongly about this change, and I have\nno intention to block it (because in most cases I do actually increase\nthe value anyway). 
I wonder if those with small systems will be happy\nabout it, though.\n\nBut on the other hand it feels a bit weird that we increase this one\nvalue and leave all the other (also very conservative) defaults alone.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 5 Mar 2019 13:53:07 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Tue, Mar 5, 2019 at 7:53 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> But on the other hand it feels a bit weird that we increase this one\n> value and leave all the other (also very conservative) defaults alone.\n\nAre you talking about vacuum-related defaults or defaults in general?\nIn 2014, we increased the defaults for work_mem and\nmaintenance_work_mem by 4x and the default for effective_cache_size by\n32x; in 2012, we increased the default for shared_buffers by 4x.\nIt's possible some of those parameters should be further increased at\nsome point, but deciding not to increase any of them until we can\nincrease all of them is tantamount to giving up on changing anything\nat all. I think it's OK to be conservative by default, but we should\nincrease parameters where we know that the default is likely to be too\nconservative for 99% of users. My only question about this change is\nwhether to go for a lesser multiple (e.g. 4x) rather than the proposed\n10x. 
But I think even if 10x turns out to be too much for a few more\npeople than we'd like, we're still going to be better off increasing\nit and having some people have to turn it back down again than leaving\nit the way it is and having users regularly suffer vacuum-starvation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 5 Mar 2019 10:29:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\nOn 2/25/19 8:38 AM, David Rowley wrote:\n> On Tue, 26 Feb 2019 at 02:06, Joe Conway <mail@joeconway.com> wrote:\n>> On 2/25/19 1:17 AM, Peter Geoghegan wrote:\n>>> On Sun, Feb 24, 2019 at 9:42 PM David Rowley\n>>> <david.rowley@2ndquadrant.com> wrote:\n>>>> The current default vacuum_cost_limit of 200 seems to be 15 years old\n>>>> and was added in f425b605f4e.\n>>>>\n>>>> Any supporters for raising the default?\n>>> I also think that the current default limit is far too conservative.\n>> I agree entirely. In my experience you are usually much better off if\n>> vacuum finishes quickly. Personally I think our default scale factors\n>> are horrible too, especially when there are tables with large numbers of\n>> rows.\n> Agreed that the scale factors are not perfect, but I don't think\n> changing them is as quite a no-brainer as the vacuum_cost_limit, so\n> the attached patch just does the vacuum_cost_limit.\n>\n> I decided to do the times by 10 option that I had mentioned.... Ensue\n> debate about that...\n>\n> I'll add this to the March commitfest and set the target version as PG12.\n>\n\nThis patch is tiny, seems perfectly reasonable, and has plenty of\nsupport. 
I'm going to commit it shortly unless there are last minute\nobjections.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Mar 2019 17:14:55 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On 2019-03-05 17:14:55 -0500, Andrew Dunstan wrote:\n> This patch is tiny, seems perfectly reasonable, and has plenty of\n> support. I'm going to commit it shortly unless there are last minute\n> objections.\n\n+1\n\n", "msg_date": "Tue, 5 Mar 2019 14:19:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Thanks for chipping in on this.\n\nOn Wed, 6 Mar 2019 at 01:53, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> But on the other hand it feels a bit weird that we increase this one\n> value and leave all the other (also very conservative) defaults alone.\n\nWhich others did you have in mind? Like work_mem, shared_buffers? If\nso, I mentioned in the initial post that I don't see vacuum_cost_limit\nas in the same category as those. It's not like PostgreSQL won't\nstart on a tiny server if vacuum_cost_limit is too high, but you will\nhave issues with too big a shared_buffers, for example. I think if\nwe insist that this patch is a review of all the \"how big is your\nserver\" GUCs then that's raising the bar significantly and\nunnecessarily for what I'm proposing here.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 12:10:42 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "On 3/5/19 14:14, Andrew Dunstan wrote:\n> This patch is tiny, seems perfectly reasonable, and has plenty of\n> support. I'm going to commit it shortly unless there are last minute\n> objections.\n\n+1\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n", "msg_date": "Wed, 6 Mar 2019 10:38:10 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\nOn 3/6/19 1:38 PM, Jeremy Schneider wrote:\n> On 3/5/19 14:14, Andrew Dunstan wrote:\n>> This patch is tiny, seems perfectly reasonable, and has plenty of\n>> support. I'm going to commit it shortly unless there are last minute\n>> objections.\n> +1\n>\n\ndone.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 Mar 2019 14:54:27 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\n\nOn 3/6/19 12:10 AM, David Rowley wrote:\n> Thanks for chipping in on this.\n> \n> On Wed, 6 Mar 2019 at 01:53, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> But on the other hand it feels a bit weird that we increase this one\n>> value and leave all the other (also very conservative) defaults alone.\n> \n> Which others did you have in mind? Like work_mem, shared_buffers? If\n> so, I mentioned in the initial post that I don't see vacuum_cost_limit\n> as in the same category as those. It's not like PostgreSQL won't\n> start on a tiny server if vacuum_cost_limit is too high, but you will\n> have issues with too big a shared_buffers, for example. 
I think if\n> we insist that this patch is a review of all the \"how big is your\n> server\" GUCs then that's raising the bar significantly and\n> unnecessarily for what I'm proposing here.\n> \n\nOn second thought, I think you're right. It's still true that you need\nto bump up various other GUCs on reasonably current hardware, but it's\ntrue vacuum_cost_limit is special enough to increase it separately.\n\nso +1 (I see Andrew already pushed it, but anyway ...)\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 21:26:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Thu, 7 Mar 2019 at 08:54, Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 3/6/19 1:38 PM, Jeremy Schneider wrote:\n> > On 3/5/19 14:14, Andrew Dunstan wrote:\n> >> This patch is tiny, seems perfectly reasonable, and has plenty of\n> >> support. I'm going to commit it shortly unless there are last minute\n> >> objections.\n> > +1\n> >\n>\n> done.\n\nThanks!\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 11:01:19 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Wed, Mar 6, 2019 at 2:54 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 3/6/19 1:38 PM, Jeremy Schneider wrote:\n> > On 3/5/19 14:14, Andrew Dunstan wrote:\n> >> This patch is tiny, seems perfectly reasonable, and has plenty of\n> >> support. 
I'm going to commit it shortly unless there are last minute\n> >> objections.\n> > +1\n> >\n>\n> done.\n>\n\nNow that this is done, the default value is only 5x below the hard-coded\nmaximum of 10,000.\n\nThis seems a bit odd, and not very future-proof. Especially since the\nhard-coded maximum appears to have no logic to it anyway, at least none\nthat is documented. Is it just mindless nannyism?\n\nAny reason not to increase by at least a factor of 10, but preferably the\nlargest value that does not cause computational problems (which I think\nwould be INT_MAX)?\n\nCheers,\n\nJeff\n\n", "msg_date": "Fri, 8 Mar 2019 10:20:19 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> Now that this is done, the default value is only 5x below the hard-coded\n> maximum of 10,000.\n> This seems a bit odd, and not very future-proof. Especially since the\n> hard-coded maximum appears to have no logic to it anyway, at least none\n> that is documented. Is it just mindless nannyism?\n\nHm.
I think the idea was that rather than setting it to \"something very\nlarge\", you'd want to just disable the feature via vacuum_cost_delay.\nBut I agree that the threshold for what is ridiculously large probably\nought to be well more than 5x the default, and maybe it is just mindless\nnannyism to have a limit less than what the implementation can handle.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 08 Mar 2019 13:10:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sat, 9 Mar 2019 at 07:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > Now that this is done, the default value is only 5x below the hard-coded\n> > maximum of 10,000.\n> > This seems a bit odd, and not very future-proof. Especially since the\n> > hard-coded maximum appears to have no logic to it anyway, at least none\n> > that is documented. Is it just mindless nannyism?\n>\n> Hm. I think the idea was that rather than setting it to \"something very\n> large\", you'd want to just disable the feature via vacuum_cost_delay.\n> But I agree that the threshold for what is ridiculously large probably\n> ought to be well more than 5x the default, and maybe it is just mindless\n> nannyism to have a limit less than what the implementation can handle.\n\nYeah, +1 to increasing it. I imagine that the 10,000 limit would not\nallow people to explore the upper limits of a modern PCI-E SSD with\nthe standard delay time and dirty/miss scores. Also, it doesn't seem\nentirely unreasonable that someone somewhere might also want to\nfine-tune the hit/miss/dirty scores so that they're some larger factor\napart from each other than the standard scores are. 
The 10,000 limit does\nnot allow much wiggle room for that.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Sat, 9 Mar 2019 12:47:34 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\nOn 3/8/19 6:47 PM, David Rowley wrote:\n> On Sat, 9 Mar 2019 at 07:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Jeff Janes <jeff.janes@gmail.com> writes:\n>>> Now that this is done, the default value is only 5x below the hard-coded\n>>> maximum of 10,000.\n>>> This seems a bit odd, and not very future-proof. Especially since the\n>>> hard-coded maximum appears to have no logic to it anyway, at least none\n>>> that is documented. Is it just mindless nannyism?\n>> Hm. I think the idea was that rather than setting it to \"something very\n>> large\", you'd want to just disable the feature via vacuum_cost_delay.\n>> But I agree that the threshold for what is ridiculously large probably\n>> ought to be well more than 5x the default, and maybe it is just mindless\n>> nannyism to have a limit less than what the implementation can handle.\n> Yeah, +1 to increasing it. I imagine that the 10,000 limit would not\n> allow people to explore the upper limits of a modern PCI-E SSD with\n> the standard delay time and dirty/miss scores. Also, it doesn't seem\n> entirely unreasonable that someone somewhere might also want to\n> fine-tune the hit/miss/dirty scores so that they're some larger factor\n> apart from each other the standard scores are. 
The 10,000 limit does\n> not allow much wiggle room for that.\n>\n\n\nIncrease it to what?\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 Mar 2019 20:29:14 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Increase it to what?\n\nJeff's opinion that it could be INT_MAX without causing trouble is\na bit over-optimistic, see vacuum_delay_point():\n\n if (VacuumCostActive && !InterruptPending &&\n VacuumCostBalance >= VacuumCostLimit)\n {\n int msec;\n\n msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;\n\nIn the first place, if VacuumCostLimit is too close to INT_MAX,\nit'd be possible for VacuumCostBalance (also an int) to overflow\nbetween visits to vacuum_delay_point, wrapping around to negative\nand thus missing the need to nap altogether.\n\nIn the second place, since large VacuumCostLimit implies large\nVacuumCostBalance when we do get through this if-check, there's\na hazard of integer overflow in the VacuumCostDelay * VacuumCostBalance\nmultiplication. 
The final value of the msec calculation should be\neasily within integer range, since VacuumCostDelay is constrained to\nnot be very large, but that's no help if we have intermediate overflow.\n\nPossibly we could forestall both of those problems by changing\nVacuumCostBalance to double, but that would make the cost\nbookkeeping noticeably more expensive than it used to be.\nI think it'd be better to keep VacuumCostBalance as int,\nwhich would then mean we'd better limit VacuumCostLimit to no\nmore than say INT_MAX/2 --- call it 1 billion for the sake of\na round number.\n\nThat'd still leave us at risk of an integer overflow in the\nmsec-to-sleep calculation, but that calculation could be changed\nto double at little price, since once we get here we're going\nto sleep awhile anyway.\n\nBTW, don't forget autovacuum_cost_limit should have the same maximum.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 08 Mar 2019 20:54:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "I wrote:\n> [ worries about overflow with VacuumCostLimit approaching INT_MAX ]\n\nActually, now that I think a bit harder, that disquisition was silly.\nIn fact, I'm inclined to argue that the already-committed patch\nis taking the wrong approach, and we should revert it in favor of a\ndifferent idea.\n\nThe reason is this: what we want to do is throttle VACUUM's I/O demand,\nand by \"throttle\" I mean \"gradually reduce\". There is nothing gradual\nabout issuing a few million I/Os and then sleeping for many milliseconds;\nthat'll just produce spikes and valleys in the I/O demand. Ideally,\nwhat we'd have it do is sleep for a very short interval after each I/O.\nBut that's not too practical, both for code-structure reasons and because\nmost platforms don't give us a way to so finely control the length of a\nsleep. 
Hence the design of sleeping for awhile after every so many I/Os.\n\nHowever, the current settings are predicated on the assumption that\nyou can't get the kernel to give you a sleep of less than circa 10ms.\nThat assumption is way outdated, I believe; poking around on systems\nI have here, the minimum delay time using pg_usleep(1) seems to be\ngenerally less than 100us, and frequently less than 10us, on anything\nreleased in the last decade.\n\nI propose therefore that instead of increasing vacuum_cost_limit,\nwhat we ought to be doing is reducing vacuum_cost_delay by a similar\nfactor. And, to provide some daylight for people to reduce it even\nmore, we ought to arrange for it to be specifiable in microseconds\nnot milliseconds. There's no GUC_UNIT_US right now, but it's time.\n(Perhaps we should also look into using other delay APIs, such as\nnanosleep(2), where available.)\n\nI don't have any particular objection to kicking up the maximum\nvalue of vacuum_cost_limit by 10X or so, if anyone's hot to do that.\nBut that's not where we ought to be focusing our concern. And there\nreally is a good reason, not just nannyism, not to make that\nsetting huge --- it's just the wrong thing to do, as compared to\nreducing vacuum_cost_delay.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 08 Mar 2019 22:11:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sat, 9 Mar 2019 at 16:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I propose therefore that instead of increasing vacuum_cost_limit,\n> what we ought to be doing is reducing vacuum_cost_delay by a similar\n> factor. And, to provide some daylight for people to reduce it even\n> more, we ought to arrange for it to be specifiable in microseconds\n> not milliseconds. 
There's no GUC_UNIT_US right now, but it's time.\n> (Perhaps we should also look into using other delay APIs, such as\n> nanosleep(2), where available.)\n\nIt does seem like a genuine concern that there might be too much all\nor nothing. It's no good being on a highspeed train if it stops at\nevery platform.\n\nI agree that vacuum_cost_delay might not be granular enough, however.\nIf we're going to change the vacuum_cost_delay into microseconds, then\nI'm a little concerned that it'll silently break existing code that\nsets it. Scripts that do manual off-peak vacuums are pretty common\nout in the wild.\n\nIn an ideal world we'd just redesign the vacuum throttling to have\nMB/s for hit/read/dirty, and possibly also WAL write rate. I'm not\nsure exactly how they'd cooperate together, but we could likely\nminimise gettimeofday() calls by sampling the time it took to process\nN pages, and if N pages didn't take the time we wanted them to take we\ncould set N = Max(N * ($target_gettimeofday_sample_rate / $timetaken),\n1); e.g. if N was 2000 and it just took us 1 second to do 2000 pages,\nbut we want to sleep every millisecond, then just do N *= (0.001 / 1),\nso the next run we only do 2 pages before checking how long we should\nsleep for. If we happened to process those 2 pages in 0.5\nmilliseconds, then N would become 4, etc.\n\nWe'd just need to hard code the $target_gettimeofday_sample_rate.\nProbably 1 millisecond would be about right and we'd need to just\nguess the first value of N, but if we guess a low value, it'll be\nquick to correct itself after the first batch of pages.\n\nIf anyone thinks that idea has any potential, then maybe it's better\nto leave the new vacuum_cost_limit default in place and consider\nredesigning this for PG13... as such a change is too late for PG12.\n\nIt may also be possible to make this a vacuum rate limit in %. Say 10%\nwould just sleep for 10x as long as it took to process the last set of\npages. 
The problem with this is that if the server was under heavy\nload then auto-vacuum might crawl along, but that might be the exact\nopposite of what's required as it might be crawling due to inadequate\nvacuuming.\n\n> I don't have any particular objection to kicking up the maximum\n> value of vacuum_cost_limit by 10X or so, if anyone's hot to do that.\n> But that's not where we ought to be focusing our concern. And there\n> really is a good reason, not just nannyism, not to make that\n> setting huge --- it's just the wrong thing to do, as compared to\n> reducing vacuum_cost_delay.\n\nMy vote is to 10x the maximum for vacuum_cost_limit and consider\nchanging how it all works in PG13. If nothing happens before this\ntime next year then we can consider making vacuum_cost_delay a\nmicroseconds GUC.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Sat, 9 Mar 2019 22:28:12 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\nOn 3/9/19 4:28 AM, David Rowley wrote:\n> On Sat, 9 Mar 2019 at 16:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I propose therefore that instead of increasing vacuum_cost_limit,\n>> what we ought to be doing is reducing vacuum_cost_delay by a similar\n>> factor. And, to provide some daylight for people to reduce it even\n>> more, we ought to arrange for it to be specifiable in microseconds\n>> not milliseconds. There's no GUC_UNIT_US right now, but it's time.\n>> (Perhaps we should also look into using other delay APIs, such as\n>> nanosleep(2), where available.)\n> It does seem like a genuine concern that there might be too much all\n> or nothing. 
It's no good being on a highspeed train if it stops at\n> every platform.\n>\n> I agree that vacuum_cost_delay might not be granular enough, however.\n> If we're going to change the vacuum_cost_delay into microseconds, then\n> I'm a little concerned that it'll silently break existing code that\n> sets it. Scripts that do manual off-peak vacuums are pretty common\n> out in the wild.\n\n\nMaybe we could leave the default units as msec but store it and allow\nspecifying as usec. Not sure how well the GUC mechanism would cope with\nthat.\n\n\n[other good ideas]\n\n\n>> I don't have any particular objection to kicking up the maximum\n>> value of vacuum_cost_limit by 10X or so, if anyone's hot to do that.\n>> But that's not where we ought to be focusing our concern. And there\n>> really is a good reason, not just nannyism, not to make that\n>> setting huge --- it's just the wrong thing to do, as compared to\n>> reducing vacuum_cost_delay.\n> My vote is to 10x the maximum for vacuum_cost_limit and consider\n> changing how it all works in PG13. If nothing happens before this\n> time next year then we can consider making vacuum_cost_delay a\n> microseconds GUC.\n>\n\n+1.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 Mar 2019 08:28:22 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I agree that vacuum_cost_delay might not be granular enough, however.\n> If we're going to change the vacuum_cost_delay into microseconds, then\n> I'm a little concerned that it'll silently break existing code that\n> sets it. Scripts that do manual off-peak vacuums are pretty common\n> out in the wild.\n\nTrue. 
Perhaps we could keep the units as ms but make it a float?\nNot sure if the \"units\" logic can cope though.\n\n> My vote is to 10x the maximum for vacuum_cost_limit and consider\n> changing how it all works in PG13. If nothing happens before this\n> time next year then we can consider making vacuum_cost_delay a\n> microseconds GUC.\n\nI'm not really happy with the idea of changing the defaults in this area\nand then changing them again next year. That's going to lead to a lot\nof confusion, and a mess for people who may have changed (some) of\nthe settings manually.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 09 Mar 2019 11:31:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/9/19 4:28 AM, David Rowley wrote:\n>> I agree that vacuum_cost_delay might not be granular enough, however.\n>> If we're going to change the vacuum_cost_delay into microseconds, then\n>> I'm a little concerned that it'll silently break existing code that\n>> sets it. Scripts that do manual off-peak vacuums are pretty common\n>> out in the wild.\n\n> Maybe we could leave the default units as msec but store it and allow\n> specifying as usec. Not sure how well the GUC mechanism would cope with\n> that.\n\nI took a quick look at that and I'm afraid it'd be a mess. GUC doesn't\nreally distinguish between a variable's storage unit, its default input\nunit, or its default output unit (as seen in e.g. pg_settings). Perhaps\nwe could split those into two or three distinct concepts, but it seems\ncomplicated and bug-prone. Also I think we'd still be forced into\nmaking obviously-incompatible changes in what pg_settings shows for\nthis variable, since what it shows right now is integer ms. 
That last\nisn't a deal-breaker perhaps, but 100% compatibility isn't going to\nhappen this way.\n\nThe idea of converting vacuum_cost_delay into a float variable, while\nkeeping its native unit as ms, seems probably more feasible from a\ncompatibility standpoint. There are two sub-possibilities:\n\n1. Just do that and lose units support for the variable. I don't\nthink this is totally unreasonable, because up to now ms is the\n*only* workable unit for it:\n\nregression=# set vacuum_cost_delay = '1s';\nERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n\nStill, it'd mean that anyone who'd been explicitly setting it with\nan \"ms\" qualifier would have to change their postgresql.conf entry.\n\n2. Add support for units for float variables, too. I don't think\nthis'd be a huge amount of work, and we'd surely have other uses\nfor it in the long run.\n\nI'm inclined to go look into #2. Anybody think this is a bad idea?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 09 Mar 2019 12:55:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "\nOn 3/9/19 12:55 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/9/19 4:28 AM, David Rowley wrote:\n>>> I agree that vacuum_cost_delay might not be granular enough, however.\n>>> If we're going to change the vacuum_cost_delay into microseconds, then\n>>> I'm a little concerned that it'll silently break existing code that\n>>> sets it. Scripts that do manual off-peak vacuums are pretty common\n>>> out in the wild.\n>> Maybe we could leave the default units as msec but store it and allow\n>> specifying as usec. Not sure how well the GUC mechanism would cope with\n>> that.\n> I took a quick look at that and I'm afraid it'd be a mess. 
GUC doesn't\n> really distinguish between a variable's storage unit, its default input\n> unit, or its default output unit (as seen in e.g. pg_settings). Perhaps\n> we could split those into two or three distinct concepts, but it seems\n> complicated and bug-prone. Also I think we'd still be forced into\n> making obviously-incompatible changes in what pg_settings shows for\n> this variable, since what it shows right now is integer ms. That last\n> isn't a deal-breaker perhaps, but 100% compatibility isn't going to\n> happen this way.\n>\n> The idea of converting vacuum_cost_delay into a float variable, while\n> keeping its native unit as ms, seems probably more feasible from a\n> compatibility standpoint. There are two sub-possibilities:\n>\n> 1. Just do that and lose units support for the variable. I don't\n> think this is totally unreasonable, because up to now ms is the\n> *only* workable unit for it:\n>\n> regression=# set vacuum_cost_delay = '1s';\n> ERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n>\n> Still, it'd mean that anyone who'd been explicitly setting it with\n> an \"ms\" qualifier would have to change their postgresql.conf entry.\n>\n> 2. Add support for units for float variables, too. I don't think\n> this'd be a huge amount of work, and we'd surely have other uses\n> for it in the long run.\n>\n> I'm inclined to go look into #2. Anybody think this is a bad idea?\n>\n> \t\n\n\nSounds good to me, seems much more likely to be future-proof.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 Mar 2019 13:58:20 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "On Sat, Mar 9, 2019 at 7:58 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n> On 3/9/19 12:55 PM, Tom Lane wrote:\n> >> Maybe we could leave the default units as msec but store it and allow\n> >> specifying as usec. Not sure how well the GUC mechanism would cope with\n> >> that.\n> > I took a quick look at that and I'm afraid it'd be a mess. GUC doesn't\n> > really distinguish between a variable's storage unit, its default input\n> > unit, or its default output unit (as seen in e.g. pg_settings). Perhaps\n> > we could split those into two or three distinct concepts, but it seems\n> > complicated and bug-prone. Also I think we'd still be forced into\n> > making obviously-incompatible changes in what pg_settings shows for\n> > this variable, since what it shows right now is integer ms. That last\n> > isn't a deal-breaker perhaps, but 100% compatibility isn't going to\n> > happen this way.\n> >\n> > The idea of converting vacuum_cost_delay into a float variable, while\n> > keeping its native unit as ms, seems probably more feasible from a\n> > compatibility standpoint. There are two sub-possibilities:\n> >\n> > 1. Just do that and lose units support for the variable. I don't\n> > think this is totally unreasonable, because up to now ms is the\n> > *only* workable unit for it:\n> >\n> > regression=# set vacuum_cost_delay = '1s';\n> > ERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n> >\n> > Still, it'd mean that anyone who'd been explicitly setting it with\n> > an \"ms\" qualifier would have to change their postgresql.conf entry.\n> >\n> > 2. Add support for units for float variables, too. I don't think\n> > this'd be a huge amount of work, and we'd surely have other uses\n> > for it in the long run.\n> >\n> > I'm inclined to go look into #2. 
Anybody think this is a bad idea?\n>\n> Sounds good to me, seems much more likely to be future-proof.\n\nAgreed.\n\n", "msg_date": "Sat, 9 Mar 2019 20:12:26 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On 10/03/2019 06:55, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/9/19 4:28 AM, David Rowley wrote:\n>>> I agree that vacuum_cost_delay might not be granular enough, however.\n>>> If we're going to change the vacuum_cost_delay into microseconds, then\n>>> I'm a little concerned that it'll silently break existing code that\n>>> sets it. Scripts that do manual off-peak vacuums are pretty common\n>>> out in the wild.\n>> Maybe we could leave the default units as msec but store it and allow\n>> specifying as usec. Not sure how well the GUC mechanism would cope with\n>> that.\n> I took a quick look at that and I'm afraid it'd be a mess. GUC doesn't\n> really distinguish between a variable's storage unit, its default input\n> unit, or its default output unit (as seen in e.g. pg_settings). Perhaps\n> we could split those into two or three distinct concepts, but it seems\n> complicated and bug-prone. Also I think we'd still be forced into\n> making obviously-incompatible changes in what pg_settings shows for\n> this variable, since what it shows right now is integer ms. That last\n> isn't a deal-breaker perhaps, but 100% compatibility isn't going to\n> happen this way.\n>\n> The idea of converting vacuum_cost_delay into a float variable, while\n> keeping its native unit as ms, seems probably more feasible from a\n> compatibility standpoint. There are two sub-possibilities:\n>\n> 1. Just do that and lose units support for the variable. 
I don't\n> think this is totally unreasonable, because up to now ms is the\n> *only* workable unit for it:\n>\n> regression=# set vacuum_cost_delay = '1s';\n> ERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n>\n> Still, it'd mean that anyone who'd been explicitly setting it with\n> an \"ms\" qualifier would have to change their postgresql.conf entry.\n>\n> 2. Add support for units for float variables, too. I don't think\n> this'd be a huge amount of work, and we'd surely have other uses\n> for it in the long run.\n>\n> I'm inclined to go look into #2. Anybody think this is a bad idea?\n>\n> \t\t\tregards, tom lane\n>\nHow about keeping the default unit of ms, but converting it to a \n'double' for input, but storing it as int (or long?) number of \nnanoseconds. Gives finer grain of control without having to specify a \nunit, while still allowing calculations to be fast?\n\n\nCheers,\nGavin\n\n\n", "msg_date": "Sun, 10 Mar 2019 09:36:59 +1300", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 9, 2019 at 7:58 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> On 3/9/19 12:55 PM, Tom Lane wrote:\n>>> The idea of converting vacuum_cost_delay into a float variable, while\n>>> keeping its native unit as ms, seems probably more feasible from a\n>>> compatibility standpoint. There are two sub-possibilities:\n>>> ...\n>>> 2. Add support for units for float variables, too. I don't think\n>>> this'd be a huge amount of work, and we'd surely have other uses\n>>> for it in the long run.\n>>> ...\n>>> I'm inclined to go look into #2. Anybody think this is a bad idea?\n\n>> Sounds good to me, seems much more likely to be future-proof.\n\n> Agreed.\n\nI tried this, and it seems to work pretty well. 
The first of the two\nattached patches just teaches guc.c to support units for float values,\nincidentally allowing \"us\" as an input unit for time-based GUCs.\nThe second converts [autovacuum_]cost_delay to float GUCs, and changes\nthe default value for autovacuum_cost_delay from 20ms to 2ms.\nWe'd want to revert the previous patch that changed the default value\nof vacuum_cost_limit, else this means a 100x not 10x change in the\ndefault autovac speed ... but I've not included that in this patch.\n\nSome notes:\n\n1. I hadn't quite absorbed until doing this that we'd need a catversion\nbump because of format change in StdRdOptions. Since this isn't proposed\nfor back-patching, that doesn't seem problematic.\n\n2. It's always bugged me that we don't allow fractional unit\nspecifications, say \"0.1GB\", even for GUCs that are integers underneath.\nThat would be a simple additional change on top of this, but I didn't\ndo it here.\n\n3. I noticed that parse_real doesn't reject infinity or NaN values\nfor float GUCs. This seems like a bug, maybe even a back-patchable one;\nI doubt the planner will react sanely to SET seq_page_cost TO 'NaN'\nfor instance. I didn't do anything about that here either.\n\n4. I've not done anything here about increasing the max value of\n[autovacuum_]vacuum_cost_limit. I have no objection to kicking that\nup 10x or so if somebody wants to do the work, but I'm not sure it's\nvery useful given this patch.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 Mar 2019 16:04:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Gavin Flower <GavinFlower@archidevsys.co.nz> writes:\n> Hope about  keeping the default unit of ms, but converting it to a \n> 'double' for input, but storing it as int (or long?) number of \n> nanoseconds.  
Gives finer grain of control withouthaving to specify a \n> unit, while still allowing calculations to be fast?\n\nDon't really see the point. The only places where we do any calculations\nwith the value are where we're about to sleep, so shaving a few nanosec\ndoesn't seem very interesting.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 09 Mar 2019 16:06:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "BTW ... I noticed while fooling with this that GUC's out-of-range\nmessages can be confusing:\n\nregression=# set vacuum_cost_delay = '1s';\nERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n\nOne's immediate reaction to that is \"I put in 1, not 1000\". I think\nit'd be much clearer if we included the unit we'd converted to, thus:\n\nERROR: 1000 ms is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n\n(Notice that this also implicitly tells what units the range limits\nare being quoted in. We could repeat the unit name in that part,\nviz \"(0 .. 100 ms)\", but it seems unnecessary.)\n\nA small problem with this idea is that GUC_UNIT_[X]BLOCK variables don't\nreally have a natural unit name. If we follow the lead of pg_settings,\nsuch errors would look something like\n\nERROR: 1000 8kB is outside the valid range for ...\n\nI can't think of a better idea, though, and it'd still be clearer than\nwhat happens now.\n\nBarring objections I'll go make this happen.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 09 Mar 2019 17:14:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sat, Mar 9, 2019 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I tried this, and it seems to work pretty well. 
The first of the two\n> attached patches just teaches guc.c to support units for float values,\n> incidentally allowing \"us\" as an input unit for time-based GUCs.\n\nWhy not allow third party extensions to declare a GUC stored in us?\n We need a backward-compatible format for vacuum settings, but I don't\nsee a good reason to force external extensions to do the same, and it\nwouldn't require much extra work.\n\n> The second converts [autovacuum_]cost_delay to float GUCs, and changes\n> the default value for autovacuum_cost_delay from 20ms to 2ms.\n> We'd want to revert the previous patch that changed the default value\n> of vacuum_cost_limit, else this means a 100x not 10x change in the\n> default autovac speed ... but I've not included that in this patch.\n\nOtherwise everything looks good to me. BTW the patches didn't apply\ncleanly with git-apply, but patch -p1 didn't complain.\n\n> 2. It's always bugged me that we don't allow fractional unit\n> specifications, say \"0.1GB\", even for GUCs that are integers underneath.\n> That would be a simple additional change on top of this, but I didn't\n> do it here.\n\nIt annoyed me multiple times, so +1 for making that happen.\n\n", "msg_date": "Sun, 10 Mar 2019 14:10:43 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sat, Mar 9, 2019 at 11:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> BTW ... I noticed while fooling with this that GUC's out-of-range\n> messages can be confusing:\n>\n> regression=# set vacuum_cost_delay = '1s';\n> ERROR: 1000 is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 100)\n>\n> One's immediate reaction to that is \"I put in 1, not 1000\". I think\n> it'd be much clearer if we included the unit we'd converted to, thus:\n>\n> ERROR: 1000 ms is outside the valid range for parameter \"vacuum_cost_delay\" (0 .. 
100)\n>\n> (Notice that this also implicitly tells what units the range limits\n> are being quoted in.\n\nI like it!\n\n> A small problem with this idea is that GUC_UNIT_[X]BLOCK variables don't\n> really have a natural unit name. If we follow the lead of pg_settings,\n> such errors would look something like\n>\n> ERROR: 1000 8kB is outside the valid range for ...\n>\n> I can't think of a better idea, though, and it'd still be clearer than\n> what happens now.\n>\n> Barring objections I'll go make this happen.\n\nNo objection here.\n\n", "msg_date": "Sun, 10 Mar 2019 14:18:26 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 9, 2019 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I tried this, and it seems to work pretty well. The first of the two\n>> attached patches just teaches guc.c to support units for float values,\n>> incidentally allowing \"us\" as an input unit for time-based GUCs.\n\n> Why not allowing third party extensions to declare a GUC stored in us?\n\nI think that adding a new base unit type (GUC_UNIT_US) is possible but\nI'm disinclined to do it on the basis of zero evidence that it's needed.\nOnly three of the five already-known time units are allowed to be base\nunits (ms, s, min are but d and h aren't) so it's not like there's no\nprecedent for excluding this one. Anyway, such a patch would be mostly\northogonal to what I've done here, so it should be considered on its\nown merits.\n\n(BTW, if we're expecting to have GUCs that are meant to measure only\nvery short time intervals, maybe it'd be more forward-looking for\ntheir base unit to be NS not US.)\n\n>> 2. 
It's always bugged me that we don't allow fractional unit\n>> specifications, say \"0.1GB\", even for GUCs that are integers underneath.\n>> That would be a simple additional change on top of this, but I didn't\n>> do it here.\n\n> It annoyed me multiple times, so +1 for making that happen.\n\nOK, will do.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 10 Mar 2019 11:47:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Sun, Mar 10, 2019 at 4:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sat, Mar 9, 2019 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I tried this, and it seems to work pretty well. The first of the two\n> >> attached patches just teaches guc.c to support units for float values,\n> >> incidentally allowing \"us\" as an input unit for time-based GUCs.\n>\n> > Why not allowing third party extensions to declare a GUC stored in us?\n>\n> I think that adding a new base unit type (GUC_UNIT_US) is possible but\n> I'm disinclined to do it on the basis of zero evidence that it's needed.\n> Only three of the five already-known time units are allowed to be base\n> units (ms, s, min are but d and h aren't) so it's not like there's no\n> precedent for excluding this one. Anyway, such a patch would be mostly\n> orthogonal to what I've done here, so it should be considered on its\n> own merits.\n> (BTW, if we're expecting to have GUCs that are meant to measure only\n> very short time intervals, maybe it'd be more forward-looking for\n> their base unit to be NS not US.)\n\nThat's fair.\n\n> >> 2. 
It's always bugged me that we don't allow fractional unit\n> >> specifications, say \"0.1GB\", even for GUCs that are integers underneath.\n> >> That would be a simple additional change on top of this, but I didn't\n> >> do it here.\n>\n> > It annoyed me multiple times, so +1 for making that happen.\n>\n> OK, will do.\n\nThanks!\n\n", "msg_date": "Sun, 10 Mar 2019 16:56:34 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 9, 2019 at 10:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2. It's always bugged me that we don't allow fractional unit\n>> specifications, say \"0.1GB\", even for GUCs that are integers underneath.\n>> That would be a simple additional change on top of this, but I didn't\n>> do it here.\n\n> It annoyed me multiple times, so +1 for making that happen.\n\nThe first patch below does that, but I noticed that if we just do it\nwithout any subtlety, you get results like this:\n\nregression=# set work_mem = '30.1GB';\nSET\nregression=# show work_mem;\n work_mem \n------------\n 31562138kB\n(1 row)\n\nThe second patch is a delta that rounds off to the next smaller unit\nif there is one, producing a less noisy result:\n\nregression=# set work_mem = '30.1GB';\nSET\nregression=# show work_mem;\n work_mem \n----------\n 30822MB\n(1 row)\n\nI'm not sure if that's a good idea or just overthinking the problem.\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Mar 2019 16:58:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" 
}, { "msg_contents": "On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The second patch is a delta that rounds off to the next smaller unit\n> if there is one, producing a less noisy result:\n>\n> regression=# set work_mem = '30.1GB';\n> SET\n> regression=# show work_mem;\n> work_mem\n> ----------\n> 30822MB\n> (1 row)\n>\n> I'm not sure if that's a good idea or just overthinking the problem.\n> Thoughts?\n\nI don't think you're over thinking it. I often have to look at such\nsettings and I'm probably not unique in when I glance at 30822MB I can\nsee that's roughly 30GB, whereas when I look at 31562138kB, I'm either\ncounting digits or reaching for a calculator. This is going to reduce\nthe time it takes for a human to process the pg_settings output, so I\nthink it's a good idea.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Mon, 11 Mar 2019 22:03:11 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "On Mon, Mar 11, 2019 at 10:03 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The second patch is a delta that rounds off to the next smaller unit\n> > if there is one, producing a less noisy result:\n> >\n> > regression=# set work_mem = '30.1GB';\n> > SET\n> > regression=# show work_mem;\n> > work_mem\n> > ----------\n> > 30822MB\n> > (1 row)\n> >\n> > I'm not sure if that's a good idea or just overthinking the problem.\n> > Thoughts?\n>\n> I don't think you're over thinking it. I often have to look at such\n> settings and I'm probably not unique in when I glance at 30822MB I can\n> see that's roughly 30GB, whereas when I look at 31562138kB, I'm either\n> counting digits or reaching for a calculator. 
This is going to reduce\n> the time it takes for a human to process the pg_settings output, so I\n> think it's a good idea.\n\nDefinitely, rounding up will spare people from wasting time to check\nwhat's the actual value.\n\n", "msg_date": "Mon, 11 Mar 2019 13:57:21 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we increase the default vacuum_cost_limit?" }, { "msg_contents": "At Mon, 11 Mar 2019 13:57:21 +0100, Julien Rouhaud <rjuju123@gmail.com> wrote in <CAOBaU_a2tLyonOMJ62=SiDmo84Xo1fy81YA8K=B+=OtTc3sYSQ@mail.gmail.com>\n> On Mon, Mar 11, 2019 at 10:03 AM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> >\n> > On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > The second patch is a delta that rounds off to the next smaller unit\n> > > if there is one, producing a less noisy result:\n> > >\n> > > regression=# set work_mem = '30.1GB';\n> > > SET\n> > > regression=# show work_mem;\n> > > work_mem\n> > > ----------\n> > > 30822MB\n> > > (1 row)\n> > >\n> > > I'm not sure if that's a good idea or just overthinking the problem.\n> > > Thoughts?\n> >\n> > I don't think you're over thinking it. I often have to look at such\n> > settings and I'm probably not unique in when I glance at 30822MB I can\n> > see that's roughly 30GB, whereas when I look at 31562138kB, I'm either\n> > counting digits or reaching for a calculator. This is going to reduce\n> > the time it takes for a human to process the pg_settings output, so I\n> > think it's a good idea.\n> \n> Definitely, rounding up will spare people from wasting time to check\n> what's the actual value.\n\n+1. I don't think it's overthinking, either.\n\nAnyone who specifies memory size in GB won't care about under-MB\nfractions. I don't think '0.01GB' is a sane setting but it being\n10MB doesn't matter. However, I don't think that '0.1d' becoming\n'2h' is reasonable. 
\"10 times per day\" is \"rounded\" to \"12 times\nper day\" by that.\n\nIs it worth showing values with at most two or three fraction\ndigits instead of rounding the value on setting? In the attached\nPoC patch - instead of the 'roundoff-fractions-harder' patch -\nshows values in the shortest exact representation.\n\nwork_mem:\n 31562138 => '30.1 GB'\n 31562137 => '31562137 kB'\n '0.1GB' => '0.1 GB'\n '0.01GB' => '0.01 GB'\n '0.001GB' => '1049 kB'\n\nlock_timeout:\n '0.1h' => '6 min'\n '90 min' => '90 min'\n '120 min' => '2 h'\n '0.1 d' => '0.1 d'\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 12 Mar 2019 12:31:28 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: [Suspect SPAM] Re: Should we increase the default\n vacuum_cost_limit?" }, { "msg_contents": "Sorry, I sent a wrong patch. The attached is the right one.\n\nAt Mon, 11 Mar 2019 13:57:21 +0100, Julien Rouhaud <rjuju123@gmail.com> wrote in <CAOBaU_a2tLyonOMJ62=SiDmo84Xo1fy81YA8K=B+=OtTc3sYSQ@mail.gmail.com>\n> On Mon, Mar 11, 2019 at 10:03 AM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> >\n> > On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > The second patch is a delta that rounds off to the next smaller unit\n> > > if there is one, producing a less noisy result:\n> > >\n> > > regression=# set work_mem = '30.1GB';\n> > > SET\n> > > regression=# show work_mem;\n> > > work_mem\n> > > ----------\n> > > 30822MB\n> > > (1 row)\n> > >\n> > > I'm not sure if that's a good idea or just overthinking the problem.\n> > > Thoughts?\n> >\n> > I don't think you're over thinking it. I often have to look at such\n> > settings and I'm probably not unique in when I glance at 30822MB I can\n> > see that's roughly 30GB, whereas when I look at 31562138kB, I'm either\n> > counting digits or reaching for a calculator. 
This is going to reduce\n> > the time it takes for a human to process the pg_settings output, so I\n> > think it's a good idea.\n> \n> Definitely, rounding up will spare people from wasting time to check\n> what's the actual value.\n\n+1. I don't think it overthinking, too.\n\nAnyone who specifies memory size in GB won't care under-MB\nfraction. I don't think '0.01GB' is a sane setting but it being\n10MB doesn't matter. However, I don't think that '0.1d' becoming\n'2h' is reasonable. \"10 times per day\" is \"rounded\" to \"12 times\nper day\" by that.\n\nIs it worth showing values with at most two or three fraction\ndigits instead of rounding the value on setting? In the attached\nPoC patch - instead of the 'roundoff-fractions-harder' patch -\nshows values in the shortest exact representation.\n\nwork_mem:\n 31562138 => '30.1 GB'\n 31562137 => '31562137 kB'\n '0.1GB' => '0.1 GB'\n '0.01GB' => '0.01 GB'\n '0.001GB' => '1049 kB'\n\nlock_timeout:\n '0.1h' => '6 min'\n '90 min' => '90 min'\n '120 min' => '2 h'\n '0.1 d' => '0.1 d'\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 12 Mar 2019 12:45:59 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: [Suspect SPAM] Re: Should we increase the default\n vacuum_cost_limit?" } ]
[ { "msg_contents": "hi,\n\nI have researched buffer manager with postgresql9.4.18.\nFor exact result, I must change IO style to Direct IO.\nThen I changed the align and O_DIRECT flags according to the version, and\nset the options at compile time.\n\nHowever, when I run postgres with the modified version, I get ERROR: could\nnot read block 0 in file \"global/XXXXX\": Bad address as soon as I started\nthe program.\nBut I do not have any idea where the code was changed with a wrong way.\n\nI tried changing ext4 to xfs because of file system problem, but the same\nproblem occurred.\n\nWhich part can I find a problem with the above error? or Do you have\nguesswork where the problem is?\n", "msg_date": "Mon, 25 Feb 2019 15:36:58 +0900", "msg_from": "=?UTF-8?B?7J207J6s7ZuI?= <ljhh0611@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: could not read block 0 in file \"global/XXXXX\": Bad address\n Problem" } ]
[ { "msg_contents": "Hi,\nI did upgrade of my test pg. Part of this is pg_dump -Fd of each\ndatabase, then upgrade binaries, then initdb, and pg_restore.\n\nBut - I can't restore any database that has any data - I get segfaults.\n\nFor example, with gdb:\n\n=$ gdb --args pg_restore -vvvvv -C -Fd backup-20190225074600.10361-db-depesz.dump\nGNU gdb (Debian 7.12-6) 7.12.0.20161007-git\nCopyright (C) 2016 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>. \nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\"...\nReading symbols from pg_restore...done. \n(gdb) run \nStarting program: /home/pgdba/work/bin/pg_restore -vvvvv -C -Fd backup-20190225074600.10361-db-depesz.dump\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\". 
\n-- \n-- PostgreSQL database dump \n-- \n \n-- Dumped from database version 12devel\n-- Dumped by pg_dump version 12devel\n\n-- Started on 2019-02-25 07:46:01 CET\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET row_security = off;\n\npg_restore: creating DATABASE \"depesz\"\n--\n-- TOC entry 2148 (class 1262 OID 16631)\n-- Name: depesz; Type: DATABASE; Schema: -; Owner: depesz\n--\n\nCREATE DATABASE depesz WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';\n\n\nALTER DATABASE depesz OWNER TO depesz;\n\npg_restore: connecting to new database \"depesz\"\n\\connect depesz\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET row_security = off;\n\npg_restore: creating TABLE \"public.test\"\nSET default_tablespace = '';\n\nSET default_with_oids = false;\n\n--\n-- TOC entry 196 (class 1259 OID 16632)\n-- Name: test; Type: TABLE; Schema: public; Owner: depesz\n--\n\nCREATE TABLE public.test (\n i integer\n);\n\n\nALTER TABLE public.test OWNER TO depesz;\n\n--\n-- TOC entry 2142 (class 0 OID 16632)\n-- Dependencies: 196\n-- Data for Name: test; Type: TABLE DATA; Schema: public; Owner: depesz\n-- File: 2142.dat\n--\n\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x000055555555d99c in _printTocEntry (AH=AH@entry=0x55555577e350, te=te@entry=0x5555557844a0, isData=isData@entry=true) at pg_backup_archiver.c:3636\n3636 pg_backup_archiver.c: No such file or directory.\n(gdb) bt\n#0 0x000055555555d99c in _printTocEntry 
(AH=AH@entry=0x55555577e350, te=te@entry=0x5555557844a0, isData=isData@entry=true) at pg_backup_archiver.c:3636\n#1 0x000055555555e41d in restore_toc_entry (AH=AH@entry=0x55555577e350, te=te@entry=0x5555557844a0, is_parallel=is_parallel@entry=false) at pg_backup_archiver.c:852\n#2 0x000055555555eebb in RestoreArchive (AHX=0x55555577e350) at pg_backup_archiver.c:675\n#3 0x000055555555aaab in main (argc=5, argv=<optimized out>) at pg_restore.c:432\n(gdb)\n\nif you'd need the dump to investigate - it's available here:\nhttps://www.depesz.com/wp-content/uploads/2019/02/bad.dump.tar.gz\n\nUnfortunately I don't have previous binaries, but I refreshed previously\naround a week ago.\n\nBest regards,\n\ndepesz\n\n\n", "msg_date": "Mon, 25 Feb 2019 08:45:39 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Mon, Feb 25, 2019 at 8:45 AM hubert depesz lubaczewski <depesz@depesz.com> wrote:\n>\n> I did upgrade of my test pg. Part of this is pg_dump -Fd of each\n> database, then upgrade binaries, then initdb, and pg_restore.\n>\n> But - I can't restore any database that has any data - I get segfaults.\n\nThank for reporting. Unfortunately, I can't reproduce this issue on the master\n(for me it's currently bc09d5e4cc) with the dump you've provided - do I need to\ndo something more than just pg_restore to trigger it?\n\n", "msg_date": "Mon, 25 Feb 2019 11:01:05 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Mon, Feb 25, 2019 at 11:01:05AM +0100, Dmitry Dolgov wrote:\n> > On Mon, Feb 25, 2019 at 8:45 AM hubert depesz lubaczewski <depesz@depesz.com> wrote:\n> >\n> > I did upgrade of my test pg. 
Part of this is pg_dump -Fd of each\n> > database, then upgrade binaries, then initdb, and pg_restore.\n> >\n> > But - I can't restore any database that has any data - I get segfaults.\n> \n> Thank for reporting. Unfortunately, I can't reproduce this issue on the master\n> (for me it's currently bc09d5e4cc) with the dump you've provided - do I need to\n> do something more than just pg_restore to trigger it?\n\nWhat's crashing for me is restoring the (12dev) dumpfile using v11 psql:\n\n[pryzbyj@database tmp]$ pg_restore backup-20190225074600.10361-db-depesz.dump >/dev/null\nSegmentation fault (core dumped)\n[pryzbyj@database tmp]$ pg_restore -V\npg_restore (PostgreSQL) 11.2\n\nI would restore dump into v12dev and re-dump and compare dump output, except it\nseems like pg_restore -d no longer restores into a database ??\n\n[pryzbyj@database tmp]$ PGHOST=/tmp PGPORT=5678 PATH=~/src/postgresql.bin/bin pg_restore backup-20190225074600.10361-db-depesz.dump -d postgres |wc -l\n44\n\nJustin\n\n", "msg_date": "Mon, 25 Feb 2019 09:15:31 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> What's crashing for me is restoring the (12dev) dumpfile using v11 psql:\n\nYeah, I can reproduce that here, using either -Fc or -Fd format dumps.\nThe immediate problem in your example is that the \"defn\" field of a\nTABLE DATA entry is now null where it used to be an empty string.\nPoking at related examples suggests that other fields have suffered\nthe same fate.\n\nIt appears to me that f831d4acc required a good deal more adult\nsupervision than it actually got. 
That was alleged to be a small\nnotational refactoring, not a redefinition of what gets put into\ndump files.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 25 Feb 2019 11:20:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Mon, Feb 25, 2019 at 08:45:39AM +0100, hubert depesz lubaczewski wrote:\n> Hi,\n> I did upgrade of my test pg. Part of this is pg_dump -Fd of each\n> database, then upgrade binaries, then initdb, and pg_restore.\n\nSorry, please disregard this problem.\n\nError was sitting on a chair.\n\nBest regards,\n\ndepesz\n\n\n", "msg_date": "Mon, 25 Feb 2019 17:20:54 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Mon, Feb 25, 2019 at 11:20:14AM -0500, Tom Lane wrote:\n> It appears to me that f831d4acc required a good deal more adult\n> supervision than it actually got. That was alleged to be a small\n> notational refactoring, not a redefinition of what gets put into\n> dump files.\n\nHow much consistent do we need to be for custom dump archives\nregarding backward and upward-compatibility? For dumps, we give no\nguarantee that a dump taken with pg_dump on version N will be\ncompatible with a backend at version (N+1), so there is a effort for\nbackward compatibility, not really upward compatibility.\n\nIt seems to me that f831d4acc should have bumped at least\nMAKE_ARCHIVE_VERSION as it changes the dump contents, still it seems\nlike a lot of for some refactoring? 
FWIW, I have gone through the\ncommit's thread and I actually agree that instead of a mix of empty\nstrings and NULL, using only NULL is cleaner.\n--\nMichael", "msg_date": "Tue, 26 Feb 2019 13:37:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Feb 25, 2019 at 11:20:14AM -0500, Tom Lane wrote:\n>> It appears to me that f831d4acc required a good deal more adult\n>> supervision than it actually got. That was alleged to be a small\n>> notational refactoring, not a redefinition of what gets put into\n>> dump files.\n\n> How much consistent do we need to be for custom dump archives\n> regarding backward and upward-compatibility?\n\nWell, if we didn't want to fix this, a reasonable way to go about\nit would be to bump the archive version number in pg_dump output,\nso that old versions would issue a useful complaint instead of crashing.\nHowever, I repeat that this patch was sold as a notational improvement,\nnot something that was going to break format compatibility. I think if\nanyone had mentioned the latter, there would have been push-back against\nits being committed at all. I am providing such push-back right now,\nbecause I don't think we should break file compatibility for this.\n\nAlso, I'm now realizing that 4dbe19690 was probably not fixing an\naboriginal bug, but something that this patch introduced, because\nwe'd likely have noticed it before if the owner field could have\nbeen a null pointer all along. How much do you want to bet on\nwhether there are other such bugs now, that we haven't yet tripped\nover?\n\nI think this patch needs to be worked over so that what it writes\nis exactly what was written before. 
If the author is unwilling\nto do that PDQ, it should be reverted.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 00:16:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Tue, Feb 26, 2019 at 12:16:35AM -0500, Tom Lane wrote:\n> Well, if we didn't want to fix this, a reasonable way to go about\n> it would be to bump the archive version number in pg_dump output,\n> so that old versions would issue a useful complaint instead of crashing.\n> However, I repeat that this patch was sold as a notational improvement,\n> not something that was going to break format compatibility. I think if\n> anyone had mentioned the latter, there would have been push-back against\n> its being committed at all. I am providing such push-back right now,\n> because I don't think we should break file compatibility for this.\n\nWhile I agree that the patch makes handling of the different fields in\narchive entries cleaner, I agree as well that this is not enough to\njustify a dump version bump.\n\n> I think this patch needs to be worked over so that what it writes\n> is exactly what was written before. If the author is unwilling\n> to do that PDQ, it should be reverted.\n\nWorks for me. With a quick read of the code, it seems to me that it\nis possible to keep compatibility while keeping the simplifications\naround ArchiveEntry()'s refactoring. Alvaro?\n--\nMichael", "msg_date": "Tue, 26 Feb 2019 14:37:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Tue, Feb 26, 2019 at 6:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Works for me. 
With a quick read of the code, it seems to me that it\n> is possible to keep compatibility while keeping the simplifications\n> around ArchiveEntry()'s refactoring.\n\nYes, it should be rather simple, we can e.g. return to the old less consistent\nNULL handling approach something (like in the attached patch), or replace a NULL\nvalue with an empty string in WriteToc. Give me a moment, I'll check it out. At\nthe same time I would suggest to keep replace_line_endings -> sanitize_line,\nsince it doesn't break compatibility.", "msg_date": "Tue, 26 Feb 2019 09:13:49 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Tue, Feb 26, 2019 at 9:13 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Tue, Feb 26, 2019 at 6:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Works for me. With a quick read of the code, it seems to me that it\n> > is possible to keep compatibility while keeping the simplifications\n> > around ArchiveEntry()'s refactoring.\n>\n> Yes, it should be rather simple, we can e.g. return to the old less consistent\n> NULL handling approach something (like in the attached patch), or replace a NULL\n> value with an empty string in WriteToc. Give me a moment, I'll check it out. 
At\n> the same time I would suggest to keep replace_line_endings -> sanitize_line,\n> since it doesn't break compatibility.\n\nOk, unfortunately, I don't see any other ways, so the patch from the previous\nemail is probably the best option (we could also check NULLs in WriteToc, but\nit means the same kind of inconsistency, and in this case I guess it's better\nto keep NULL handling as it was before).\n\nBut I hope it still makes sense to consider more consisten approach here, maybe\nwith the next dump version bump, if it's going to happen in the foreseeable\nfuture.\n\n", "msg_date": "Tue, 26 Feb 2019 11:47:13 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Feb-26, Michael Paquier wrote:\n\n> On Tue, Feb 26, 2019 at 12:16:35AM -0500, Tom Lane wrote:\n> > Well, if we didn't want to fix this, a reasonable way to go about\n> > it would be to bump the archive version number in pg_dump output,\n> > so that old versions would issue a useful complaint instead of crashing.\n> > However, I repeat that this patch was sold as a notational improvement,\n> > not something that was going to break format compatibility. I think if\n> > anyone had mentioned the latter, there would have been push-back against\n> > its being committed at all. I am providing such push-back right now,\n> > because I don't think we should break file compatibility for this.\n> \n> While I agree that the patch makes handling of the different fields in\n> archive entries cleaner, I agree as well that this is not enough to\n> justify a dump version bump.\n\nYeah, it was a nice thing to have, but I didn't keep in mind that we\nought to be able to provide such upwards compatibility. (Is this\nupwards or downwards or backwards or forwards compatibility, now,\nactually? 
I can't quite figure it out which direction it goes.)\n\n\n> > I think this patch needs to be worked over so that what it writes\n> > is exactly what was written before. If the author is unwilling\n> > to do that PDQ, it should be reverted.\n> \n> Works for me. With a quick read of the code, it seems to me that it\n> is possible to keep compatibility while keeping the simplifications\n> around ArchiveEntry()'s refactoring. Alvaro?\n\nYeah, let me review the patch Dmitry just sent.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 26 Feb 2019 19:49:53 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Feb-26, Dmitry Dolgov wrote:\n\n> Yes, it should be rather simple, we can e.g. return to the old less consistent\n> NULL handling approach something (like in the attached patch), or replace a NULL\n> value with an empty string in WriteToc. Give me a moment, I'll check it out. 
At\n> the same time I would suggest to keep replace_line_endings -> sanitize_line,\n> since it doesn't break compatibility.\n\nHmm, shouldn't we modify sanitize_line so that it returns strdup(hyphen)\nwhen input is empty and want_hyphen, too?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 26 Feb 2019 19:53:10 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Hmm, shouldn't we modify sanitize_line so that it returns strdup(hyphen)\n> when input is empty and want_hyphen, too?\n\nIf this patch is touching the behavior of functions like that, then\nit's going in the wrong direction; the need for any such change suggests\nstrongly that you've failed to restore the old behavior as to which TOC\nfields can be null or not.\n\nThere might be reason to make such cleanups/improvements separately,\nbut let's *not* fuzz things up by doing them in the corrective patch.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 18:02:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Feb-26, Dmitry Dolgov wrote:\n\n> > On Tue, Feb 26, 2019 at 6:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Works for me. With a quick read of the code, it seems to me that it\n> > is possible to keep compatibility while keeping the simplifications\n> > around ArchiveEntry()'s refactoring.\n> \n> Yes, it should be rather simple, we can e.g. return to the old less consistent\n> NULL handling approach something (like in the attached patch), or replace a NULL\n> value with an empty string in WriteToc. Give me a moment, I'll check it out. 
At\n> the same time I would suggest to keep replace_line_endings -> sanitize_line,\n> since it doesn't break compatibility.\n\nI think it would be better to just put back the .defn = \"\" (etc) to the\nArchiveEntry calls.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 26 Feb 2019 20:42:29 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I think it would be better to just put back the .defn = \"\" (etc) to the\n> ArchiveEntry calls.\n\nYeah, that was what I was imagining --- just make the ArchiveEntry calls\nact exactly like they did before in terms of what gets filled into the\nTOC fields. This episode is a fine reminder of why premature optimization\nis the root of all evil...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 18:55:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Tue, Feb 26, 2019 at 11:53 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Feb-26, Dmitry Dolgov wrote:\n>\n> > Yes, it should be rather simple, we can e.g. return to the old less consistent\n> > NULL handling approach something (like in the attached patch), or replace a NULL\n> > value with an empty string in WriteToc. Give me a moment, I'll check it out. 
At\n> > the same time I would suggest to keep replace_line_endings -> sanitize_line,\n> > since it doesn't break compatibility.\n>\n> Hmm, shouldn't we modify sanitize_line so that it returns strdup(hyphen)\n> when input is empty and want_hyphen, too?\n\nYes, you're right.\n\n> I think it would be better to just put back the .defn = \"\" (etc) to the\n> ArchiveEntry calls.\n\nThen we should do this not only for defn, but for owner and dropStmt too. I can\nupdate the fix patch I've sent before, if it's preferrable approach in this\nparticular situation. But I hope there are no objections if I'll then submit\nthe original changes with more consistent null handling separately to make\ndecision about them more consciously.\n\n", "msg_date": "Wed, 27 Feb 2019 10:36:22 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Feb-27, Dmitry Dolgov wrote:\n\n> > On Tue, Feb 26, 2019 at 11:53 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > I think it would be better to just put back the .defn = \"\" (etc) to the\n> > ArchiveEntry calls.\n> \n> Then we should do this not only for defn, but for owner and dropStmt too.\n\nYeah, absolutely.\n\n> I can\n> update the fix patch I've sent before, if it's preferrable approach in this\n> particular situation.\n\nI'm not sure we need those changes, since we're forced to update all\ncallsites anyway.\n\n> But I hope there are no objections if I'll then submit the original\n> changes with more consistent null handling separately to make decision\n> about them more consciously.\n\nI think we should save such a patch for whenever we next update the\narchive version number, which could take a couple of years given past\nhistory. 
I'm inclined to add a comment near K_VERS_SELF to remind\nwhoever next patches it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 27 Feb 2019 09:32:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Feb-27, Dmitry Dolgov wrote:\n>> But I hope there are no objections if I'll then submit the original\n>> changes with more consistent null handling separately to make decision\n>> about them more consciously.\n\n> I think we should save such a patch for whenever we next update the\n> archive version number, which could take a couple of years given past\n> history. I'm inclined to add a comment near K_VERS_SELF to remind\n> whoever next patches it.\n\n+1. This isn't an unreasonable cleanup idea, but being only a cleanup\nidea, it doesn't seem worth creating compatibility issues for. 
Let's\nwait till there is some more-pressing reason to change the archive format,\nand then fix this in the same release cycle.\n\nI'd also note that given what we've seen so far, there are going to be\nsome slow-to-flush-out null pointer dereferencing bugs from this.\nI'm not really eager to introduce that towards the tail end of a devel\ncycle.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 27 Feb 2019 12:02:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Wed, Feb 27, 2019 at 1:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > > I think it would be better to just put back the .defn = \"\" (etc) to the\n> > > ArchiveEntry calls.\n> >\n> > Then we should do this not only for defn, but for owner and dropStmt too.\n>\n> Yeah, absolutely.\n\nDone, please find the attached patch.\n\n> > I can\n> > update the fix patch I've sent before, if it's preferrable approach in this\n> > particular situation.\n>\n> I'm not sure we need those changes, since we're forced to update all\n> callsites anyway.\n\nI guess we can keep the part about removing null checks before using strlen,\nsince it's going to be useless.\n\n> On Wed, Feb 27, 2019 at 10:36 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Tue, Feb 26, 2019 at 11:53 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Feb-26, Dmitry Dolgov wrote:\n> >\n> > > Yes, it should be rather simple, we can e.g. return to the old less consistent\n> > > NULL handling approach something (like in the attached patch), or replace a NULL\n> > > value with an empty string in WriteToc. Give me a moment, I'll check it out. 
At\n> > > the same time I would suggest to keep replace_line_endings -> sanitize_line,\n> > > since it doesn't break compatibility.\n> >\n> > Hmm, shouldn't we modify sanitize_line so that it returns strdup(hyphen)\n> > when input is empty and want_hyphen, too?\n>\n> Yes, you're right.\n\nI've looked closer, and looks like I was mistaken. In the only place where it\nmatters we anyway pass NULL after verifying noOwner:\n\n sanitized_owner = sanitize_line(ropt->noOwner ? NULL : te->owner, true);\n\nSo I haven't change sanitize_line yet, but can update it if there is a strong\nopinion about this function.", "msg_date": "Wed, 27 Feb 2019 18:21:08 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Wed, Feb 27, 2019 at 12:02:43PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I think we should save such a patch for whenever we next update the\n>> archive version number, which could take a couple of years given past\n>> history. I'm inclined to add a comment near K_VERS_SELF to remind\n>> whoever next patches it.\n> \n> +1. This isn't an unreasonable cleanup idea, but being only a cleanup\n> idea, it doesn't seem worth creating compatibility issues for. Let's\n> wait till there is some more-pressing reason to change the archive format,\n> and then fix this in the same release cycle.\n\n+1. 
Having a comment as reminder would be really nice.\n--\nMichael", "msg_date": "Thu, 28 Feb 2019 11:10:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Feb-27, Dmitry Dolgov wrote:\n\n> > On Wed, Feb 27, 2019 at 1:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > > > I think it would be better to just put back the .defn = \"\" (etc) to the\n> > > > ArchiveEntry calls.\n> > >\n> > > Then we should do this not only for defn, but for owner and dropStmt too.\n> >\n> > Yeah, absolutely.\n> \n> Done, please find the attached patch.\n\nPushed, thanks. I added the reminder comment I mentioned.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 28 Feb 2019 17:24:43 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Thu, Feb 28, 2019 at 9:24 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Pushed, thanks. I added the reminder comment I mentioned.\n\nThank you, sorry for troubles.\n\n", "msg_date": "Thu, 28 Feb 2019 21:38:27 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Hi,\n\nOn 2019-02-27 09:32:17 -0300, Alvaro Herrera wrote:\n> On 2019-Feb-27, Dmitry Dolgov wrote:\n> > But I hope there are no objections if I'll then submit the original\n> > changes with more consistent null handling separately to make decision\n> > about them more consciously.\n> \n> I think we should save such a patch for whenever we next update the\n> archive version number, which could take a couple of years given past\n> history. 
I'm inclined to add a comment near K_VERS_SELF to remind\n> whoever next patches it.\n\nThe pluggable storage patchset contains exactly that... I've attached\nthe precursor patch (CREATE ACCESS METHOD ... TYPE TABLE), and the patch\nfor pg_dump support. They need a bit more cleanup, but it might be\nuseful information for this thread.\n\nOne thing I want to bring up here rather than in the pluggable storage\nthread is that currently the pg_dump support for access methods deals\nwith table access methods in a manner similar to the way we deal with\ntablespaces. Instead of specifying the AM on every table creation, we\nset the default AM when needed. That makes it easier to adjust dumps.\n\nBut it does basically require breaking archive compatibility. I\npersonally am OK with that, but I thought it might be worth discussing.\n\nI guess we could try to avoid the compat issue by only increasing the\narchive format if there actually are any non-default AMs, but to me that\ndoesn't seem like an improvement worthy of the necessary complications.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 4 Mar 2019 10:15:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One thing I want to bring up here rather than in the pluggable storage\n> thread is that currently the pg_dump support for access methods deals\n> with table access methods in a manner similar to the way we deal with\n> tablespaces. Instead of specifying the AM on every table creation, we\n> set the default AM when needed. That makes it easier to adjust dumps.\n\nHm. I wonder if it'd make more sense to consider that an access method is\na property of a tablespace? That is, all tables in a tablespace have the\nsame access method, so you don't need to label tables individually?\n\n> But it does basically require breaking archive compatibility. 
I\n> personally am OK with that, but I thought it might be worth discussing.\n\nI don't recall there being huge pushback when we did that in the past,\nso I'm fine with it as long as there's an identifiable feature making\nit necessary.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 04 Mar 2019 13:25:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Hi,\n\nOn 2019-03-04 13:25:40 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One thing I want to bring up here rather than in the pluggable storage\n> > thread is that currently the pg_dump support for access methods deals\n> > with table access methods in a manner similar to the way we deal with\n> > tablespaces. Instead of specifying the AM on every table creation, we\n> > set the default AM when needed. That makes it easier to adjust dumps.\n> \n> Hm. I wonder if it'd make more sense to consider that an access method is\n> a property of a tablespace? That is, all tables in a tablespace have the\n> same access method, so you don't need to label tables individually?\n\nI don't think that'd work well. That'd basically necessitate creating\nmultiple tablespaces just to create a table with a different AM -\ncreating tablespaces is a superuser only activity that makes backups etc\nmore complicated. It also doesn't correspond well to pg_class.relam etc.\n\n\n> > But it does basically require breaking archive compatibility. 
I\n> > personally am OK with that, but I thought it might be worth discussing.\n> \n> I don't recall there being huge pushback when we did that in the past,\n> so I'm fine with it as long as there's an identifiable feature making\n> it necessary.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Mon, 4 Mar 2019 10:32:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "> On Mon, Mar 4, 2019 at 7:15 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> The pluggable storage patchset contains exactly that... I've attached\n> the precursor patch (CREATE ACCESS METHOD ... TYPE TABLE), and the patch\n> for pg_dump support. They need a bit more cleanup, but it might be\n> useful information for this thread.\n\nDidn't expect this to happen so quickly, thanks!\n\n> On 2019-03-04 13:25:40 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > >\n> > > But it does basically require breaking archive compatibility. I\n> > > personally am OK with that, but I thought it might be worth discussing.\n> >\n> > I don't recall there being huge pushback when we did that in the past,\n> > so I'm fine with it as long as there's an identifiable feature making\n> > it necessary.\n>\n> Cool.\n\nThen I guess we need to add the attached patch on top of a pg_dump support for\ntable am. It contains changes to use NULL as a default value for owner / defn /\ndropStmt (exactly what we've changed back in 19455c9f56), and doesn't crash,\nsince K_VERS is different.", "msg_date": "Sun, 10 Mar 2019 22:38:10 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On 2019-Mar-10, Dmitry Dolgov wrote:\n\n> Then I guess we need to add the attached patch on top of a pg_dump support for\n> table am. 
It contains changes to use NULL as a default value for owner / defn /\n> dropStmt (exactly what we've changed back in 19455c9f56), and doesn't crash,\n> since K_VERS is different.\n\nI think this is an open item; we were supposed to do it right after\n3b925e905de3, but failed to notice.\n\nWould anybody be too upset if I push this patch now? CC'ed RMT.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Apr 2019 15:05:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Hi,\n\nOn 2019-04-25 15:05:47 -0400, Alvaro Herrera wrote:\n> On 2019-Mar-10, Dmitry Dolgov wrote:\n> \n> > Then I guess we need to add the attached patch on top of a pg_dump support for\n> > table am. It contains changes to use NULL as a default value for owner / defn /\n> > dropStmt (exactly what we've changed back in 19455c9f56), and doesn't crash,\n> > since K_VERS is different.\n> \n> I think this is an open item; we were supposed to do it right after\n> 3b925e905de3, but failed to notice.\n> \n> Would anybody be too upset if I push this patch now? CC'ed RMT.\n\nI think that'd make sense. 
The rest of the RMT probably isn't awake\n> however, so I think it'd be good to give them 24h to object.\n\nIt would be nice to clean all that now, so +1 from me to apply the\npatch.\n--\nMichael", "msg_date": "Fri, 26 Apr 2019 11:55:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "On Fri, Apr 26, 2019 at 11:55:17AM +0900, Michael Paquier wrote:\n>On Thu, Apr 25, 2019 at 12:54:06PM -0700, Andres Freund wrote:\n>> I think that'd make sense. The rest of the RMT probably isn't awake\n>> however, so I think it'd be good to give them 24h to object.\n>\n>It would be nice to clean all that now, so +1 from me to apply the\n>patch.\n\n+1 from me too\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 26 Apr 2019 15:42:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" }, { "msg_contents": "Thanks! Pushed. Marking the open item as closed too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Apr 2019 12:38:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Segfault when restoring -Fd dump on current HEAD" } ]
[ { "msg_contents": "Hi hackers,\n\nSmall issue with readdir implementation for Windows.\nRight now it returns ENOENT in case of any error returned by FindFirstFile.\nSo all places in Postgres where opendir/readdir are used will assume \nthat directory is empty and\ndo nothing without reporting any error.\nIt is not so good if directory is actually not empty but there are not \nenough permissions for accessing the directory and FindFirstFile\nreturns ERROR_ACCESS_DENIED:\n\nstruct dirent *\nreaddir(DIR *d)\n{\n     WIN32_FIND_DATA fd;\n\n     if (d->handle == INVALID_HANDLE_VALUE)\n     {\n         d->handle = FindFirstFile(d->dirname, &fd);\n         if (d->handle == INVALID_HANDLE_VALUE)\n         {\n             errno = ENOENT;\n             return NULL;\n         }\n     }\n\n\nAttached please find small patch fixing the problem.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 25 Feb 2019 18:38:16 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "readdir is incorrectly implemented at Windows" }, { "msg_contents": "Originally bug was reported by Yuri Kurenkov:\nhttps://github.com/postgrespro/pg_probackup/issues/48\n\nAs pg_probackup relies on readdir() for listing files to backup, wrong \npermissions could lead to a broken backup.\n\nOn 02/25/2019 06:38 PM, Konstantin Knizhnik wrote:\n> Hi hackers,\n>\n> Small issue with readdir implementation for Windows.\n> Right now it returns ENOENT in case of any error returned by \n> FindFirstFile.\n> So all places in Postgres where opendir/readdir are used will assume \n> that directory is empty and\n> do nothing without reporting any error.\n> It is not so good if directory is actually not empty but there are not \n> enough permissions for accessing the directory and FindFirstFile\n> returns ERROR_ACCESS_DENIED:\n>\n> struct dirent *\n> readdir(DIR *d)\n> {\n> WIN32_FIND_DATA fd;\n>\n> if 
(d->handle == INVALID_HANDLE_VALUE)\n> {\n> d->handle = FindFirstFile(d->dirname, &fd);\n> if (d->handle == INVALID_HANDLE_VALUE)\n> {\n> errno = ENOENT;\n> return NULL;\n> }\n> }\n>\n>\n> Attached please find small patch fixing the problem.\n>\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 27 Feb 2019 17:00:52 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "On Mon, Feb 25, 2019 at 06:38:16PM +0300, Konstantin Knizhnik wrote:\n> Small issue with readir implementation for Windows.\n> Right now it returns ENOENT in case of any error returned by FindFirstFile.\n> So all places in Postgres where opendir/listdir are used will assume that\n> directory is empty and\n> do nothing without reporting any error.\n> It is not so good if directory is actually not empty but there are not\n> enough permissions for accessing the directory and FindFirstFile\n> returns ERROR_ACCESS_DENIED:\n\nYeah, I think that you are right here. We should have a clean error\nif permissions are incorrect, and your patch does what is mentioned on\nWindows docs:\nhttps://docs.microsoft.com/en-us/windows/desktop/api/fileapi/nf-fileapi-findfirstfilea\n\nCould you add it to the next commit fest as a bug fix please? I think\nthat I will be able to look at that in details soon, but if not it\nwould be better to not lose track of your fix.\n--\nMichael", "msg_date": "Thu, 28 Feb 2019 11:15:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "On Thu, Feb 28, 2019 at 11:15:53AM +0900, Michael Paquier wrote:\n> Could you add it to the next commit fest as a bug fix please? 
I think\n> that I will be able to look at that in details soon, but if not it\n> would be better to not lose track of your fix.\n\nOkay, I have looked at your patch. And double-checked on Windows. To\nput it short, I agree with the approach you are taking. I have been\ncurious about the mention to MinGW in the existing code as well as in\nthe patch you are proposing, and I have checked if what your patch and\nwhat the current state are correct, and I think that HEAD is wrong on\none thing.\n\nFirst mingw64 code can be found here:\nhttps://sourceforge.net/p/mingw-w64/mingw-w64/ci/master/tree/\nhttps://github.com/Alexpux/mingw-w64/\n\nThen, the implementation of readdir/opendir/closedir can be found in\nmingw-w64-crt/misc/dirent.c. Looking at their implementation of\nreaddir, I can see two things:\n1) When beginning a search in a directory, _tfindfirst gets used,\nwhich returns ENOENT as error if no matches are found:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/findfirst-functions?view=vs-2017\nSo from this point of view your patch is right: you make readdir()\nreturn errno=0 which matches the normal *nix behaviors. And MinGW\ndoes not do that.\n2) On follow-up lookups, MinGW code uses _tfindnext, and actually\n*enforces* errno=0 when seeing ERROR_NO_MORE_FILES. So from this\npoint of view the code's comment in HEAD is incorrect as of today.\nThe current implementation exists since 399a36a so perhaps MinGW was\nnot like that when dirent.c has been added in src/port/, but that's\nnot true today. So let's fix the comment at the same time.\n\nAttached is an updated patch with my suggestions. Does it look fine\nto you?\n\nAlso, I think that we should credit Yuri Kurenkov for the discovery of\nthe issue, with yourself, Konstantin, as the author of the patch.\nAre there other people involved which should be credited? 
Like\nGrigory?\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 15:13:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "\n\nOn 01.03.2019 9:13, Michael Paquier wrote:\n> On Thu, Feb 28, 2019 at 11:15:53AM +0900, Michael Paquier wrote:\n>> Could you add it to the next commit fest as a bug fix please? I think\n>> that I will be able to look at that in details soon, but if not it\n>> would be better to not lose track of your fix.\n> Okay, I have looked at your patch. And double-checked on Windows. To\n> put it short, I agree with the approach you are taking. I have been\n> curious about the mention to MinGW in the existing code as well as in\n> the patch you are proposing, and I have checked if what your patch and\n> what the current state are correct, and I think that HEAD is wrong on\n> one thing.\n>\n> First mingw64 code can be found here:\n> https://sourceforge.net/p/mingw-w64/mingw-w64/ci/master/tree/\n> https://github.com/Alexpux/mingw-w64/\n>\n> Then, the implementation of readdir/opendir/closedir can be found in\n> mingw-w64-crt/misc/dirent.c. Looking at their implementation of\n> readdir, I can see two things:\n> 1) When beginning a search in a directory, _tfindfirst gets used,\n> which returns ENOENT as error if no matches are found:\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/findfirst-functions?view=vs-2017\n> So from this point of view your patch is right: you make readdir()\n> return errno=0 which matches the normal *nix behaviors. And MinGW\n> does not do that.\n> 2) On follow-up lookups, MinGW code uses _tfindnext, and actually\n> *enforces* errno=0 when seeing ERROR_NO_MORE_FILES. 
So from this\n> point of view the code's comment in HEAD is incorrect as of today.\n> The current implementation exists since 399a36a so perhaps MinGW was\n> not like that when dirent.c has been added in src/port/, but that's\n> not true today. So let's fix the comment at the same time.\n>\n> Attached is an updated patch with my suggestions. Does it look fine\n> to you?\nYes, certainly.\n\n>\n> Also, I think that we should credit Yuri Kurenkov for the discovery of\n> the issue, with yourself, Konstantin, as the author of the patch.\n> Are there other people involved which should be credited? Like\n> Grigory?\n\nYes, Yuri Kurenkov and Grigory Smolkin did a lot in investigation of \nthis problem.\n(the irony is that the problem detected by Yuri was caused by another \nbug in pg_probackup, but we thought\nthat it was related with permissions and came to this issue).\n\n> --\n> Michael\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 1 Mar 2019 10:23:02 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "On Fri, Mar 01, 2019 at 10:23:02AM +0300, Konstantin Knizhnik wrote:\n> Yes, Yuri Kurenkov and Grigory Smolkin did a lot in investigation\n> of this problem.\n> (the irony is that the problem detected by Yuri was caused by\n> another bug in pg_probackup, but we thought that it was related with\n> permissions and came to this issue).\n\nThanks for confirming, Konstantin. 
Let's wait a couple of days to see\nif anybody has objections or comments, and I'll try to commit this\npatch.\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 16:43:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "\nOn 2/25/19 10:38 AM, Konstantin Knizhnik wrote:\n> Hi hackers,\n>\n> Small issue with readdir implementation for Windows.\n> Right now it returns ENOENT in case of any error returned by\n> FindFirstFile.\n> So all places in Postgres where opendir/readdir are used will assume\n> that directory is empty and\n> do nothing without reporting any error.\n> It is not so good if directory is actually not empty but there are not\n> enough permissions for accessing the directory and FindFirstFile\n> returns ERROR_ACCESS_DENIED:\n>\n> struct dirent *\n> readdir(DIR *d)\n> {\n>     WIN32_FIND_DATA fd;\n>\n>     if (d->handle == INVALID_HANDLE_VALUE)\n>     {\n>         d->handle = FindFirstFile(d->dirname, &fd);\n>         if (d->handle == INVALID_HANDLE_VALUE)\n>         {\n>             errno = ENOENT;\n>             return NULL;\n>         }\n>     }\n>\n>\n> Attached please find small patch fixing the problem.\n>\n\nDiagnosis seems correct. 
I wonder if this is responsible for some odd\nthings we've seen over time on Windows.\n\nThis reads a bit oddly:\n\n\n          {\n -            errno = ENOENT;\n +            if (GetLastError() == ERROR_FILE_NOT_FOUND)\n +            {\n +                /* No more files, force errno=0 (unlike mingw) */\n +                errno = 0;\n +                return NULL;\n +            }\n +            _dosmaperr(GetLastError());\n              return NULL;\n          }\n\n\nWhy not something like:\n\n\n if (GetLastError() == ERROR_FILE_NOT_FOUND)\n     errno = 0;\n else\n     _dosmaperr(GetLastError());\n return NULL;\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Mar 2019 12:36:55 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" }, { "msg_contents": "On Fri, Mar 01, 2019 at 04:43:13PM +0900, Michael Paquier wrote:\n> Thanks for confirming, Konstantin. Let's wait a couple of days to see\n> if anybody has objections or comments, and I'll try to commit this\n> patch.\n\nDone and backpatched down to 9.4, with Andrew's suggestion from\nupthread included.\n--\nMichael", "msg_date": "Mon, 4 Mar 2019 09:52:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: readdir is incorrectly implemented at Windows" } ]
[ { "msg_contents": "Greetings,\n\nI've been looking through the startup code for postmaster forks and\nsaw a couple of mechanisms used. Most forks seem to be using\nStartChildProcess with a MyAuxProc enumerator, but then some\n(autovacuum launcher/worker, syslogger, bgworker, archiver) are forked\nthrough their own start functions.\n\nI noticed some implication in the pgsql-hackers archives [1] that\nnon-AuxProcs are as such because they don't need access to shared\nmemory. Is this an accurate explanation?\n\nFor some context, I'm trying to come up with a patch to set the\nprocess identifier (MyAuxProc/am_autovacuumworker/launcher,\nam_archiver, etc.) immediately after forking, so we can provide\nprocess context info to a hook in early startup. I wanted to make sure\nto do things as cleanly as possible.\n\nWith the current startup architecture, we have to touch multiple\nentrypoints to achieve the desired effect. Is there any particular\nreason we couldn't fold all of the startup processes into the\nStartChildProcess code and assign MyAuxProc for processes that don't\ncurrently have one, or is this a non-starter?\n\nThank you for your time.\n\n--\nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n\n[1] https://www.postgresql.org/message-id/20181127.175949.06807946.horiguchi.kyotaro%40lab.ntt.co.jp\n\n", "msg_date": "Mon, 25 Feb 2019 12:58:03 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Auxiliary Processes and MyAuxProc" }, { "msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> I've been looking through the startup code for postmaster forks and\n> saw a couple of mechanisms used. 
Most forks seem to be using\n> StartChildProcess with a MyAuxProc enumerator, but then some\n> (autovacuum launcher/worker, syslogger, bgworker, archiver) are forked\n> through their own start functions.\n\n> I noticed some implication in the pgsql-hackers archives [1] that\n> non-AuxProcs are as such because they don't need access to shared\n> memory. Is this an accurate explanation?\n\nThat was the original idea, but it hasn't stood the test of time very\nwell. In particular, the AuxProcType enum only works for child\nprocesses that there's supposed to be exactly one of, so we haven't\nused it for autovac workers or bgworkers, although those certainly\nhave to be connected to shmem (hm, maybe that's not true for all\ntypes of bgworker, not sure). The autovac launcher is kind of a\nweird special case --- I think it could have been included in AuxProcType,\nbut it wasn't, possibly to minimize the cosmetic difference between it\nand autovac workers.\n\n> For some context, I'm trying to come up with a patch to set the\n> process identifier (MyAuxProc/am_autovacuumworker/launcher,\n> am_archiver, etc.) immediately after forking,\n\nDon't we do that already?\n\n> With the current startup architecture, we have to touch multiple\n> entrypoints to achieve the desired effect. Is there any particular\n> reason we couldn't fold all of the startup processes into the\n> StartChildProcess code and assign MyAuxProc for processes that don't\n> currently have one, or is this a non-starter?\n\nIf memory serves, StartChildProcess already was an attempt to unify\nthe treatment of postmaster children. 
It's possible that another\nround of unification would be productive, but I think you'll find\nthat there are random small differences in requirements that'd\nmake it messy.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 25 Feb 2019 13:29:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Mon, Feb 25, 2019 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> <snip>\n>\n> > For some context, I'm trying to come up with a patch to set the\n> > process identifier (MyAuxProc/am_autovacuumworker/launcher,\n> > am_archiver, etc.) immediately after forking,\n>\n> Don't we do that already?\n\nKind of. The indicators are set in the Main functions after\nInitPostmasterChild and some other setup. In my little bit of digging\nI found InitPostmasterChild to be the best place for a centralized\n\"early start-up hook.\" Is there a better place you can think of\noff-hand?\n\n> <snip>\n>\n> If memory serves, StartChildProcess already was an attempt to unify\n> the treatment of postmaster children. It's possible that another\n> round of unification would be productive, but I think you'll find\n> that there are random small differences in requirements that'd\n> make it messy.\n\nIt kind of seemed like it, but I noticed the small differences in\nrequirements, which made me a bit hesitant. 
I'll go ahead and see what\nI can do and submit the patch for consideration.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n", "msg_date": "Mon, 25 Feb 2019 13:41:59 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Mon, Feb 25, 2019 at 1:41 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> <snip>\n> >\n> > If memory serves, StartChildProcess already was an attempt to unify\n> > the treatment of postmaster children. It's possible that another\n> > round of unification would be productive, but I think you'll find\n> > that there are random small differences in requirements that'd\n> > make it messy.\n>\n> It kind of seemed like it, but I noticed the small differences in\n> requirements, which made me a bit hesitant. I'll go ahead and see what\n> I can do and submit the patch for consideration.\n\nI'm considering changing StartChildProcess to take a struct with data\nfor forking/execing each different process. Each different backend\ntype would build up the struct and then pass it on to\nStartChildProcess, which would handle each separately. This would\nensure that the fork type is set prior to InitPostmasterChild and\nwould provide us with the information necessary to do what we need in\nthe InitPostmasterChild_hook.\n\nAttached is a patch to fork_process.h which shows roughly what I'm\nthinking. 
Does this seem somewhat sane as a first step?\n\n\n\n\n--\nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Mon, 25 Feb 2019 17:25:11 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Mon, Feb 25, 2019 at 5:25 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Mon, Feb 25, 2019 at 1:41 PM Mike Palmiotto\n> <mike.palmiotto@crunchydata.com> wrote:\n> >\n> > <snip>\n> > >\n> > > If memory serves, StartChildProcess already was an attempt to unify\n> > > the treatment of postmaster children. It's possible that another\n> > > round of unification would be productive, but I think you'll find\n> > > that there are random small differences in requirements that'd\n> > > make it messy.\n> >\n> > It kind of seemed like it, but I noticed the small differences in\n> > requirements, which made me a bit hesitant. I'll go ahead and see what\n> > I can do and submit the patch for consideration.\n>\n> I'm considering changing StartChildProcess to take a struct with data\n> for forking/execing each different process. Each different backend\n> type would build up the struct and then pass it on to\n> StartChildProcess, which would handle each separately. This would\n> ensure that the fork type is set prior to InitPostmasterChild and\n> would provide us with the information necessary to do what we need in\n> the InitPostmasterChild_hook.\n>\n> Attached is a patch to fork_process.h which shows roughly what I'm\n> thinking. Does this seem somewhat sane as a first step?\n>\n\nAll,\n\nMike and I have written two patches to solve the issues\ndiscussed in this thread.\n\nThe first patch centralizes the startup of workers and extends\nworker identification that was introduced by AuxProcType. 
The worker\nid can then be leveraged by extensions for identification of each\nprocess.\n\nThe second patch adds a new hook that allows extensions to modify a worker\nprocess' metadata in backends.\n\nThese patches should make future worker implementation more\nstraightforward as there is now one function to call that sets up\neach worker. There is also code cleanup and removal of startup\nredundancies.\n\nPlease let me know your thoughts. I look forward to your feedback.\n\nThanks,\n\nYuli", "msg_date": "Wed, 14 Aug 2019 11:01:14 -0400", "msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "0002 seems way too large (and it doesn't currently apply). Is there\nsomething we can do to make it more manageable?\n\nI think it would be better to put your 0001 in second place rather than\nfirst, since your other patch doesn't use it AFAICS and it adds\nfunctionality that has not yet received approval [or even been\ndiscussed], while the other is necessary refactoring.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Sep 2019 10:49:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Sep 26, 2019 at 9:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> 0002 seems way too large (and it doesn't currently apply). Is there\n> something we can do to make it more manageable?\n\nInitially we were thinking of submitting one patch for the\ncentralization work and then separate patches per backend type. We\nopted not to go that route, mainly because of the number of resulting\npatches (there were somewhere around 13 total, as I remember). 
If it\nmakes sense, we can go ahead and split the patches up in that fashion\nafter rebasing.\n\n>\n> I think it would be better to put your 0001 in second place rather than\n> first, since your other patch doesn't use it AFAICS and it adds\n> functionality that has not yet received approval [or even been\n> discussed], while the other is necessary refactoring.\n\nSounds good.\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Thu, 26 Sep 2019 10:11:04 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2019-Sep-26, Mike Palmiotto wrote:\n\n> On Thu, Sep 26, 2019 at 9:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > 0002 seems way too large (and it doesn't currently apply). Is there\n> > something we can do to make it more manageable?\n> \n> Initially we were thinking of submitting one patch for the\n> centralization work and then separate patches per backend type. We\n> opted not to go that route, mainly because of the number of resulting\n> patches (there were somewhere around 13 total, as I remember). If it\n> makes sense, we can go ahead and split the patches up in that fashion\n> after rebasing.\n\nWell, I think it would be easier to manage as split patches, yeah.\nI think it'd be infrastructure that needs to be carefully reviewed,\nwhile the other ones are mostly boilerplate. 
If I were the committer\nfor it, I would push that initial patch first immediately followed by\nconversion of some process that's heavily exercised in buildfarm, wait\nuntil lack of trouble is evident, followed by a trickle of pushes to\nadapt the other processes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Sep 2019 11:56:21 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Sep 26, 2019 at 10:56 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-26, Mike Palmiotto wrote:\n>\n> > On Thu, Sep 26, 2019 at 9:49 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > >\n> > > 0002 seems way too large (and it doesn't currently apply). Is there\n> > > something we can do to make it more manageable?\n> >\n> > Initially we were thinking of submitting one patch for the\n> > centralization work and then separate patches per backend type. We\n> > opted not to go that route, mainly because of the number of resulting\n> > patches (there were somewhere around 13 total, as I remember). If it\n> > makes sense, we can go ahead and split the patches up in that fashion\n> > after rebasing.\n>\n> Well, I think it would be easier to manage as split patches, yeah.\n> I think it'd be infrastructure that needs to be carefully reviewed,\n> while the other ones are mostly boilerplate. If I were the committer\n> for it, I would push that initial patch first immediately followed by\n> conversion of some process that's heavily exercised in buildfarm, wait\n> until lack of trouble is evident, followed by a trickle of pushes to\n> adapt the other processes.\n\nThanks for the feedback! I've rebased and tested on my F30 box with\nand without EXEC_BACKEND. 
Just working on splitting out the patches\nnow and will post the new patchset as soon as that's done (hopefully\nsometime tomorrow).\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Thu, 26 Sep 2019 18:03:24 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Sep 26, 2019 at 6:03 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Thu, Sep 26, 2019 at 10:56 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> <snip>\n> >\n> > Well, I think it would be easier to manage as split patches, yeah.\n> > I think it'd be infrastructure that needs to be carefully reviewed,\n> > while the other ones are mostly boilerplate. If I were the committer\n> > for it, I would push that initial patch first immediately followed by\n> > conversion of some process that's heavily exercised in buildfarm, wait\n> > until lack of trouble is evident, followed by a trickle of pushes to\n> > adapt the other processes.\n>\n> Thanks for the feedback! I've rebased and tested on my F30 box with\n> and without EXEC_BACKEND. Just working on splitting out the patches\n> now and will post the new patchset as soon as that's done (hopefully\n> sometime tomorrow).\n\nAttached is the reworked and rebased patch set. I put the hook on top\nand a separate commit for each process type. Note that avworker and\navlauncher were intentionally left together. Let me know if you think\nthose should be split out as well.\n\nTested again with and without EXEC_BACKEND. 
Note that you'll need to\nset randomize_va_space to 0 to disable ASLR in order to avoid\noccasional failure in the EXEC_BACKEND case.\n\nPlease let me know if anything else jumps out to you.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Mon, 30 Sep 2019 15:13:18 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "Hi,\n\nOn 2019-09-30 15:13:18 -0400, Mike Palmiotto wrote:\n> Attached is the reworked and rebased patch set. I put the hook on top\n> and a separate commit for each process type. Note that avworker and\n> avlauncher were intentionally left together. Let me know if you think\n> those should be split out as well.\n\nI've not looked at this before, nor the thread. Just jumping here\nbecause of the reference to this patch you made in another thread...\n\nIn short, I think this is far from ready, nor am I sure this is going in\nthe right direction. There's a bit more detail about where I think this\nought to go interspersed below (in particular search for PgSubProcess,\nEXEC_BACKEND).\n\n\n\n> From 205ea86dde898f7edac327d46b2b43b04721aadc Mon Sep 17 00:00:00 2001\n> From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> Date: Mon, 30 Sep 2019 14:42:53 -0400\n> Subject: [PATCH 7/8] Add backends to process centralization\n\n.oO(out of order attached patches, how fun). 7, 8, 6, 5, 4, 2, 3, 1...\n\n\n\n> From 2a3a35f2e80cb2badcb0efbce1bad2484e364b7b Mon Sep 17 00:00:00 2001\n> From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> Date: Fri, 27 Sep 2019 12:28:19 -0400\n> Subject: [PATCH 1/8] Add ForkProcType infrastructure\n\nA better explanation would be nice. It's hard to review non-trivial\npatches that have little explanation as to what they're\nachieving. 
Especially when there's some unrelated seeming changes.\n\n\n> diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c\n> index 9238fbe98d..9f3dad1c6d 100644\n> --- a/src/backend/bootstrap/bootstrap.c\n> +++ b/src/backend/bootstrap/bootstrap.c\n> @@ -70,6 +70,7 @@ static void cleanup(void);\n> */\n>\n> AuxProcType MyAuxProcType = NotAnAuxProcess;\t/* declared in miscadmin.h */\n> +ForkProcType MyForkProcType = NoForkProcess;\t/* declared in fork_process.h */\n>\n> Relation\tboot_reldesc;\t\t/* current relation descriptor\n> */\n\nIMO, before this stuff gets massaged meaningfully, we ought to move code\nlike this (including AuxiliaryProcessMain etc) somewhere where it makes\nsense. If we're going to rejigger things, it ought to be towards a more\nunderstandable architecture, rather than keeping weird quirks around.\n\nI'm inclined to think that this better also include properly\ncentralizing a good portion of the EXEC_BACKEND code. Building\ninfrastructure that's supposed to be long-lived with that being left as\nis just seems like recipe for yet another odd framework.\n\n\n> @@ -538,11 +539,11 @@ static void ShmemBackendArrayAdd(Backend *bn);\n> static void ShmemBackendArrayRemove(Backend *bn);\n> #endif\t\t\t\t\t\t\t/* EXEC_BACKEND */\n>\n> -#define StartupDataBase()\t\tStartChildProcess(StartupProcess)\n> -#define StartBackgroundWriter() StartChildProcess(BgWriterProcess)\n> -#define StartCheckpointer()\t\tStartChildProcess(CheckpointerProcess)\n> -#define StartWalWriter()\t\tStartChildProcess(WalWriterProcess)\n> -#define StartWalReceiver()\t\tStartChildProcess(WalReceiverProcess)\n> +#define StartupDataBase()\t\tStartChildProcess(StartupFork)\n> +#define StartBackgroundWriter() StartChildProcess(BgWriterFork)\n> +#define StartCheckpointer()\t\tStartChildProcess(CheckpointerFork)\n> +#define StartWalWriter()\t\tStartChildProcess(WalWriterFork)\n> +#define StartWalReceiver()\t\tStartChildProcess(WalReceiverFork)\n\nFWIW, I'd just rip 
this out, it adds exactly nothing but one more place\nto change.\n\n\n> +/*\n> + * PrepAuxProcessFork\n> + *\n> + * Prepare a ForkProcType struct for the auxiliary process specified by\n\nForkProcData, not ForkProcType\n\n\n> + * AuxProcType. This does all prep related to av parameters and error messages.\n> + */\n> +static void\n> +PrepAuxProcessFork(ForkProcData *aux_fork)\n> +{\n> +\tint\t\t\tac = 0;\n> +\n> +\t/*\n> +\t * Set up command-line arguments for subprocess\n> +\t */\n> +\taux_fork->av[ac++] = pstrdup(\"postgres\");\n\nThere's no documentation as to what 'av' is actually supposed to mean. I\nassume auxvector or such, but if so that's neither obvious, nor\nnecessarily good.\n\n\n> +#ifdef EXEC_BACKEND\n> +\taux_fork->av[ac++] = pstrdup(\"--forkboot\");\n> +\taux_fork->av[ac++] = NULL;\t\t\t/* filled in by postmaster_forkexec */\n> +#endif\n\nWhat's the point of pstrdup()ing constant strings here? Afaict you're\nnot freeing them ever?\n\n\n> +\taux_fork->av[ac++] = psprintf(\"-x%d\", MyForkProcType);\n> +\n> +\taux_fork->av[ac] = NULL;\n> +\tAssert(ac < lengthof(*aux_fork->av));\n\nUnless I miss something, the lengthof here doesn't do anything\nuseful. 
ForkProcData->av is defined as:\n\n> +typedef struct ForkProcData\n> +{\n> + char **av;\n\nwhich I think means you're computing sizeof(char*)/sizeof(char)?\n\n\n\n> +\taux_fork->ac = ac;\n> +\tswitch (MyForkProcType)\n> +\t{\n> +\t\tcase StartupProcess:\n> +\t\t\taux_fork->type_desc = pstrdup(\"startup\");\n> +\t\t\tbreak;\n> +\t\tcase BgWriterProcess:\n> +\t\t\taux_fork->type_desc = pstrdup(\"background writer\");\n> +\t\t\tbreak;\n> +\t\tcase CheckpointerProcess:\n> +\t\t\taux_fork->type_desc = pstrdup(\"checkpointer\");\n> +\t\t\tbreak;\n> +\t\tcase WalWriterProcess:\n> +\t\t\taux_fork->type_desc = pstrdup(\"WAL writer\");\n> +\t\t\tbreak;\n> +\t\tcase WalReceiverProcess:\n> +\t\t\taux_fork->type_desc = pstrdup(\"WAL receiver\");\n> +\t\t\tbreak;\n> +\t\tdefault:\n> +\t\t\taux_fork->type_desc = pstrdup(\"child\");\n> +\t\t\tbreak;\n> +\t}\n> +\n> +\taux_fork->child_main = AuxiliaryProcessMain;\n\nI'm really not in love in having details like the descriptors of\nprocesses, the types of processes, the mapping from MyForkProcType to\nAuxProcType in multiple places. I feel this is adding even more\nconfusion to this, rather than reducing it.\n\nI feel like we should define the properties of the different types of\nprocesses in *one* place, rather multiple. Define one array of\nstructs that contains information about the different types of\nprocesses, rather than distributing that knowledge all over.\n\n\n> @@ -4477,11 +4530,11 @@ BackendRun(Port *port)\n> pid_t\n> postmaster_forkexec(int argc, char *argv[])\n> {\n> -\tPort\t\tport;\n> -\n> \t/* This entry point passes dummy values for the Port variables */\n> -\tmemset(&port, 0, sizeof(port));\n> -\treturn internal_forkexec(argc, argv, &port);\n> +\tif (!ConnProcPort)\n> +\t\tConnProcPort = palloc0(sizeof(*ConnProcPort));\n> +\n> +\treturn internal_forkexec(argc, argv);\n> }\n\nWhat's the point of creating a dummy ConnProcPort? And when is this\nbeing pfreed? 
I mean previously this at least used a stack variable that\nwas automatically being released by the time postmaster_forkexec\nreturned, now it sure looks like a permanent leak to me.\n\nAlso, why is it a good idea to rely *more* on data being implicitly\npassed by stuffing things into global variables? That strikes me as\n\n\n> \t\t/* in parent, fork failed */\n> -\t\tint\t\t\tsave_errno = errno;\n> +\t\tint save_errno = errno;\n>\n> -\t\terrno = save_errno;\n> -\t\tswitch (type)\n> -\t\t{\n> -\t\t\tcase StartupProcess:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork startup process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t\tcase BgWriterProcess:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork background writer process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t\tcase CheckpointerProcess:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork checkpointer process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t\tcase WalWriterProcess:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork WAL writer process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t\tcase WalReceiverProcess:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork WAL receiver process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t\tdefault:\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errmsg(\"could not fork process: %m\")));\n> -\t\t\t\tbreak;\n> -\t\t}\n> +\t\terrno = save_errno;\n> +\t\tereport(LOG,\n> +\t\t\t(errmsg(\"could not fork %s process: %m\", fork_data->type_desc)));\n\nPreviously the process type was translatable, now it is not\nanymore. 
You'd probably have to solve that somehow, perhaps by\ntranslating the type_desc once, when setting it.\n\n\n> @@ -1113,7 +1113,7 @@ process_startup_options(Port *port, bool am_superuser)\n> \t\tav = (char **) palloc(maxac * sizeof(char *));\n> \t\tac = 0;\n>\n> -\t\tav[ac++] = \"postgres\";\n> +\t\tav[ac++] = pstrdup(\"postgres\");\n>\n> \t\tpg_split_opts(av, &ac, port->cmdline_options);\n\nAgain, what's with all these conversions to dynamic memory allocations?\n\n\n\n> /* Flags to tell if we are in an autovacuum process */\n> -static bool am_autovacuum_launcher = false;\n> -static bool am_autovacuum_worker = false;\n> +bool am_autovacuum_launcher = false;\n> +bool am_autovacuum_worker = false;\n\nI'm against exposing a multitude of am_* variables for each process\ntype, especially globally. If we're building proper infrastructure\nhere, why do we need multiple variables, rather than exactly one\nindicating the process type?\n\n\n\n> @@ -347,88 +343,53 @@ static void avl_sigusr2_handler(SIGNAL_ARGS);\n> static void avl_sigterm_handler(SIGNAL_ARGS);\n> static void autovac_refresh_stats(void);\n>\n> -\n> -\n> /********************************************************************\n> *\t\t\t\t\t AUTOVACUUM LAUNCHER CODE\n> ********************************************************************/\n\nSpurious whitespace changes (in a few other places too).\n\n\n> +void\n> +PrepAutoVacProcessFork(ForkProcData *autovac_fork)\n> {\n\nI'm really not in love with keeping \"fork\" in all these, given that\nwe're not actually going to fork in some cases. 
Why do we not just call\nit subprocess and get rid of the \"fork\" and \"forkexec\" suffixes?\n\n\n\n\n> +/*\n> + * shmemSetup\n> + *\n> + * Helper function for a child to set up shmem before\n> + * executing.\n> + *\n> + * aux_process - set to true if an auxiliary process.\n> + */\n> +static void\n> +shmemSetup(bool aux_process)\n> +{\n> +\t/* Restore basic shared memory pointers */\n> +\tInitShmemAccess(UsedShmemSegAddr);\n> +\n> +\t/* Need a PGPROC to run CreateSharedMemoryAndSemaphores */\n> +\tif (aux_process)\n> +\t\tInitAuxiliaryProcess();\n> +\telse\n> +\t\tInitProcess();\n> +\n> +\t/* Attach process to shared data structures */\n> +\tCreateSharedMemoryAndSemaphores();\n> +}\n\nNot quite convinced this is a useful abstraction. I'm inclined to think\nthat this again should use something more like one array of\nper-subprocess properties.\n\n\nImagine if we had an array like\n\ntypedef struct PgSubProcess\n{\n const char *name;\n const char *translated_name;\n bool\t\tneeds_shmem;\n bool\t\tneeds_aux_proc;\n bool\t\tneeds_full_proc;\n SubProcessEntryPoint entrypoint;\n} PgSubProcess;\n\nPgSubProcess process_types[] = {\n {.name = \"startup process\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = StartupProcessMain},\n {.name = \"background writer\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = BackgroundWriterMain},\n {.name = \"autovacuum launcher\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = AutoVacLauncherMain},\n {.name = \"backend\", .needs_shmem = true, .needs_full_proc = true, , .entrypoint = PostgresMain},\n\n\n> From efc6f0531b71847c6500062b5a0fe43331a6ea7e Mon Sep 17 00:00:00 2001\n> From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> Date: Fri, 27 Sep 2019 19:13:53 -0400\n> Subject: [PATCH 3/8] Add centralized stat collector\n>\n> ---\n> src/backend/postmaster/pgstat.c | 88 ++++++---------------------\n> src/backend/postmaster/postmaster.c | 5 +-\n> src/include/pgstat.h | 4 +-\n> 
src/include/postmaster/fork_process.h | 1 +\n> 4 files changed, 26 insertions(+), 72 deletions(-)\n>\n> diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n> index 011076c3e3..73d953aedf 100644\n> --- a/src/backend/postmaster/pgstat.c\n> +++ b/src/backend/postmaster/pgstat.c\n> @@ -280,10 +280,6 @@ static instr_time total_func_time;\n> * Local function forward declarations\n> * ----------\n> */\n> -#ifdef EXEC_BACKEND\n> -static pid_t pgstat_forkexec(void);\n> -#endif\n> -\n> NON_EXEC_STATIC void PgstatCollectorMain(int argc, char *argv[]) pg_attribute_noreturn();\n> static void pgstat_exit(SIGNAL_ARGS);\n> static void pgstat_beshutdown_hook(int code, Datum arg);\n> @@ -689,53 +685,26 @@ pgstat_reset_all(void)\n> \tpgstat_reset_remove_files(PGSTAT_STAT_PERMANENT_DIRECTORY);\n> }\n>\n> -#ifdef EXEC_BACKEND\n>\n> /*\n> - * pgstat_forkexec() -\n> - *\n> - * Format up the arglist for, then fork and exec, statistics collector process\n> - */\n> -static pid_t\n> -pgstat_forkexec(void)\n> -{\n> -\tchar\t *av[10];\n> -\tint\t\t\tac = 0;\n> -\n> -\tav[ac++] = \"postgres\";\n> -\tav[ac++] = \"--forkcol\";\n> -\tav[ac++] = NULL;\t\t\t/* filled in by postmaster_forkexec */\n> -\n> -\tav[ac] = NULL;\n> -\tAssert(ac < lengthof(av));\n> -\n> -\treturn postmaster_forkexec(ac, av);\n> -}\n> -#endif\t\t\t\t\t\t\t/* EXEC_BACKEND */\n> -\n> -\n> -/*\n> - * pgstat_start() -\n> + * PrepPgstatCollectorFork\n> *\n> *\tCalled from postmaster at startup or after an existing collector\n> *\tdied. 
Attempt to fire up a fresh statistics collector.\n> *\n> - *\tReturns PID of child process, or 0 if fail.\n> - *\n> - *\tNote: if fail, we will be called again from the postmaster main loop.\n> */\n> -int\n> -pgstat_start(void)\n> +void\n> +PrepPgstatCollectorFork(ForkProcData *pgstat_fork)\n> {\n> +\tint\t\t\t\tac = 0;\n> \ttime_t\t\tcurtime;\n> -\tpid_t\t\tpgStatPid;\n>\n> \t/*\n> \t * Check that the socket is there, else pgstat_init failed and we can do\n> \t * nothing useful.\n> \t */\n> \tif (pgStatSock == PGINVALID_SOCKET)\n> -\t\treturn 0;\n> +\t\treturn;\n>\n> \t/*\n> \t * Do nothing if too soon since last collector start. This is a safety\n> @@ -746,45 +715,20 @@ pgstat_start(void)\n> \tcurtime = time(NULL);\n> \tif ((unsigned int) (curtime - last_pgstat_start_time) <\n> \t\t(unsigned int) PGSTAT_RESTART_INTERVAL)\n> -\t\treturn 0;\n> +\t\treturn;\n> \tlast_pgstat_start_time = curtime;\n>\n> -\t/*\n> -\t * Okay, fork off the collector.\n> -\t */\n> #ifdef EXEC_BACKEND\n> -\tswitch ((pgStatPid = pgstat_forkexec()))\n> -#else\n> -\tswitch ((pgStatPid = fork_process()))\n> -#endif\n> -\t{\n> -\t\tcase -1:\n> -\t\t\tereport(LOG,\n> -\t\t\t\t\t(errmsg(\"could not fork statistics collector: %m\")));\n> -\t\t\treturn 0;\n> -\n> -#ifndef EXEC_BACKEND\n> -\t\tcase 0:\n> -\t\t\t/* in postmaster child ... 
*/\n> -\t\t\tInitPostmasterChild();\n> -\n> -\t\t\t/* Close the postmaster's sockets */\n> -\t\t\tClosePostmasterPorts(false);\n> -\n> -\t\t\t/* Drop our connection to postmaster's shared memory, as well */\n> -\t\t\tdsm_detach_all();\n> -\t\t\tPGSharedMemoryDetach();\n> -\n> -\t\t\tPgstatCollectorMain(0, NULL);\n> -\t\t\tbreak;\n> +\tpgstat_fork->av[ac++] = pstrdup(\"postgres\");\n> +\tpgstat_fork->av[ac++] = pstrdup(\"--forkcol\");\n> +\tpgstat_fork->av[ac++] = NULL;\t\t\t/* filled in by postmaster_forkexec */\n> #endif\n>\n> -\t\tdefault:\n> -\t\t\treturn (int) pgStatPid;\n> -\t}\n> +\tpgstat_fork->ac = ac;\n> +\tAssert(pgstat_fork->ac < lengthof(*pgstat_fork->av));\n>\n> -\t/* shouldn't get here */\n> -\treturn 0;\n> +\tpgstat_fork->type_desc = pstrdup(\"statistics collector\");\n> +\tpgstat_fork->child_main = PgstatCollectorMain;\n> }\n>\n> void\n> @@ -4425,12 +4369,16 @@ pgstat_send_bgwriter(void)\n> * ----------\n> */\n> NON_EXEC_STATIC void\n> -PgstatCollectorMain(int argc, char *argv[])\n> +PgstatCollectorMain(pg_attribute_unused() int argc, pg_attribute_unused() char *argv[])\n> {\n> \tint\t\t\tlen;\n> \tPgStat_Msg\tmsg;\n> \tint\t\t\twr;\n>\n> +\t/* Drop our connection to postmaster's shared memory, as well */\n> +\tdsm_detach_all();\n> +\tPGSharedMemoryDetach();\n> +\n> \t/*\n> \t * Ignore all signals usually bound to some action in the postmaster,\n> \t * except SIGHUP and SIGQUIT. 
Note we don't need a SIGUSR1 handler to\n> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> index 35cd1479b9..37a36387a3 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -549,6 +549,7 @@ static void ShmemBackendArrayRemove(Backend *bn);\n> #define StartWalReceiver()\t\tStartChildProcess(WalReceiverFork)\n> #define StartAutoVacLauncher()\t\tStartChildProcess(AutoVacLauncherFork)\n> #define StartAutoVacWorker()\t\tStartChildProcess(AutoVacWorkerFork)\n> +#define pgstat_start()\t\t\tStartChildProcess(PgstatCollectorFork)\n>\n> /* Macros to check exit status of a child process */\n> #define EXIT_STATUS_0(st) ((st) == 0)\n> @@ -5459,7 +5460,9 @@ StartChildProcess(ForkProcType type)\n> \t\tcase AutoVacWorkerFork:\n> \t\t\tPrepAutoVacProcessFork(fork_data);\n> \t\t\tbreak;\n> -\n> +\t\tcase PgstatCollectorFork:\n> +\t\t\tPrepPgstatCollectorFork(fork_data);\n> +\t\t\tbreak;\n> \t\tdefault:\n> \t\t\tbreak;\n> \t}\n> diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> index fe076d823d..00e95caa60 100644\n> --- a/src/include/pgstat.h\n> +++ b/src/include/pgstat.h\n> @@ -15,6 +15,7 @@\n> #include \"libpq/pqcomm.h\"\n> #include \"port/atomics.h\"\n> #include \"portability/instr_time.h\"\n> +#include \"postmaster/fork_process.h\"\n> #include \"postmaster/pgarch.h\"\n> #include \"storage/proc.h\"\n> #include \"utils/hsearch.h\"\n> @@ -1235,10 +1236,11 @@ extern Size BackendStatusShmemSize(void);\n> extern void CreateSharedBackendStatus(void);\n>\n> extern void pgstat_init(void);\n> -extern int\tpgstat_start(void);\n> extern void pgstat_reset_all(void);\n> extern void allow_immediate_pgstat_restart(void);\n>\n> +extern void PrepPgstatCollectorFork(ForkProcData *pgstat_fork);\n> +\n> #ifdef EXEC_BACKEND\n> extern void PgstatCollectorMain(int argc, char *argv[]) pg_attribute_noreturn();\n> #endif\n> diff --git a/src/include/postmaster/fork_process.h 
b/src/include/postmaster/fork_process.h\n> index 1f319fc98f..16ca8be968 100644\n> --- a/src/include/postmaster/fork_process.h\n> +++ b/src/include/postmaster/fork_process.h\n> @@ -26,6 +26,7 @@ typedef enum\n> WalReceiverFork,\t/* end of Auxiliary Process Forks */\n> AutoVacLauncherFork,\n> AutoVacWorkerFork,\n> + PgstatCollectorFork,\n>\n> NUMFORKPROCTYPES\t\t\t/* Must be last! */\n> } ForkProcType;\n\n\n\n\n> ---\n> src/backend/postmaster/postmaster.c | 309 +++++++++++++-------------\n> src/include/postmaster/fork_process.h | 5 +\n> src/include/postmaster/postmaster.h | 1 -\n> 3 files changed, 160 insertions(+), 155 deletions(-)\n>\n> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> index 8f862fcd64..b55cc4556d 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -144,6 +144,7 @@ static Backend *ShmemBackendArray;\n>\n> BackgroundWorker *MyBgworkerEntry = NULL;\n> RegisteredBgWorker *CurrentBgWorker = NULL;\n> +static Backend\t *MyBackend;\n> static int\t\t\tchild_errno;\n\nThis is another global variable that strikes me as a bad idea. Whereas\nMyBgworkerEntry is only set in the bgworker itself, you appear to set\nMyBackend *in postmaster*:\n\n> +/*\n> + * PrepBackendFork\n> + *\n> + * Prepare a ForkProcType struct for starting a Backend.\n> + * This does all prep related to av parameters and error messages.\n> +*/\n> +static void\n> +PrepBackendFork(ForkProcData *backend_fork)\n> +{\n> +\tint\t\t\tac = 0;\n> +\n> +\t/*\n> +\t * Create backend data structure. Better before the fork() so we can\n> +\t * handle failure cleanly.\n> +\t */\n> +\tMyBackend = (Backend *) malloc(sizeof(Backend));\n> +\tif (!MyBackend)\n> +\t{\n> +\t\tereport(LOG,\n> +\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> +\t\t\t\t errmsg(\"out of memory\")));\n> +\t}\n\nThat's totally not ok, we'll constantly overwrite the variable set by\nanother process, and otherwise have now obsolete values in there. 
And\neven worse, you leave it dangling in some cases:\n\n> +\t/*\n> +\t * Compute the cancel key that will be assigned to this backend. The\n> +\t * backend will have its own copy in the forked-off process' value of\n> +\t * MyCancelKey, so that it can transmit the key to the frontend.\n> +\t */\n> +\tif (!RandomCancelKey(&MyCancelKey))\n> +\t{\n> +\t\tfree(MyBackend);\n> +\t\tereport(LOG,\n> +\t\t\t\t(errcode(ERRCODE_INTERNAL_ERROR),\n> +\t\t\t\t errmsg(\"could not generate random cancel key\")));\n> +\t}\n\n\nNor do I see where you're actually freeing that memory in postmaster, in\nthe successful case. Previously it was freed in postmaster in the\nsuccess case\n\n> -#ifdef EXEC_BACKEND\n> -\tpid = backend_forkexec(port);\n> -#else\t\t\t\t\t\t\t/* !EXEC_BACKEND */\n> -\tpid = fork_process();\n> -\tif (pid == 0)\t\t\t\t/* child */\n> -\t{\n> -\t\tfree(bn);\n> -\n\nbut now it appears it's only freed in the failure case:\n\n> +/*\n> + *\t BackendFailFork\n> + *\n> + *\t Backend cleanup in case a failure occurs forking a new Backend.\n> + */\n> +static void BackendFailFork(pg_attribute_unused() int argc, pg_attribute_unused() char *argv[])\n> +{\n> +\tif (!MyBackend->dead_end)\n> +\t\t(void) ReleasePostmasterChildSlot(MyBackend->child_slot);\n> +\tfree(MyBackend);\n> +\n> +\treport_fork_failure_to_client(MyProcPort, child_errno);\n> +}\n\n\n\n> From 718c6598e9e02e1ce9ade99afb3c8948c67ef5ae Mon Sep 17 00:00:00 2001\n> From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> Date: Tue, 19 Feb 2019 15:29:33 +0000\n> Subject: [PATCH 8/8] Add a hook to allow extensions to set worker metadata\n>\n> This patch implements a hook that allows extensions to modify a worker\n> process' metadata in backends.\n>\n> Additional work done by Yuli Khodorkovskiy <yuli@crunchydata.com>\n> Original discussion here: https://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO%3DhtC6LnA6aW4r-%2Bjq%3D3Q5RAoFQgW8EtA%40mail.gmail.com\n> ---\n> src/backend/postmaster/fork_process.c | 11 
+++++++++++\n> src/include/postmaster/fork_process.h | 4 ++++\n> 2 files changed, 15 insertions(+)\n>\n> diff --git a/src/backend/postmaster/fork_process.c b/src/backend/postmaster/fork_process.c\n> index a9138f589e..964c04d846 100644\n> --- a/src/backend/postmaster/fork_process.c\n> +++ b/src/backend/postmaster/fork_process.c\n> @@ -21,6 +21,14 @@\n> #include <openssl/rand.h>\n> #endif\n>\n> +/*\n> + * This hook allows plugins to get control of the child process following\n> + * a successful fork_process(). It can be used to perform some action in\n> + * extensions to set metadata in backends (including special backends) upon\n> + * setting of the session user.\n> + */\n> +ForkProcess_post_hook_type ForkProcess_post_hook = NULL;\n> +\n> #ifndef WIN32\n> /*\n> * Wrapper for fork(). Return values are the same as those for fork():\n> @@ -113,6 +121,9 @@ fork_process(void)\n> #ifdef USE_OPENSSL\n> \t\tRAND_cleanup();\n> #endif\n> +\n> +\t\tif (ForkProcess_post_hook)\n> +\t\t\t(*ForkProcess_post_hook) ();\n> \t}\n\n\n> \treturn result;\n> diff --git a/src/include/postmaster/fork_process.h b/src/include/postmaster/fork_process.h\n> index 631a2113b5..d756fcead7 100644\n> --- a/src/include/postmaster/fork_process.h\n> +++ b/src/include/postmaster/fork_process.h\n> @@ -58,4 +58,8 @@ extern pid_t fork_process(void);\n> extern pid_t postmaster_forkexec(ForkProcData *fork_data);\n> #endif\n>\n> +/* Hook for plugins to get control after a successful fork_process() */\n> +typedef void (*ForkProcess_post_hook_type) ();\n> +extern PGDLLIMPORT ForkProcess_post_hook_type ForkProcess_post_hook;\n> +\n> #endif\t\t\t\t\t\t\t/* FORK_PROCESS_H */\n> --\n> 2.23.0\n>\n\nWhy do we want libraries to allow to hook into processes like the\nstartup process etc?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Oct 2019 10:31:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": 
"On Thu, Oct 3, 2019 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-09-30 15:13:18 -0400, Mike Palmiotto wrote:\n> > Attached is the reworked and rebased patch set. I put the hook on top\n> > and a separate commit for each process type. Note that avworker and\n> > avlauncher were intentionally left together. Let me know if you think\n> > those should be split out as well.\n>\n> I've not looked at this before, nor the thread. Just jumping here\n> because of the reference to this patch you made in another thread...\n>\n> In short, I think this is far from ready, nor am I sure this is going in\n> the right direction. There's a bit more detail about where I think this\n> ought to go interspersed below (in particular search for PgSubProcess,\n> EXEC_BACKEND).\n\nGreat, thanks for the review!\n\n>\n>\n>\n> > From 205ea86dde898f7edac327d46b2b43b04721aadc Mon Sep 17 00:00:00 2001\n> > From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> > Date: Mon, 30 Sep 2019 14:42:53 -0400\n> > Subject: [PATCH 7/8] Add backends to process centralization\n>\n> .oO(out of order attached patches, how fun). 7, 8, 6, 5, 4, 2, 3, 1...\n\nYeah, hm. Not really sure why/how that happened.\n\n>\n>\n>\n> > From 2a3a35f2e80cb2badcb0efbce1bad2484e364b7b Mon Sep 17 00:00:00 2001\n> > From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> > Date: Fri, 27 Sep 2019 12:28:19 -0400\n> > Subject: [PATCH 1/8] Add ForkProcType infrastructure\n>\n> A better explanation would be nice. It's hard to review non-trivial\n> patches that have little explanation as to what they're\n> achieving. Especially when there's some unrelated seeming changes.\n\nI should have maintained the initial patch description from the last\ncut -- there is a lot more detail there. I will pull that back in\nafter addressing the other concerns. 
:)\n\n>\n>\n> > diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c\n> > index 9238fbe98d..9f3dad1c6d 100644\n> > --- a/src/backend/bootstrap/bootstrap.c\n> > +++ b/src/backend/bootstrap/bootstrap.c\n> > @@ -70,6 +70,7 @@ static void cleanup(void);\n> > */\n> >\n> > AuxProcType MyAuxProcType = NotAnAuxProcess; /* declared in miscadmin.h */\n> > +ForkProcType MyForkProcType = NoForkProcess; /* declared in fork_process.h */\n> >\n> > Relation boot_reldesc; /* current relation descriptor\n> > */\n>\n> IMO, before this stuff gets massaged meaningfully, we ought to move code\n> like this (including AuxiliaryProcessMain etc) somewhere where it makes\n> sense. If we're going to rejigger things, it ought to be towards a more\n> understandable architecture, rather than keeping weird quirks around.\n\nSounds good. I'll address this in the next round.\n\n>\n> I'm inclined to think that this better also include properly\n> centralizing a good portion of the EXEC_BACKEND code. 
Building\n> infrastructure that's supposed to be long-lived with that being left as\n> is just seems like recipe for yet another odd framework.\n\n+1\n\n>\n>\n> > @@ -538,11 +539,11 @@ static void ShmemBackendArrayAdd(Backend *bn);\n> > static void ShmemBackendArrayRemove(Backend *bn);\n> > #endif /* EXEC_BACKEND */\n> >\n> > -#define StartupDataBase() StartChildProcess(StartupProcess)\n> > -#define StartBackgroundWriter() StartChildProcess(BgWriterProcess)\n> > -#define StartCheckpointer() StartChildProcess(CheckpointerProcess)\n> > -#define StartWalWriter() StartChildProcess(WalWriterProcess)\n> > -#define StartWalReceiver() StartChildProcess(WalReceiverProcess)\n> > +#define StartupDataBase() StartChildProcess(StartupFork)\n> > +#define StartBackgroundWriter() StartChildProcess(BgWriterFork)\n> > +#define StartCheckpointer() StartChildProcess(CheckpointerFork)\n> > +#define StartWalWriter() StartChildProcess(WalWriterFork)\n> > +#define StartWalReceiver() StartChildProcess(WalReceiverFork)\n>\n> FWIW, I'd just rip this out, it adds exactly nothing but one more place\n> to change.\n\n+1\n\n>\n>\n> > +/*\n> > + * PrepAuxProcessFork\n> > + *\n> > + * Prepare a ForkProcType struct for the auxiliary process specified by\n>\n> ForkProcData, not ForkProcType\n>\n>\n> > + * AuxProcType. This does all prep related to av parameters and error messages.\n> > + */\n> > +static void\n> > +PrepAuxProcessFork(ForkProcData *aux_fork)\n> > +{\n> > + int ac = 0;\n> > +\n> > + /*\n> > + * Set up command-line arguments for subprocess\n> > + */\n> > + aux_fork->av[ac++] = pstrdup(\"postgres\");\n>\n> There's no documentation as to what 'av' is actually supposed to mean. 
I\n> assume auxvector or such, but if so that's neither obvious, nor\n> necessarily good.\n\nGot it, better variable names here.\n\n>\n>\n> > +#ifdef EXEC_BACKEND\n> > + aux_fork->av[ac++] = pstrdup(\"--forkboot\");\n> > + aux_fork->av[ac++] = NULL; /* filled in by postmaster_forkexec */\n> > +#endif\n>\n> What's the point of pstrdup()ing constant strings here? Afaict you're\n> not freeing them ever?\n\nThe idea was supposed to be that the prep work lives up until the fork\nhappens and then is freed en masse with the rest of the MemoryContext.\nPerhaps there was some oversight here. I'll revisit this and others in\nthe next pass.\n\n>\n>\n> > + aux_fork->av[ac++] = psprintf(\"-x%d\", MyForkProcType);\n> > +\n> > + aux_fork->av[ac] = NULL;\n> > + Assert(ac < lengthof(*aux_fork->av));\n>\n> Unless I miss something, the lengthof here doesn't do anythign\n> useful. ForkProcData->av is defined as:\n>\n> > +typedef struct ForkProcData\n> > +{\n> > + char **av;\n>\n> which I think means you're computing sizeof(char*)/sizeof(char)?\n\nI think this might be a leftover assertion from the previous\ninfrastructure. 
Agreed that this is probably useless.\n\n>\n>\n>\n> > + aux_fork->ac = ac;\n> > + switch (MyForkProcType)\n> > + {\n> > + case StartupProcess:\n> > + aux_fork->type_desc = pstrdup(\"startup\");\n> > + break;\n> > + case BgWriterProcess:\n> > + aux_fork->type_desc = pstrdup(\"background writer\");\n> > + break;\n> > + case CheckpointerProcess:\n> > + aux_fork->type_desc = pstrdup(\"checkpointer\");\n> > + break;\n> > + case WalWriterProcess:\n> > + aux_fork->type_desc = pstrdup(\"WAL writer\");\n> > + break;\n> > + case WalReceiverProcess:\n> > + aux_fork->type_desc = pstrdup(\"WAL receiver\");\n> > + break;\n> > + default:\n> > + aux_fork->type_desc = pstrdup(\"child\");\n> > + break;\n> > + }\n> > +\n> > + aux_fork->child_main = AuxiliaryProcessMain;\n>\n> I'm really not in love in having details like the descriptors of\n> processes, the types of processes, the mapping from MyForkProcType to\n> AuxProcType in multiple places. I feel this is adding even more\n> confusion to this, rather than reducing it.\n>\n> I feel like we should define the properties of the different types of\n> processes in *one* place, rather multiple. Define one array of\n> structs that contains information about the different types of\n> processes, rather than distributing that knowledge all over.\n\nAgreed here. Thanks for the suggestion.\n\n>\n>\n> > @@ -4477,11 +4530,11 @@ BackendRun(Port *port)\n> > pid_t\n> > postmaster_forkexec(int argc, char *argv[])\n> > {\n> > - Port port;\n> > -\n> > /* This entry point passes dummy values for the Port variables */\n> > - memset(&port, 0, sizeof(port));\n> > - return internal_forkexec(argc, argv, &port);\n> > + if (!ConnProcPort)\n> > + ConnProcPort = palloc0(sizeof(*ConnProcPort));\n> > +\n> > + return internal_forkexec(argc, argv);\n> > }\n>\n> What's the point of creating a dummy ConnProcPort? And when is this\n> being pfreed? 
I mean previously this at least used a stack variable that\n> was automatically being released by the time postmaster_forkexec\n> returned, now it sure looks like a permanent leak to me.\n\nWill address this in the next pass.\n\n>\n> Also, why is it a good idea to rely *more* on data being implicitly\n> passed by stuffing things into global variables? That strikes me as\n\nIt is certainly not. This will be addressed in the next pass, thank you!\n\n>\n>\n> > /* in parent, fork failed */\n> > - int save_errno = errno;\n> > + int save_errno = errno;\n> >\n> > - errno = save_errno;\n> > - switch (type)\n> > - {\n> > - case StartupProcess:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork startup process: %m\")));\n> > - break;\n> > - case BgWriterProcess:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork background writer process: %m\")));\n> > - break;\n> > - case CheckpointerProcess:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork checkpointer process: %m\")));\n> > - break;\n> > - case WalWriterProcess:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork WAL writer process: %m\")));\n> > - break;\n> > - case WalReceiverProcess:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork WAL receiver process: %m\")));\n> > - break;\n> > - default:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork process: %m\")));\n> > - break;\n> > - }\n> > + errno = save_errno;\n> > + ereport(LOG,\n> > + (errmsg(\"could not fork %s process: %m\", fork_data->type_desc)));\n>\n> Previously the process type was translatable, now it is not\n> anymore. 
You'd probably have to solve that somehow, perhaps by\n> translating the type_desc once, when setting it.\n\n+1\n\n>\n>\n> > @@ -1113,7 +1113,7 @@ process_startup_options(Port *port, bool am_superuser)\n> > av = (char **) palloc(maxac * sizeof(char *));\n> > ac = 0;\n> >\n> > - av[ac++] = \"postgres\";\n> > + av[ac++] = pstrdup(\"postgres\");\n> >\n> > pg_split_opts(av, &ac, port->cmdline_options);\n>\n> Again, what's with all these conversions to dynamic memory allocations?\n>\n>\n>\n> > /* Flags to tell if we are in an autovacuum process */\n> > -static bool am_autovacuum_launcher = false;\n> > -static bool am_autovacuum_worker = false;\n> > +bool am_autovacuum_launcher = false;\n> > +bool am_autovacuum_worker = false;\n>\n> I'm against exposing a multitude of am_* variables for each process\n> type, especially globally. If we're building proper infrastructure\n> here, why do we need multiple variables, rather than exactly one\n> indicating the process type?\n\nNoted.\n\n>\n>\n>\n> > @@ -347,88 +343,53 @@ static void avl_sigusr2_handler(SIGNAL_ARGS);\n> > static void avl_sigterm_handler(SIGNAL_ARGS);\n> > static void autovac_refresh_stats(void);\n> >\n> > -\n> > -\n> > /********************************************************************\n> > * AUTOVACUUM LAUNCHER CODE\n> > ********************************************************************/\n>\n> Spurious whitespace changes (in a few other places too).\n\nNoted.\n\n>\n>\n> > +void\n> > +PrepAutoVacProcessFork(ForkProcData *autovac_fork)\n> > {\n>\n> I'm really not in love with keeping \"fork\" in all these, given that\n> we're not actually going to fork in some cases. 
Why do we not just call\n> it subprocess and get rid of the \"fork\" and \"forkexec\" suffixes?\n\nSounds good to me.\n\n>\n>\n>\n>\n> > +/*\n> > + * shmemSetup\n> > + *\n> > + * Helper function for a child to set up shmem before\n> > + * executing.\n> > + *\n> > + * aux_process - set to true if an auxiliary process.\n> > + */\n> > +static void\n> > +shmemSetup(bool aux_process)\n> > +{\n> > + /* Restore basic shared memory pointers */\n> > + InitShmemAccess(UsedShmemSegAddr);\n> > +\n> > + /* Need a PGPROC to run CreateSharedMemoryAndSemaphores */\n> > + if (aux_process)\n> > + InitAuxiliaryProcess();\n> > + else\n> > + InitProcess();\n> > +\n> > + /* Attach process to shared data structures */\n> > + CreateSharedMemoryAndSemaphores();\n> > +}\n>\n> Not quite convinced this is a useful abstraction. I'm inclined to think\n> that this again should use something more like one array of\n> per-subprocess properties.\n>\n>\n> Imagine if we had an array like\n>\n> typedef struct PgSubProcess\n> {\n> const char *name;\n> const char *translated_name;\n> bool needs_shmem;\n> bool needs_aux_proc;\n> bool needs_full_proc;\n> SubProcessEntryPoint entrypoint;\n> } PgSubProcess;\n>\n> PgSubProcess process_types[] = {\n> {.name = \"startup process\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = StartupProcessMain},\n> {.name = \"background writer\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = BackgroundWriterMain},\n> {.name = \"autovacuum launcher\", .needs_shmem = true, .needs_aux_proc = true, .entrypoint = AutoVacLauncherMain},\n> {.name = \"backend\", .needs_shmem = true, .needs_full_proc = true, , .entrypoint = PostgresMain},\n\nGreat idea, thanks!\n\n>\n>\n> > From efc6f0531b71847c6500062b5a0fe43331a6ea7e Mon Sep 17 00:00:00 2001\n> > From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> > Date: Fri, 27 Sep 2019 19:13:53 -0400\n> > Subject: [PATCH 3/8] Add centralized stat collector\n> >\n> > ---\n> > src/backend/postmaster/pgstat.c | 88 
++++++---------------------\n> > src/backend/postmaster/postmaster.c | 5 +-\n> > src/include/pgstat.h | 4 +-\n> > src/include/postmaster/fork_process.h | 1 +\n> > 4 files changed, 26 insertions(+), 72 deletions(-)\n> >\n> > diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n> > index 011076c3e3..73d953aedf 100644\n> > --- a/src/backend/postmaster/pgstat.c\n> > +++ b/src/backend/postmaster/pgstat.c\n> > @@ -280,10 +280,6 @@ static instr_time total_func_time;\n> > * Local function forward declarations\n> > * ----------\n> > */\n> > -#ifdef EXEC_BACKEND\n> > -static pid_t pgstat_forkexec(void);\n> > -#endif\n> > -\n> > NON_EXEC_STATIC void PgstatCollectorMain(int argc, char *argv[]) pg_attribute_noreturn();\n> > static void pgstat_exit(SIGNAL_ARGS);\n> > static void pgstat_beshutdown_hook(int code, Datum arg);\n> > @@ -689,53 +685,26 @@ pgstat_reset_all(void)\n> > pgstat_reset_remove_files(PGSTAT_STAT_PERMANENT_DIRECTORY);\n> > }\n> >\n> > -#ifdef EXEC_BACKEND\n> >\n> > /*\n> > - * pgstat_forkexec() -\n> > - *\n> > - * Format up the arglist for, then fork and exec, statistics collector process\n> > - */\n> > -static pid_t\n> > -pgstat_forkexec(void)\n> > -{\n> > - char *av[10];\n> > - int ac = 0;\n> > -\n> > - av[ac++] = \"postgres\";\n> > - av[ac++] = \"--forkcol\";\n> > - av[ac++] = NULL; /* filled in by postmaster_forkexec */\n> > -\n> > - av[ac] = NULL;\n> > - Assert(ac < lengthof(av));\n> > -\n> > - return postmaster_forkexec(ac, av);\n> > -}\n> > -#endif /* EXEC_BACKEND */\n> > -\n> > -\n> > -/*\n> > - * pgstat_start() -\n> > + * PrepPgstatCollectorFork\n> > *\n> > * Called from postmaster at startup or after an existing collector\n> > * died. 
Attempt to fire up a fresh statistics collector.\n> > *\n> > - * Returns PID of child process, or 0 if fail.\n> > - *\n> > - * Note: if fail, we will be called again from the postmaster main loop.\n> > */\n> > -int\n> > -pgstat_start(void)\n> > +void\n> > +PrepPgstatCollectorFork(ForkProcData *pgstat_fork)\n> > {\n> > + int ac = 0;\n> > time_t curtime;\n> > - pid_t pgStatPid;\n> >\n> > /*\n> > * Check that the socket is there, else pgstat_init failed and we can do\n> > * nothing useful.\n> > */\n> > if (pgStatSock == PGINVALID_SOCKET)\n> > - return 0;\n> > + return;\n> >\n> > /*\n> > * Do nothing if too soon since last collector start. This is a safety\n> > @@ -746,45 +715,20 @@ pgstat_start(void)\n> > curtime = time(NULL);\n> > if ((unsigned int) (curtime - last_pgstat_start_time) <\n> > (unsigned int) PGSTAT_RESTART_INTERVAL)\n> > - return 0;\n> > + return;\n> > last_pgstat_start_time = curtime;\n> >\n> > - /*\n> > - * Okay, fork off the collector.\n> > - */\n> > #ifdef EXEC_BACKEND\n> > - switch ((pgStatPid = pgstat_forkexec()))\n> > -#else\n> > - switch ((pgStatPid = fork_process()))\n> > -#endif\n> > - {\n> > - case -1:\n> > - ereport(LOG,\n> > - (errmsg(\"could not fork statistics collector: %m\")));\n> > - return 0;\n> > -\n> > -#ifndef EXEC_BACKEND\n> > - case 0:\n> > - /* in postmaster child ... 
*/\n> > - InitPostmasterChild();\n> > -\n> > - /* Close the postmaster's sockets */\n> > - ClosePostmasterPorts(false);\n> > -\n> > - /* Drop our connection to postmaster's shared memory, as well */\n> > - dsm_detach_all();\n> > - PGSharedMemoryDetach();\n> > -\n> > - PgstatCollectorMain(0, NULL);\n> > - break;\n> > + pgstat_fork->av[ac++] = pstrdup(\"postgres\");\n> > + pgstat_fork->av[ac++] = pstrdup(\"--forkcol\");\n> > + pgstat_fork->av[ac++] = NULL; /* filled in by postmaster_forkexec */\n> > #endif\n> >\n> > - default:\n> > - return (int) pgStatPid;\n> > - }\n> > + pgstat_fork->ac = ac;\n> > + Assert(pgstat_fork->ac < lengthof(*pgstat_fork->av));\n> >\n> > - /* shouldn't get here */\n> > - return 0;\n> > + pgstat_fork->type_desc = pstrdup(\"statistics collector\");\n> > + pgstat_fork->child_main = PgstatCollectorMain;\n> > }\n> >\n> > void\n> > @@ -4425,12 +4369,16 @@ pgstat_send_bgwriter(void)\n> > * ----------\n> > */\n> > NON_EXEC_STATIC void\n> > -PgstatCollectorMain(int argc, char *argv[])\n> > +PgstatCollectorMain(pg_attribute_unused() int argc, pg_attribute_unused() char *argv[])\n> > {\n> > int len;\n> > PgStat_Msg msg;\n> > int wr;\n> >\n> > + /* Drop our connection to postmaster's shared memory, as well */\n> > + dsm_detach_all();\n> > + PGSharedMemoryDetach();\n> > +\n> > /*\n> > * Ignore all signals usually bound to some action in the postmaster,\n> > * except SIGHUP and SIGQUIT. 
Note we don't need a SIGUSR1 handler to\n> > diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> > index 35cd1479b9..37a36387a3 100644\n> > --- a/src/backend/postmaster/postmaster.c\n> > +++ b/src/backend/postmaster/postmaster.c\n> > @@ -549,6 +549,7 @@ static void ShmemBackendArrayRemove(Backend *bn);\n> > #define StartWalReceiver() StartChildProcess(WalReceiverFork)\n> > #define StartAutoVacLauncher() StartChildProcess(AutoVacLauncherFork)\n> > #define StartAutoVacWorker() StartChildProcess(AutoVacWorkerFork)\n> > +#define pgstat_start() StartChildProcess(PgstatCollectorFork)\n> >\n> > /* Macros to check exit status of a child process */\n> > #define EXIT_STATUS_0(st) ((st) == 0)\n> > @@ -5459,7 +5460,9 @@ StartChildProcess(ForkProcType type)\n> > case AutoVacWorkerFork:\n> > PrepAutoVacProcessFork(fork_data);\n> > break;\n> > -\n> > + case PgstatCollectorFork:\n> > + PrepPgstatCollectorFork(fork_data);\n> > + break;\n> > default:\n> > break;\n> > }\n> > diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> > index fe076d823d..00e95caa60 100644\n> > --- a/src/include/pgstat.h\n> > +++ b/src/include/pgstat.h\n> > @@ -15,6 +15,7 @@\n> > #include \"libpq/pqcomm.h\"\n> > #include \"port/atomics.h\"\n> > #include \"portability/instr_time.h\"\n> > +#include \"postmaster/fork_process.h\"\n> > #include \"postmaster/pgarch.h\"\n> > #include \"storage/proc.h\"\n> > #include \"utils/hsearch.h\"\n> > @@ -1235,10 +1236,11 @@ extern Size BackendStatusShmemSize(void);\n> > extern void CreateSharedBackendStatus(void);\n> >\n> > extern void pgstat_init(void);\n> > -extern int pgstat_start(void);\n> > extern void pgstat_reset_all(void);\n> > extern void allow_immediate_pgstat_restart(void);\n> >\n> > +extern void PrepPgstatCollectorFork(ForkProcData *pgstat_fork);\n> > +\n> > #ifdef EXEC_BACKEND\n> > extern void PgstatCollectorMain(int argc, char *argv[]) pg_attribute_noreturn();\n> > #endif\n> > diff --git 
a/src/include/postmaster/fork_process.h b/src/include/postmaster/fork_process.h\n> > index 1f319fc98f..16ca8be968 100644\n> > --- a/src/include/postmaster/fork_process.h\n> > +++ b/src/include/postmaster/fork_process.h\n> > @@ -26,6 +26,7 @@ typedef enum\n> > WalReceiverFork, /* end of Auxiliary Process Forks */\n> > AutoVacLauncherFork,\n> > AutoVacWorkerFork,\n> > + PgstatCollectorFork,\n> >\n> > NUMFORKPROCTYPES /* Must be last! */\n> > } ForkProcType;\n>\n>\n>\n>\n> > ---\n> > src/backend/postmaster/postmaster.c | 309 +++++++++++++-------------\n> > src/include/postmaster/fork_process.h | 5 +\n> > src/include/postmaster/postmaster.h | 1 -\n> > 3 files changed, 160 insertions(+), 155 deletions(-)\n> >\n> > diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> > index 8f862fcd64..b55cc4556d 100644\n> > --- a/src/backend/postmaster/postmaster.c\n> > +++ b/src/backend/postmaster/postmaster.c\n> > @@ -144,6 +144,7 @@ static Backend *ShmemBackendArray;\n> >\n> > BackgroundWorker *MyBgworkerEntry = NULL;\n> > RegisteredBgWorker *CurrentBgWorker = NULL;\n> > +static Backend *MyBackend;\n> > static int child_errno;\n>\n> This is another global variable that strikes me as a bad idea. Whereas\n> MyBgworkerEntry is only set in the bgworker itself, you appear to set\n> MyBackend *in postmaster*:\n\nOkay, will address this in the next pass.\n\n>\n> > +/*\n> > + * PrepBackendFork\n> > + *\n> > + * Prepare a ForkProcType struct for starting a Backend.\n> > + * This does all prep related to av parameters and error messages.\n> > +*/\n> > +static void\n> > +PrepBackendFork(ForkProcData *backend_fork)\n> > +{\n> > + int ac = 0;\n> > +\n> > + /*\n> > + * Create backend data structure. 
Better before the fork() so we can\n> > + * handle failure cleanly.\n> > + */\n> > + MyBackend = (Backend *) malloc(sizeof(Backend));\n> > + if (!MyBackend)\n> > + {\n> > + ereport(LOG,\n> > + (errcode(ERRCODE_OUT_OF_MEMORY),\n> > + errmsg(\"out of memory\")));\n> > + }\n>\n> That's totally not ok, we'll constantly overwrite the variable set by\n> another process, and otherwise have now obsolete values in there. And\n> even worse, you leave it dangling in some cases:\n\nGot it. Will address in the next pass.\n\n>\n> > + /*\n> > + * Compute the cancel key that will be assigned to this backend. The\n> > + * backend will have its own copy in the forked-off process' value of\n> > + * MyCancelKey, so that it can transmit the key to the frontend.\n> > + */\n> > + if (!RandomCancelKey(&MyCancelKey))\n> > + {\n> > + free(MyBackend);\n> > + ereport(LOG,\n> > + (errcode(ERRCODE_INTERNAL_ERROR),\n> > + errmsg(\"could not generate random cancel key\")));\n> > + }\n>\n>\n> Nor do I see where you're actually freeing that memory in postmaster, in\n> the successful case. Previously it was freed in postmaster in the\n> success case\n>\n> > -#ifdef EXEC_BACKEND\n> > - pid = backend_forkexec(port);\n> > -#else /* !EXEC_BACKEND */\n> > - pid = fork_process();\n> > - if (pid == 0) /* child */\n> > - {\n> > - free(bn);\n> > -\n>\n> but now it appears it's only freed in the failure case:\n\nGood catch. 
Will fix.\n\n>\n> > +/*\n> > + * BackendFailFork\n> > + *\n> > + * Backend cleanup in case a failure occurs forking a new Backend.\n> > + */\n> > +static void BackendFailFork(pg_attribute_unused() int argc, pg_attribute_unused() char *argv[])\n> > +{\n> > + if (!MyBackend->dead_end)\n> > + (void) ReleasePostmasterChildSlot(MyBackend->child_slot);\n> > + free(MyBackend);\n> > +\n> > + report_fork_failure_to_client(MyProcPort, child_errno);\n> > +}\n>\n>\n>\n> > From 718c6598e9e02e1ce9ade99afb3c8948c67ef5ae Mon Sep 17 00:00:00 2001\n> > From: Mike Palmiotto <mike.palmiotto@crunchydata.com>\n> > Date: Tue, 19 Feb 2019 15:29:33 +0000\n> > Subject: [PATCH 8/8] Add a hook to allow extensions to set worker metadata\n> >\n> > This patch implements a hook that allows extensions to modify a worker\n> > process' metadata in backends.\n> >\n> > Additional work done by Yuli Khodorkovskiy <yuli@crunchydata.com>\n> > Original discussion here: https://www.postgresql.org/message-id/flat/CAMN686FE0OdZKp9YPO%3DhtC6LnA6aW4r-%2Bjq%3D3Q5RAoFQgW8EtA%40mail.gmail.com\n> > ---\n> > src/backend/postmaster/fork_process.c | 11 +++++++++++\n> > src/include/postmaster/fork_process.h | 4 ++++\n> > 2 files changed, 15 insertions(+)\n> >\n> > diff --git a/src/backend/postmaster/fork_process.c b/src/backend/postmaster/fork_process.c\n> > index a9138f589e..964c04d846 100644\n> > --- a/src/backend/postmaster/fork_process.c\n> > +++ b/src/backend/postmaster/fork_process.c\n> > @@ -21,6 +21,14 @@\n> > #include <openssl/rand.h>\n> > #endif\n> >\n> > +/*\n> > + * This hook allows plugins to get control of the child process following\n> > + * a successful fork_process(). It can be used to perform some action in\n> > + * extensions to set metadata in backends (including special backends) upon\n> > + * setting of the session user.\n> > + */\n> > +ForkProcess_post_hook_type ForkProcess_post_hook = NULL;\n> > +\n> > #ifndef WIN32\n> > /*\n> > * Wrapper for fork(). 
Return values are the same as those for fork():\n> > @@ -113,6 +121,9 @@ fork_process(void)\n> > #ifdef USE_OPENSSL\n> > RAND_cleanup();\n> > #endif\n> > +\n> > + if (ForkProcess_post_hook)\n> > + (*ForkProcess_post_hook) ();\n> > }\n>\n>\n> > return result;\n> > diff --git a/src/include/postmaster/fork_process.h b/src/include/postmaster/fork_process.h\n> > index 631a2113b5..d756fcead7 100644\n> > --- a/src/include/postmaster/fork_process.h\n> > +++ b/src/include/postmaster/fork_process.h\n> > @@ -58,4 +58,8 @@ extern pid_t fork_process(void);\n> > extern pid_t postmaster_forkexec(ForkProcData *fork_data);\n> > #endif\n> >\n> > +/* Hook for plugins to get control after a successful fork_process() */\n> > +typedef void (*ForkProcess_post_hook_type) ();\n> > +extern PGDLLIMPORT ForkProcess_post_hook_type ForkProcess_post_hook;\n> > +\n> > #endif /* FORK_PROCESS_H */\n> > --\n> > 2.23.0\n> >\n>\n> Why do we want libraries to allow to hook into processes like the\n> startup process etc?\n\nThere are a number of OS-level process manipulations that this could\nafford you as an extension developer. For instance, you could roll\nyour own seccomp implementation (to limit syscalls per-process-type,\nperhaps?), perform some setcap magic, or some other security-related\nmagic.\n\nThanks for your patience with this patchset. I understand it's still\nvery immature. 
Hopefully all of the concerns raised will be adjusted\nin the follow-up patches.\n\n\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Thu, 3 Oct 2019 14:33:26 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "Hi,\n\nOn 2019-10-03 14:33:26 -0400, Mike Palmiotto wrote:\n> > > +#ifdef EXEC_BACKEND\n> > > + aux_fork->av[ac++] = pstrdup(\"--forkboot\");\n> > > + aux_fork->av[ac++] = NULL; /* filled in by postmaster_forkexec */\n> > > +#endif\n> >\n> > What's the point of pstrdup()ing constant strings here? Afaict you're\n> > not freeing them ever?\n>\n> The idea was supposed to be that the prep work lives up until the fork\n> happens and then is freed en masse with the rest of the MemoryContext.\n> Perhaps there was some oversight here. I'll revisit this and others in\n> the next pass.\n\nWell, two things:\n1) That'd *NEVER* be more efficient that just referencing constant\n strings. Unless you pfree() the variable, and there sometimes can be\n constant, and sometimes non-constant strings, there is simply no\n reason to ever pstrdup a constant string.\n2) Which context are you talking about? Are you thinking of? 
A forked\n process doing MemoryContextDelete(PostmasterContext); doesn't free\n that memory in postmaster.\n\n\n> > > @@ -58,4 +58,8 @@ extern pid_t fork_process(void);\n> > > extern pid_t postmaster_forkexec(ForkProcData *fork_data);\n> > > #endif\n> > >\n> > > +/* Hook for plugins to get control after a successful fork_process() */\n> > > +typedef void (*ForkProcess_post_hook_type) ();\n> > > +extern PGDLLIMPORT ForkProcess_post_hook_type ForkProcess_post_hook;\n> > > +\n> > > #endif /* FORK_PROCESS_H */\n> > > --\n> > > 2.23.0\n> > >\n> >\n> > Why do we want libraries to allow to hook into processes like the\n> > startup process etc?\n>\n> There are a number of OS-level process manipulations that this could\n> afford you as an extension developer. For instance, you could roll\n> your own seccomp implementation (to limit syscalls per-process-type,\n> perhaps?), perform some setcap magic, or some other security-related\n> magic.\n\nColor me unconvinced.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Oct 2019 11:39:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Oct 03, 2019 at 11:39:37AM -0700, Andres Freund wrote:\n> Color me unconvinced.\n\nThe latest comments of the thread have not been addressed yet. so I am\nmarking the patch as returned with feedback. If you think that's not\ncorrect, please feel free to update the status of the patch. 
If you\ndo so, please provide at the same time a rebased version of the patch,\nas it is failing to apply on HEAD.\n--\nMichael", "msg_date": "Sun, 1 Dec 2019 11:15:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Sat, Nov 30, 2019 at 9:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 03, 2019 at 11:39:37AM -0700, Andres Freund wrote:\n> > Color me unconvinced.\n>\n> The latest comments of the thread have not been addressed yet. so I am\n> marking the patch as returned with feedback. If you think that's not\n> correct, please feel free to update the status of the patch. If you\n> do so, please provide at the same time a rebased version of the patch,\n> as it is failing to apply on HEAD.\n\nI've finally had some time to update the patchset with an attempt at\naddressing Andres' concerns. Note that the current spin has a bug\nwhich does not allow syslogger to run properly. Additionally, it is\nsquashed into one patch again. I intend to split the patch out into\nseparate functional patches and fix the syslogger issue, but wanted to\nget this on the ticket for the open CommitFest before it closes. 
I'll\ngo ahead and re-add it and will be pushing updates as I have them.\n\nI intend to give this patchset the time it deserves, so will be much\nmore responsive this go-around.\n\nThanks,\n\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Sat, 29 Feb 2020 21:51:25 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Sat, Feb 29, 2020 at 9:51 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Sat, Nov 30, 2019 at 9:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Oct 03, 2019 at 11:39:37AM -0700, Andres Freund wrote:\n> > > Color me unconvinced.\n> >\n> > The latest comments of the thread have not been addressed yet. so I am\n> > marking the patch as returned with feedback. If you think that's not\n> > correct, please feel free to update the status of the patch. If you\n> > do so, please provide at the same time a rebased version of the patch,\n> > as it is failing to apply on HEAD.\n>\n> I've finally had some time to update the patchset with an attempt at\n> addressing Andres' concerns. Note that the current spin has a bug\n> which does not allow syslogger to run properly. Additionally, it is\n> squashed into one patch again. I intend to split the patch out into\n> separate functional patches and fix the syslogger issue, but wanted to\n> get this on the ticket for the open CommitFest before it closes. I'll\n> go ahead and re-add it and will be pushing updates as I have them.\n\nOkay, here is an updated and rebased patch that passes all regression\ntests with and without EXEC_BACKEND. 
This also treats more of the\nissues raised by Andres.\n\nI still need to do the following:\n- split giant patch out into separate functional commits\n- add translated process descriptions\n- fix some variable names here and there (notably the PgSubprocess\nstruct \".progname\" member -- that's a misnomer.\n- address some valgrind findings\n\nI should be able to split this out into smaller commits sometime today\nand will continue iterating to scratch the other items off the list.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Mon, 2 Mar 2020 09:53:55 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Mon, Mar 2, 2020 at 9:53 AM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Sat, Feb 29, 2020 at 9:51 PM Mike Palmiotto\n> <mike.palmiotto@crunchydata.com> wrote:\n> >\n<snip>\n> Okay, here is an updated and rebased patch that passes all regression\n> tests with and without EXEC_BACKEND. This also treats more of the\n> issues raised by Andres.\n>\n> I still need to do the following:\n> - split giant patch out into separate functional commits\n> - add translated process descriptions\n> - fix some variable names here and there (notably the PgSubprocess\n> struct \".progname\" member -- that's a misnomer.\n> - address some valgrind findings\n>\n> I should be able to split this out into smaller commits sometime today\n> and will continue iterating to scratch the other items off the list.\n\nI've addressed all of the points above (except splitting the base\npatch up) and rebased on master. I didn't squash before generating the\npatches so it may be easier to read the progress here, or annoying\nthat there are so many attachments.\n\nThis patchset hits all of Andres' points with the exception of the\nreshuffling of functions like AuxiliaryProcessMain, but it does get\nrid of a lot of direct argv string comparison. 
I'll reorganize a bit\nwhen I re-write the patchset and will likely move\nAuxiliaryProcessMain, along with the Backend subprocess functions but\njust wanted to get this up here in the meantime.\n\nThere is quite a lot to be gained from this patchset in the form of\nre-organization and removal of redundant code-paths/globals. I see\nthis as a good first step and am willing to continue to push it\nforward with any new suggestions.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Tue, 3 Mar 2020 23:56:26 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Tue, Mar 3, 2020 at 11:56 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Mon, Mar 2, 2020 at 9:53 AM Mike Palmiotto\n> <mike.palmiotto@crunchydata.com> wrote:\n> >\n> > On Sat, Feb 29, 2020 at 9:51 PM Mike Palmiotto\n> > <mike.palmiotto@crunchydata.com> wrote:\n> > >\n> <snip>\n> > Okay, here is an updated and rebased patch that passes all regression\n> > tests with and without EXEC_BACKEND. This also treats more of the\n> > issues raised by Andres.\n> >\n> > I still need to do the following:\n> > - split giant patch out into separate functional commits\n> > - add translated process descriptions\n> > - fix some variable names here and there (notably the PgSubprocess\n> > struct \".progname\" member -- that's a misnomer.\n> > - address some valgrind findings\n> >\n> > I should be able to split this out into smaller commits sometime today\n> > and will continue iterating to scratch the other items off the list.\n>\n> I've addressed all of the points above (except splitting the base\n> patch up) and rebased on master. 
I didn't squash before generating the\n> patches so it may be easier to read the progress here, or annoying\n> that there are so many attachments.\n>\n> This patchset hits all of Andres' points with the exception of the\n> reshuffling of functions like AuxiliaryProcessMain, but it does get\n> rid of a lot of direct argv string comparison. I'll reorganize a bit\n> when I re-write the patchset and will likely move\n> AuxiliaryProcessMain, along with the Backend subprocess functions but\n> just wanted to get this up here in the meantime.\n>\n> There is quite a lot to be gained from this patchset in the form of\n> re-organization and removal of redundant code-paths/globals. I see\n> this as a good first step and am willing to continue to push it\n> forward with any new suggestions.\n\nThe patchset is now split out. I've just noticed that Peter Eisentraut\nincluded some changes for a generic MyBackendType, which I should have\nbeen aware of. I was unable to rebase due to these changes, but can\nfold these patches into that framework if others think it's\nworthwhile.\n\nThe resulting diffstat is as follows:\n\n% git diff 24d85952a57b16090ca8ad9cf800fbdd9ddd104f --shortstat\n 29 files changed, 1029 insertions(+), 1201 deletions(-)\n\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Tue, 17 Mar 2020 14:50:19 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Tue, Mar 17, 2020 at 02:50:19PM -0400, Mike Palmiotto wrote:\n> The patchset is now split out. I've just noticed that Peter Eisentraut\n> included some changes for a generic MyBackendType, which I should have\n> been aware of. 
I was unable to rebase due to these changes, but can\n> fold these patches into that framework if others think it's\n> worthwhile.\n\nI don't have many comments on the patch, but it's easy enough to rebase.\n\nI think maybe you'll want to do something more with this new function:\nGetBackendTypeDesc()\n\n+ /* Don't panic. */\n\n+1\n\n-- \nJustin", "msg_date": "Tue, 17 Mar 2020 16:52:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-Mar-17, Justin Pryzby wrote:\n\n> +static PgSubprocess process_types[] = {\n> +\t{\n> +\t\t.desc = \"checker\",\n> +\t\t.entrypoint = CheckerModeMain\n> +\t},\n> +\t{\n> +\t\t.desc = \"bootstrap\",\n> +\t\t.entrypoint = BootstrapModeMain\n> +\t},\n\nMaybe this stuff can be resolved using a technique like rmgrlist.h or\ncmdtaglist.h. That way it's not in two separate files.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Mar 2020 22:04:33 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Tue, Mar 17, 2020 at 9:04 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Mar-17, Justin Pryzby wrote:\n>\n> > +static PgSubprocess process_types[] = {\n> > + {\n> > + .desc = \"checker\",\n> > + .entrypoint = CheckerModeMain\n> > + },\n> > + {\n> > + .desc = \"bootstrap\",\n> > + .entrypoint = BootstrapModeMain\n> > + },\n>\n> Maybe this stuff can be resolved using a technique like rmgrlist.h or\n> cmdtaglist.h. That way it's not in two separate files.\n\nGreat suggestion, thanks! 
I'll try this out in the next version.\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 09:22:58 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Wed, Mar 18, 2020 at 09:22:58AM -0400, Mike Palmiotto wrote:\n> On Tue, Mar 17, 2020 at 9:04 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2020-Mar-17, Justin Pryzby wrote:\n> >\n> > > +static PgSubprocess process_types[] = {\n> > > + {\n> > > + .desc = \"checker\",\n> > > + .entrypoint = CheckerModeMain\n> > > + },\n> > > + {\n> > > + .desc = \"bootstrap\",\n> > > + .entrypoint = BootstrapModeMain\n> > > + },\n> >\n> > Maybe this stuff can be resolved using a technique like rmgrlist.h or\n> > cmdtaglist.h. That way it's not in two separate files.\n> \n> Great suggestion, thanks! I'll try this out in the next version.\n\nAlso, I saw this was failing tests both before and after my rebase.\n\nhttp://cfbot.cputube.org/\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 18 Mar 2020 09:17:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 18, 2020 at 09:22:58AM -0400, Mike Palmiotto wrote:\n> > On Tue, Mar 17, 2020 at 9:04 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > >\n> > > On 2020-Mar-17, Justin Pryzby wrote:\n> > >\n> > > > +static PgSubprocess process_types[] = {\n> > > > + {\n> > > > + .desc = \"checker\",\n> > > > + .entrypoint = CheckerModeMain\n> > > > + },\n> > > > + {\n> > > > + .desc = \"bootstrap\",\n> > > > + .entrypoint = BootstrapModeMain\n> > > > + },\n> > >\n> > > Maybe this 
stuff can be resolved using a technique like rmgrlist.h or\n> > > cmdtaglist.h. That way it's not in two separate files.\n> >\n> > Great suggestion, thanks! I'll try this out in the next version.\n>\n> Also, I saw this was failing tests both before and after my rebase.\n>\n> http://cfbot.cputube.org/\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n\nGood catch, thanks. Will address this as well in the next round. Just\nneed to set up a Windows dev environment to see if I can\nreproduce/fix.\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 11:26:27 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Also, I saw this was failing tests both before and after my rebase.\n> >\n> > http://cfbot.cputube.org/\n> > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n>\n> Good catch, thanks. Will address this as well in the next round. 
Just\n> need to set up a Windows dev environment to see if I can\n> reproduce/fix.\n\nWhile I track this down, here is a rebased patchset, which drops\nMySubprocessType and makes use of the MyBackendType.\n\nOnce I get the fix for Windows builds, I'll see about making use of\nthe rmgrlist/cmdtaglist-style technique that Alvaro suggested.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Wed, 18 Mar 2020 12:07:03 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-Mar-18, Mike Palmiotto wrote:\n\n> On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n> <mike.palmiotto@crunchydata.com> wrote:\n> >\n> > On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Also, I saw this was failing tests both before and after my rebase.\n> > >\n> > > http://cfbot.cputube.org/\n> > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n> >\n> > Good catch, thanks. Will address this as well in the next round. 
Just\n> > need to set up a Windows dev environment to see if I can\n> > reproduce/fix.\n> \n> While I track this down, here is a rebased patchset, which drops\n> MySubprocessType and makes use of the MyBackendType.\n\nNote that you can compile with -DEXEC_BACKEND to use in a *nix build the\nsame technique used to spawn processes in Windows, which might be an\neasier way to discover some problems without a proper Windows build.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:54:09 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Wed, Mar 18, 2020 at 12:54 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Mar-18, Mike Palmiotto wrote:\n>\n> > On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n> > <mike.palmiotto@crunchydata.com> wrote:\n> > >\n> > > On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Also, I saw this was failing tests both before and after my rebase.\n> > > >\n> > > > http://cfbot.cputube.org/\n> > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n> > >\n> > > Good catch, thanks. Will address this as well in the next round. Just\n> > > need to set up a Windows dev environment to see if I can\n> > > reproduce/fix.\n> >\n> > While I track this down, here is a rebased patchset, which drops\n> > MySubprocessType and makes use of the MyBackendType.\n>\n> Note that you can compile with -DEXEC_BACKEND to use in a *nix build the\n> same technique used to spawn processes in Windows, which might be an\n> easier way to discover some problems without a proper Windows build.\n\nGood suggestion. 
Unfortunately,that's how I've been testing all along.\nI thought that would be sufficient, but seems like this might be more\nspecific to the Windows #ifdef's.\n\nI have another version on my devbox which fixes the (embarrassing)\nTravis failure for non-EXEC_BACKEND due to a dropped semicolon during\nrebase. I'm setting up my own appveyor instance with Tomas's config\nand will see if I can get to the bottom of this.\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 12:59:51 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-03-18 17:07, Mike Palmiotto wrote:\n> On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n> <mike.palmiotto@crunchydata.com> wrote:\n>>\n>> On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> Also, I saw this was failing tests both before and after my rebase.\n>>>\n>>> http://cfbot.cputube.org/\n>>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n>>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n>>\n>> Good catch, thanks. Will address this as well in the next round. Just\n>> need to set up a Windows dev environment to see if I can\n>> reproduce/fix.\n> \n> While I track this down, here is a rebased patchset, which drops\n> MySubprocessType and makes use of the MyBackendType.\n\nWhile working on (My)BackendType, I was attempting to get rid of \n(My)AuxProcType altogether. This would mostly work except that it's \nsort of wired into the pgstats subsystem (see NumBackendStatSlots). \nThis can probably be reorganized, but I didn't pursue it further.\n\nNow, I'm a sucker for refactoring, but I feel this proposal is going \ninto a direction I don't understand. I'd prefer that we focus around \nbuilding out background workers as the preferred subprocess mechanism. 
\nBuilding out a second generic mechanism, again, I don't understand the \ndirection. Are we hoping to add more of these processes? Make it \nextensible? The net lines added/removed by this patch series seems \npretty neutral. What are we getting at the end of this?\n\nMore specifically, I don't agree with the wholesale renaming of \nauxiliary process to subprocess. Besides the massive code churn, the \nold naming seemed pretty good to distinguish them from background \nworkers, the new naming is more ambiguous.\n\nBtw., if I had a lot of time I would attempt to rewrite walreceiver and \nperhaps the autovacuum system as background workers and thus further \nreduce the footprint of the aux process system.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:35:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Mar 19, 2020 at 6:35 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> While working on (My)BackendType, I was attempting to get rid of\n> (My)AuxProcType altogether. This would mostly work except that it's\n> sort of wired into the pgstats subsystem (see NumBackendStatSlots).\n> This can probably be reorganized, but I didn't pursue it further.\n\nThis patchset has a different goal: to remove redundant startup code\nand interspersed variants of fork/forkexec code so that we can\ncentralize the postmaster child startup.\n\nThe goal of centralizing postmaster startup stems from the desire to\nbe able to control the process security attributes immediately\nbefore/after fork/exec. 
This is simply not possible with the existing\ninfrastructure, since processes are identified in Main functions,\nwhich is too late (and again too scattered) to be able to do anything\nuseful.\n\nBy providing a mechanism to set child process metadata prior to\nspawning the subprocess, we gain the ability to identify the process\ntype and thus set security attributes on that process.\n\nIn an earlier spin of the patchset, I included a fork_process hook,\nwhich would be where an extension could set security attributes on a\nprocess. I have since dropped the fork (as advised), but now realize\nthat it actually demonstrates the main motivation of the patchset.\nPerhaps I should add that back in for the next version.\n\n>\n> Now, I'm a sucker for refactoring, but I feel this proposal is going\n> into a direction I don't understand. I'd prefer that we focus around\n> building out background workers as the preferred subprocess mechanism.\n> Building out a second generic mechanism, again, I don't understand the\n> direction. Are we hoping to add more of these processes? Make it\n> extensible? The net lines added/removed by this patch series seems\n> pretty neutral. What are we getting at the end of this?\n\nAs stated above, the primary goal is to centralize the startup code.\nOne nice side-effect is the introduction of a mechanism that is now\nboth extensible and provides the ability to remove a lot of redundant\ncode. I see no reason to have 5 different variants of process forkexec\nfunctions for the sole purpose of building up argv. This patchset\nintends to get rid of such an architecture.\n\nNote that this is not intended to be the complete product here -- it\nis just a first step at swapping in and making use of a new\ninfrastructure. There will be follow-up work required to really get\nthe most out of this infrastructure. For instance, we could drop a\nlarge portion of the SubPostmasterMain switch logic. 
There are a\nnumber of other areas throughout the codebase (including the example\nprovided in the last commit, which changes the way we retrieve process\ndescriptions), that can utilize this new infrastructure to get rid of\ncode.\n\n>\n> More specifically, I don't agree with the wholesale renaming of\n> auxiliary process to subprocess. Besides the massive code churn, the\n\nI'm not sure I understand the argument here. Where do you see\nwholesale renaming of AuxProc to Subprocess? Subprocess is the name\nfor postmaster subprocesses, whereas Auxiliary Processes are a subset\nof those processes.\n\n> old naming seemed pretty good to distinguish them from background\n> workers, the new naming is more ambiguous.\n\nI do not see any conflation between Background Workers and Auxiliary\nProcesses in this patchset. Since Auxiliary Processes are included in\nthe full set of subprocesses and are delineated with a boolean:\nneeds_aux_proc, it seems fairly straightforward to me which\nsubprocesses are in fact Auxiliary Processes.\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Thu, 19 Mar 2020 09:29:53 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "Hi,\n\nOn 2020-03-19 11:35:41 +0100, Peter Eisentraut wrote:\n> On 2020-03-18 17:07, Mike Palmiotto wrote:\n> > On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n> > <mike.palmiotto@crunchydata.com> wrote:\n> > >\n> > > On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Also, I saw this was failing tests both before and after my rebase.\n> > > >\n> > > > http://cfbot.cputube.org/\n> > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n> > >\n> > > Good catch, thanks. Will address this as well in the next round. 
Just\n> > > need to set up a Windows dev environment to see if I can\n> > > reproduce/fix.\n> >\n> > While I track this down, here is a rebased patchset, which drops\n> > MySubprocessType and makes use of the MyBackendType.\n>\n> While working on (My)BackendType, I was attempting to get rid of\n> (My)AuxProcType altogether. This would mostly work except that it's sort of\n> wired into the pgstats subsystem (see NumBackendStatSlots). This can\n> probably be reorganized, but I didn't pursue it further.\n\nHm. Why does the number of stat slots prevent dropping AuxProcType?\n\n\n> Now, I'm a sucker for refactoring, but I feel this proposal is going into a\n> direction I don't understand. I'd prefer that we focus around building out\n> background workers as the preferred subprocess mechanism. Building out a\n> second generic mechanism, again, I don't understand the direction. Are we\n> hoping to add more of these processes? Make it extensible? The net lines\n> added/removed by this patch series seems pretty neutral. What are we\n> getting at the end of this?\n\nI think background workers for internal processes are the *wrong*\ndirection to go. They were used as a shortcut for parallelism, and then\nthat was extended for logical replication. In my opinion that was done,\nto a significant degree, because the aux proc stuff is/was too painful\nto deal with, but that's something that should be fixed, instead of\nbuilding more and more parallel infrastructure.\n\n\nBgworkers are imo not actually a very good fit for internal\nprocesses. We have to be able to guarantee that there's a free \"slot\" to\nstart internal processes, we should be able to efficiently reference\ntheir pids (especially from postmaster, but also other processes), we\nwant to precisely know which shared PGPROC is being used, etc.\n\nWe now have somewhat different systems for at least: non-shmem\npostmaster children, aux processes, autovacuum workers, internal\nbgworkers, extension bgworkers. 
That's just insane.\n\nWe should merge those as much as possible. There's obviously going to be\nsome differences, but it needs to be less than now. I think we're\nmostly on the same page on that, I just don't see bgworkers getting us\nthere.\n\n\nThe worst part about the current situation imo is that there's way too\nmany places that one needs to modify / check to create / understand a\nprocess. Moving towards having a single c file w/ associated header that\ndefines 95% of that seems like a good direction. I've not looked at\nrecent versions of the patch, but there was some movement towards that\nin earlier versions.\n\nOn a green field, I would say that there should be one or two C\narrays-of-structs defining subprocesses. And most behaviour should be\nchanneled through that.\n\nstruct PgProcessType\n{\n const char* name;\n PgProcessBeforeShmem before_shmem;\n PgProcessEntry entrypoint;\n uint8:1 only_one_exists;\n uint8:1 uses_shared_memory;\n uint8:1 client_connected;\n uint8:1 database_connected;\n ...\n};\n\nPgProcessType ProcessTypes[] = {\n [StartupType] = {\n .name = \"startup\",\n .entrypoint = StartupProcessMain,\n .only_one_exists = true,\n .uses_shared_memory = true,\n .client_connected = false,\n .database_connected = false,\n ...\n },\n ...\n [UserBackendType] = {\n .name = \"backend\",\n .before_shmem = BackendInitialize,\n .entrypoint = BackendRun, // fixme\n .only_one_exists = false,\n .uses_shared_memory = true,\n .client_connected = true,\n .database_connected = true,\n ...\n }\n ...\n\nThen there should be a single startup routine for all postmaster\nchildren. Since most of the startup is actually shared between all the\ndifferent types of processes, we can declutter it a lot.\n\n\nWhen starting a process postmaster would just specify the process type,\nand if relevant, an argument (struct Port for backends, whatever\nrelevant for bgworkers etc) . 
Generic code should handle all the work\nuntil the process type entry point - and likely we should move more work\nfrom the individual process types into generic code.\n\n\nIf a process is 'only_one_exists' (to be renamed), the generic code\nwould also (in postmaster) register the pid as\nsubprocess_pids[type] = pid;\nwhich would make it easy to only have per-type code in the few locations\nthat need to be aware, instead of many locations in\npostmaster.c. Perhaps also some shared memory location.\n\n\nComing back to earth, from green field imagination land: I think the\npatchset does go in that direction (to be fair, I think I outlined\nsomething like the above elsewhere in this thread in a discussion with\nMike). And that's good.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Mar 2020 13:57:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Mar 19, 2020 at 4:57 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-19 11:35:41 +0100, Peter Eisentraut wrote:\n> > On 2020-03-18 17:07, Mike Palmiotto wrote:\n> > > On Wed, Mar 18, 2020 at 11:26 AM Mike Palmiotto\n> > > <mike.palmiotto@crunchydata.com> wrote:\n> > > >\n> > > > On Wed, Mar 18, 2020 at 10:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > Also, I saw this was failing tests both before and after my rebase.\n> > > > >\n> > > > > http://cfbot.cputube.org/\n> > > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31535161\n> > > > > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/31386446\n> > > >\n> > > > Good catch, thanks. Will address this as well in the next round. Just\n> > > > need to set up a Windows dev environment to see if I can\n> > > > reproduce/fix.\n\nI'm still working on wiring up an AppVeyor instance, as seemingly\nbuilds don't work on any of the default Azure/Visual Studio images. 
In\nthe meantime, I've fixed some spurious whitespace changes and the\ncompile error for non-EXEC_BACKEND. I'm posting a new version to keep\nTravis happy at least while I keep working on that. Sorry for the\ndelay.\n\n> On a green field, I would say that there should be one or two C\n> arrays-of-structs defining subprocesses. And most behaviour should be\n> channeled through that.\n>\n> struct PgProcessType\n> {\n> const char* name;\n> PgProcessBeforeShmem before_shmem;\n> PgProcessEntry entrypoint;\n> uint8:1 only_one_exists;\n> uint8:1 uses_shared_memory;\n> uint8:1 client_connected;\n> uint8:1 database_connected;\n> ...\n> };\n\nOnly some of these are currently included in the process_types struct,\nbut this demonstrates the extensibility of the architecture and room\nfor future centralization. Do you see any items in this set that\naren't currently included but are must-haves for this round?\n\n> Then there should be a single startup routine for all postmaster\n> children. Since most of the startup is actually shared between all the\n> different types of processes, we can declutter it a lot.\n\nAgreed.\n\n> When starting a process postmaster would just specify the process type,\n> and if relevant, an argument (struct Port for backends, whatever\n> relevant for bgworkers etc) . Generic code should handle all the work\n> until the process type entry point - and likely we should move more work\n> from the individual process types into generic code.\n>\n> If a process is 'only_one_exists' (to be renamed), the generic code\n> would also (in postmaster) register the pid as\n> subprocess_pids[type] = pid;\n> which would make it easy to only have per-type code in the few locations\n> that need to be aware, instead of many locations in\n> postmaster.c. 
Perhaps also some shared memory location.\n\nAll of this sounds good.\n\n> Coming back to earth, from green field imagination land: I think the\n> patchset does go in that direction (to be fair, I think I outlined\n> something like the above elsewhere in this thread in a discussion with\n> Mike). And that's good.\n\nThanks for chiming in. Is there anything else you think needs to go in\nthis version to push things along -- besides fixing Windows builds, of\ncourse?\n\nRegards,\n-- \nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Fri, 20 Mar 2020 18:17:36 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-03-19 14:29, Mike Palmiotto wrote:\n>> More specifically, I don't agree with the wholesale renaming of\n>> auxiliary process to subprocess. Besides the massive code churn, the\n> \n> I'm not sure I understand the argument here. Where do you see\n> wholesale renaming of AuxProc to Subprocess? Subprocess is the name\n> for postmaster subprocesses, whereas Auxiliary Processes are a subset\n> of those processes.\n\nSpecifically renaming AuxProcType to SubprocessType and everything that \nfollows from that, including the name of new files.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 21 Mar 2020 18:40:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Fri, Mar 20, 2020 at 6:17 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Thu, Mar 19, 2020 at 4:57 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I'm still working on wiring up an AppVeyor instance, as seemingly\n> builds don't work on any of the default Azure/Visual Studio images. 
In\n> the meantime, I've fixed some spurious whitespace changes and the\n> compile error for non-EXEC_BACKEND. I'm posting a new version to keep\n> Travis happy at least while I keep working on that. Sorry for the\n> delay.\n\nThe attached patchset should be fixed. I rebased on master and tested\nwith TAP tests enabled (fork/EXEC_BACKEND/and Windows).\n\nThe AppVeyor configs that Peter posted in a separate thread were\nextremely helpful. Thanks, Peter!\n\n> > When starting a process postmaster would just specify the process type,\n> > and if relevant, an argument (struct Port for backends, whatever\n> > relevant for bgworkers etc) . Generic code should handle all the work\n> > until the process type entry point - and likely we should move more work\n> > from the individual process types into generic code.\n> >\n> > If a process is 'only_one_exists' (to be renamed), the generic code\n> > would also (in postmaster) register the pid as\n> > subprocess_pids[type] = pid;\n> > which would make it easy to only have per-type code in the few locations\n> > that need to be aware, instead of many locations in\n> > postmaster.c. Perhaps also some shared memory location.\n\nI played around with this a bit last weekend and have a local/untested\npatch to move all subprocess_pids into the array. I think\n'is_singleton_process' would be a decent name for 'only_one_exists'.\nWe can also probably add a field to the subprocess array to tell which\nsignals each gets from postmaster.\n\nAre these pieces required to make this patchset committable? 
Is there\nanything else needed at this point to make it committable?\n\nThanks again for everyone's feedback and comments.\n\nRegards,\n--\nMike Palmiotto\nhttps://crunchydata.com", "msg_date": "Thu, 26 Mar 2020 19:30:02 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "> On 27 Mar 2020, at 00:30, Mike Palmiotto <mike.palmiotto@crunchydata.com> wrote:\n\n> Are these pieces required to make this patchset committable? Is there\n> anything else needed at this point to make it committable?\n\nThe submitted version no longer applies, the 0009 patch has conflicts in\npostmaster.c. Can you please submit a new version of the patch?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 16:26:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-Jul-02, Daniel Gustafsson wrote:\n\n> > On 27 Mar 2020, at 00:30, Mike Palmiotto <mike.palmiotto@crunchydata.com> wrote:\n> \n> > Are these pieces required to make this patchset committable? Is there\n> > anything else needed at this point to make it committable?\n> \n> The submitted version no longer applies, the 0009 patch has conflicts in\n> postmaster.c. 
Can you please submit a new version of the patch?\n\nIf the first 8 patches still apply, please do keep it as needs-review\nthough, since we can still give advice on those first parts.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:03:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "> On 2 Jul 2020, at 17:03, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Jul-02, Daniel Gustafsson wrote:\n> \n>>> On 27 Mar 2020, at 00:30, Mike Palmiotto <mike.palmiotto@crunchydata.com> wrote:\n>> \n>>> Are these pieces required to make this patchset committable? Is there\n>>> anything else needed at this point to make it committable?\n>> \n>> The submitted version no longer applies, the 0009 patch has conflicts in\n>> postmaster.c. Can you please submit a new version of the patch?\n> \n> If the first 8 patches still apply, please do keep it as needs-review\n> though, since we can still give advice on those first parts.\n\nDone.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 17:08:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On 2020-Mar-26, Mike Palmiotto wrote:\n\nRegarding 0001:\n\n> diff --git a/src/backend/postmaster/subprocess.c b/src/backend/postmaster/subprocess.c\n> new file mode 100644\n> index 0000000000..3e7a45bf10\n> --- /dev/null\n> +++ b/src/backend/postmaster/subprocess.c\n> @@ -0,0 +1,62 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * subprocess.c\n> + *\n> + * Copyright (c) 2004-2020, PostgreSQL Global Development Group\n> + *\n> + *\n> + * IDENTIFICATION\n> + *\t src/backend/postmaster/syslogger.c\n\nWrong file identification.\n\n> +static 
PgSubprocess process_types[] = {\n> +\t{\n> +\t\t.desc = \"checker\",\n> +\t\t.entrypoint = CheckerModeMain\n> +\t},\n> +\t{\n...\n\nThis array has to match the order in subprocess.h:\n\n> +typedef enum\n> +{\n> +\tNoProcessType = -1,\n> +\tCheckerType = 0,\n> +\tBootstrapType,\n> +\tStartupType,\n> +\tBgWriterType,\n> +\tCheckpointerType,\n> +\tWalWriterType,\n> +\tWalReceiverType,\t/* end of Auxiliary Process Forks */\n> +\n> +\tNUMSUBPROCESSTYPES\t\t\t/* Must be last! */\n> +} SubprocessType;\n\nThis sort of thing is messy and unfriendly to maintain. I suggest we\nuse the same trick as in cmdtaglist.h and rmgrlist.h; see commits\n2f9661311b83 and 5a1cd89f8f4a for examples.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:11:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Jul 2, 2020 at 11:11 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Mar-26, Mike Palmiotto wrote:\n>\n> Regarding 0001:\n>\n> > diff --git a/src/backend/postmaster/subprocess.c b/src/backend/postmaster/subprocess.c\n> > new file mode 100644\n> > index 0000000000..3e7a45bf10\n> > --- /dev/null\n> > +++ b/src/backend/postmaster/subprocess.c\n> > @@ -0,0 +1,62 @@\n> > +/*-------------------------------------------------------------------------\n> > + *\n> > + * subprocess.c\n> > + *\n> > + * Copyright (c) 2004-2020, PostgreSQL Global Development Group\n> > + *\n> > + *\n> > + * IDENTIFICATION\n> > + * src/backend/postmaster/syslogger.c\n>\n> Wrong file identification.\n\nThanks, I'll fix that.\n\n<snip>\n\n> > + WalReceiverType, /* end of Auxiliary Process Forks */\n> > +\n> > + NUMSUBPROCESSTYPES /* Must be last! */\n> > +} SubprocessType;\n>\n> This sort of thing is messy and unfriendly to maintain. 
I suggest we\n> use the same trick as in cmdtaglist.h and rmgrlist.h; see commits\n> 2f9661311b83 and 5a1cd89f8f4a for examples.\n\nThanks for the reviews. I'm hoping to get to this next week (hopefully\nsooner). It was on my TODO list to use this approach (from the last\nround of reviews), so I'll make sure to do it first.\n\nOther dangling items were the subprocess pids array and obviously the\nrebase. I've got a decent AppVeyor/tap test workflow now, so it\nshould be a little less painful this time around.\n\nRegards,\n\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 16:17:59 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" }, { "msg_contents": "On Thu, Jul 09, 2020 at 04:17:59PM -0400, Mike Palmiotto wrote:\n> Thanks for the reviews. I'm hoping to get to this next week (hopefully\n> sooner). It was on my TODO list to use this approach (from the last\n> round of reviews), so I'll make sure to do it first.\n\nNothing has happened here in two months, so I am marking this stuff as\nreturned with feedback.\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 14:26:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Auxiliary Processes and MyAuxProc" } ]
[ { "msg_contents": "Greetings,\n\nAttached is a patch which attempts to solve a few problems:\n\n1) Filtering out partitions flexibly based on the results of an\nexternal function call (supplied by an extension).\n2) Filtering out partitions from pg_inherits based on the same function call.\n3) Filtering out partitions from a partitioned table BEFORE the partition\nis actually opened on-disk.\n\nThe name \"partitionChildAccess_hook\" comes from the fact that the\nbackend may not have access to a particular partition within the\npartitioned table. The idea would be to silently filter out the\npartition from queries to the parent table, which means also adjusting\nthe returned contents of find_inheritance_children based on the\nexternal function.\n\nI am curious how the community feels about these patches and if there\nis an alternative approach to solve the above issues (perhaps\nanother existing hook).\n\nThanks for your time.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Mon, 25 Feb 2019 16:22:17 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "[RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "From: Mike Palmiotto [mailto:mike.palmiotto@crunchydata.com]\r\n> Attached is a patch which attempts to solve a few problems:\r\n> \r\n> 1) Filtering out partitions flexibly based on the results of an external\r\n> function call (supplied by an extension).\r\n> 2) Filtering out partitions from pg_inherits based on the same function\r\n> call.\r\n> 3) Filtering out partitions from a partitioned table BEFORE the partition\r\n> is actually opened on-disk.\r\n\r\nWhat concrete problems would you expect this patch to solve? What kind of extensions do you imagine? I'd like to hear about the examples. For example, \"PostgreSQL 12 will not be able to filter out enough partitions when planning/executing SELECT ... WHERE ... statement. 
But an extension like this can extract just one partition early.\"\r\n\r\nWould this help the following issues with PostgreSQL 12?\r\n\r\n* UPDATE/DELETE planning takes time in proportion to the number of partitions, even when the actually accessed partition during query execution is only one.\r\n\r\n* Making a generic plan takes prohibitively long time (IIRC, about 12 seconds when the number of partitions is 1,000 or 8,000.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 26 Feb 2019 06:55:30 +0000", "msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Tue, Feb 26, 2019 at 06:55:30AM +0000, Tsunakawa, Takayuki wrote:\n> What concrete problems would you expect this patch to solve? What\n> kind of extensions do you imagine? I'd like to hear about the\n> examples. For example, \"PostgreSQL 12 will not be able to filter\n> out enough partitions when planning/executing SELECT ... WHERE\n> ... statement. But an extension like this can extract just one\n> partition early.\"\n\nIndeed. Hooks should be defined so as their use is as generic as\npossible depending on their context, particularly since there is a\nplanner hook..
It is also important to consider first if the existing\ncore code can be made better depending on the requirements, removing\nthe need for a hook at the end.\n--\nMichael", "msg_date": "Tue, 26 Feb 2019 16:37:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Tue, Feb 26, 2019 at 1:55 AM Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: Mike Palmiotto [mailto:mike.palmiotto@crunchydata.com]\n> > Attached is a patch which attempts to solve a few problems:\n> >\n> > 1) Filtering out partitions flexibly based on the results of an external\n> > function call (supplied by an extension).\n> > 2) Filtering out partitions from pg_inherits based on the same function\n> > call.\n> > 3) Filtering out partitions from a partitioned table BEFORE the partition\n> > is actually opened on-disk.\n>\n> What concrete problems would you expect this patch to solve? What kind of extensions do you imagine? I'd like to hear about the examples. For example, \"PostgreSQL 12 will not be able to filter out enough partitions when planning/executing SELECT ... WHERE ... statement. But an extension like this can extract just one partition early.\"\n\nMy only application of the patch thus far has been to apply an RLS\npolicy driven by the extension's results. 
For example:\n\nCREATE TABLE test.partpar\n(\n a int,\n b text DEFAULT (extension_default_bfield('test.partpar'::regclass::oid))\n) PARTITION BY LIST (extension_translate_bfield(b));\n\nCREATE POLICY filter_select on test.partpar for SELECT\nUSING (extension_filter_by_bfield(b));\n\nCREATE POLICY filter_select on test.partpar for INSERT\nWITH CHECK (extension_generate_insert_bfield('test.partpar'::regclass::oid)\n= b);\n\nCREATE POLICY filter_update on test.partpar for UPDATE\nUSING (extension_filter_by_bfield(b))\nWITH CHECK (extension_filter_by_bfield(b));\n\nCREATE POLICY filter_delete on test.partpar for DELETE\nUSING (extension_filter_by_bfield(b));\n\nThe function would filter based on some external criteria relating to\nthe username and the contents of the b column.\n\nThe desired effect would be to have `SELECT * from test.partpar;`\nreturn check only the partitions where username can see any row in the\ntable based on column b. This is applicable, for instance, when a\npartition of test.partpar (say test.partpar_b2) is given a label with\n`SECURITY LABEL on TABLE test.partpar_b2 IS 'foo';` which is exactly\nthe same as the b column for every row in said partition. Using this\nhook, we can simply check the table label and kick the entire\npartition out early on. 
This should greatly improve performance for\nthe case where you can enforce that the partition SECURITY LABEL and\nthe b column are the same.\n\n>\n> Would this help the following issues with PostgreSQL 12?\n>\n> * UPDATE/DELETE planning takes time in proportion to the number of partitions, even when the actually accessed partition during query execution is only one.\n>\n> * Making a generic plan takes prohibitively long time (IIRC, about 12 seconds when the number of partitions is 1,000 or 8,000.)\n\nIn theory, we'd be checking fewer items (the labels of the partitions,\ninstead of the b column for every row), so it may indeed help with\nperformance in these cases.\n\nAdmittedly, I haven't looked at either of these very closely. Do you\nhave any specific test cases I can try out on my end to verify?\n--\nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n", "msg_date": "Tue, 26 Feb 2019 13:06:31 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Tue, Feb 26, 2019 at 1:06 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Tue, Feb 26, 2019 at 1:55 AM Tsunakawa, Takayuki\n> <tsunakawa.takay@jp.fujitsu.com> wrote:\n> >\n> > From: Mike Palmiotto [mailto:mike.palmiotto@crunchydata.com]\n> > > Attached is a patch which attempts to solve a few problems:\n\nUpdated patch attached.\n\n> > > <snip>\n> > What concrete problems would you expect this patch to solve? What kind of extensions do you imagine? I'd like to hear about the examples. For example, \"PostgreSQL 12 will not be able to filter out enough partitions when planning/executing SELECT ... WHERE ... statement. But an extension like this can extract just one partition early.\"\n>\n> My only application of the patch thus far has been to apply an RLS\n> policy driven by the extension's results.
For example:\n>\n> CREATE TABLE test.partpar\n> (\n> a int,\n> b text DEFAULT (extension_default_bfield('test.partpar'::regclass::oid))\n> ) PARTITION BY LIST (extension_translate_bfield(b));\n>\n> CREATE POLICY filter_select on test.partpar for SELECT\n> USING (extension_filter_by_bfield(b));\n>\n> CREATE POLICY filter_select on test.partpar for INSERT\n> WITH CHECK (extension_generate_insert_bfield('test.partpar'::regclass::oid)\n> = b);\n>\n> CREATE POLICY filter_update on test.partpar for UPDATE\n> USING (extension_filter_by_bfield(b))\n> WITH CHECK (extension_filter_by_bfield(b));\n>\n> CREATE POLICY filter_delete on test.partpar for DELETE\n> USING (extension_filter_by_bfield(b));\n>\n> The function would filter based on some external criteria relating to\n> the username and the contents of the b column.\n>\n> The desired effect would be to have `SELECT * from test.partpar;`\n> return check only the partitions where username can see any row in the\n> table based on column b. This is applicable, for instance, when a\n> partition of test.partpar (say test.partpar_b2) is given a label with\n> `SECURITY LABEL on TABLE test.partpar_b2 IS 'foo';` which is exactly\n> the same as the b column for every row in said partition. Using this\n> hook, we can simply check the table label and kick the entire\n> partition out early on. 
This should greatly improve performance for\n> the case where you can enforce that the partition SECURITY LABEL and\n> the b column are the same.\n\nIs this explanation suitable, or is more information required?\n\nThanks,\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Wed, 27 Feb 2019 10:27:12 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On 2019-02-26 19:06, Mike Palmiotto wrote:\n> The desired effect would be to have `SELECT * from test.partpar;`\n> return check only the partitions where username can see any row in the\n> table based on column b. This is applicable, for instance, when a\n> partition of test.partpar (say test.partpar_b2) is given a label with\n> `SECURITY LABEL on TABLE test.partpar_b2 IS 'foo';` which is exactly\n> the same as the b column for every row in said partition. Using this\n> hook, we can simply check the table label and kick the entire\n> partition out early on. This should greatly improve performance for\n> the case where you can enforce that the partition SECURITY LABEL and\n> the b column are the same.\n\nTo rephrase this: You have a partitioned table, and you have a RLS\npolicy that hides certain rows, and you know based on your business\nlogic that under certain circumstances entire partitions will be hidden,\nso they don't need to be scanned. So you want a planner hook that would\nallow you to prune those partitions manually.\n\nThat sounds pretty hackish to me. We should give the planner and\nexecutor the appropriate information to make these decisions, like an\nadditional partition constraint. 
If this information is hidden in\nuser-defined functions in a way that cannot be reasoned about, who is\nenforcing these constraints and how do we know they are actually correct?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 27 Feb 2019 18:36:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Wed, Feb 27, 2019 at 12:36 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> <snip>\n> To rephrase this: You have a partitioned table, and you have a RLS\n> policy that hides certain rows, and you know based on your business\n> logic that under certain circumstances entire partitions will be hidden,\n> so they don't need to be scanned. So you want a planner hook that would\n> allow you to prune those partitions manually.\n>\n> That sounds pretty hackish to me. We should give the planner and\n> executor the appropriate information to make these decisions, like an\n> additional partition constraint.\n\nAre you thinking of a partition pruning step for FuncExpr's or\nsomething else? I was considering an implementation where FuncExpr's\nwere marked for execution-time pruning, but wanted to see if this\npatch got any traction first.\n\n> If this information is hidden in\n> user-defined functions in a way that cannot be reasoned about, who is\n> enforcing these constraints and how do we know they are actually correct?\n\nThe author of the extension which utilizes the hook would have to be\nsure they use the hook correctly. This is not a new or different\nconcept to any other existing hook. 
This hook in particular would be\nused by security extensions that have some understanding of the\nunderlying security model being implemented by RLS.\n\nThanks,\n\n--\nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n", "msg_date": "Thu, 28 Feb 2019 13:36:32 -0500", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "Hi Peter,\n\nOn 2/28/19 10:36 PM, Mike Palmiotto wrote:\n> On Wed, Feb 27, 2019 at 12:36 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> <snip>\n>> To rephrase this: You have a partitioned table, and you have a RLS\n>> policy that hides certain rows, and you know based on your business\n>> logic that under certain circumstances entire partitions will be hidden,\n>> so they don't need to be scanned. So you want a planner hook that would\n>> allow you to prune those partitions manually.\n>>\n>> That sounds pretty hackish to me. We should give the planner and\n>> executor the appropriate information to make these decisions, like an\n>> additional partition constraint.\n> \n> Are you thinking of a partition pruning step for FuncExpr's or\n> something else? I was considering an implementation where FuncExpr's\n> were marked for execution-time pruning, but wanted to see if this\n> patch got any traction first.\n> \n>> If this information is hidden in\n>> user-defined functions in a way that cannot be reasoned about, who is\n>> enforcing these constraints and how do we know they are actually correct?\n> \n> The author of the extension which utilizes the hook would have to be\n> sure they use the hook correctly. This is not a new or different\n> concept to any other existing hook. 
This hook in particular would be\n> used by security extensions that have some understanding of the\n> underlying security model being implemented by RLS.\n\nThoughts on Mike's response?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Wed, 20 Mar 2019 14:34:13 +0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "Hi,\n\nOn 2019-02-28 13:36:32 -0500, Mike Palmiotto wrote:\n> On Wed, Feb 27, 2019 at 12:36 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > <snip>\n> > To rephrase this: You have a partitioned table, and you have a RLS\n> > policy that hides certain rows, and you know based on your business\n> > logic that under certain circumstances entire partitions will be hidden,\n> > so they don't need to be scanned. So you want a planner hook that would\n> > allow you to prune those partitions manually.\n> >\n> > That sounds pretty hackish to me. We should give the planner and\n> > executor the appropriate information to make these decisions, like an\n> > additional partition constraint.\n> \n> Are you thinking of a partition pruning step for FuncExpr's or\n> something else? I was considering an implementation where FuncExpr's\n> were marked for execution-time pruning, but wanted to see if this\n> patch got any traction first.\n> \n> > If this information is hidden in\n> > user-defined functions in a way that cannot be reasoned about, who is\n> > enforcing these constraints and how do we know they are actually correct?\n> \n> The author of the extension which utilizes the hook would have to be\n> sure they use the hook correctly. This is not a new or different\n> concept to any other existing hook. 
This hook in particular would be\n> used by security extensions that have some understanding of the\n> underlying security model being implemented by RLS.\n\nI noticed that the CF entry for this patch is marked as targeting\nv12. Even if we had agreement on the design - which we pretty clearly\ndon't - I don't see that being realistic for a patch submitted a few\ndays before the final v12 CF. That really only should contain simple new\npatches, or, obviously, patches that have been longer in development.\n\nI've moved this to the next CF, and marked it as targeting v13.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Apr 2019 19:06:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Sat, Apr 6, 2019 at 3:06 PM Andres Freund <andres@anarazel.de> wrote:\n> I've moved this to the next CF, and marked it as targeting v13.\n\nHi Mike,\n\nCommifest 1 for PostgreSQL 13 is here. I was just going through the\nautomated build results for the 'fest and noticed that your patch\ncauses a segfault in the regression tests (possibly due to other\nchanges that have been made in master since February). 
You can see\nthe complete backtrace on the second link below, but it looks like\nthis is happening every time, so hopefully not hard to track down\nlocally.\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46412\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555380617\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2019 13:46:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Sun, Jul 7, 2019 at 9:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, Apr 6, 2019 at 3:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > I've moved this to the next CF, and marked it as targeting v13.\n>\n> Hi Mike,\n>\n> Commitfest 1 for PostgreSQL 13 is here. I was just going through the\n> automated build results for the 'fest and noticed that your patch\n> causes a segfault in the regression tests (possibly due to other\n> changes that have been made in master since February). You can see\n> the complete backtrace on the second link below, but it looks like\n> this is happening every time, so hopefully not hard to track down\n> locally.\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46412\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/555380617\n\n\nThanks, Thomas, I'll look into this today.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Mon, 8 Jul 2019 10:59:24 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Mon, Jul 8, 2019 at 10:59 AM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n>\n> On Sun, Jul 7, 2019 at 9:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>\n>> On Sat, Apr 6, 2019 at 3:06 PM Andres Freund <andres@anarazel.de> wrote:\n>> > I've moved this to the next CF, and marked it as targeting v13.\n>>\n>> Hi Mike,\n>>\n>> Commitfest 1 for PostgreSQL 13 is here. I was just going through the\n>> automated build results for the 'fest and noticed that your patch\n>> causes a segfault in the regression tests (possibly due to other\n>> changes that have been made in master since February). You can see\n>> the complete backtrace on the second link below, but it looks like\n>> this is happening every time, so hopefully not hard to track down\n>> locally.\n>>\n>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46412\n>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/555380617\n\nAttached you will find a patch (rebased on master) that passes all\ntests on my local CentOS 7 box.
Thanks again for the catch!\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Mon, 8 Jul 2019 23:30:53 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Tue, Jul 9, 2019 at 3:31 PM Mike Palmiotto\n<mike.palmiotto@crunchydata.com> wrote:\n> Attached you will find a patch (rebased on master) that passes all\n> tests on my local CentOS 7 box. Thanks again for the catch!\n\nHi Mike,\n\nHere are some comments on superficial aspects of the patch:\n\n+/* Custom partition child access hook. Provides further partition pruning given\n+ * child OID.\n+ */\n\nShould be like:\n\n/*\n * Multi-line comment...\n */\n\nWhy \"child\"? Don't you really mean \"Partition pruning hook. Provides\ncustom pruning given partition OID.\" or something?\n\n+typedef bool (*partitionChildAccess_hook_type) (Oid childOID);\n+PGDLLIMPORT partitionChildAccess_hook_type partitionChildAccess_hook;\n\nHmm, I wonder if this could better evoke the job that it's doing...\npartition_filter_hook?\npartition_access_filter_hook? partition_prune_hook?\n\n+/* Macro to use partitionChildAccess_hook. Handles NULL-checking. */\n\nIt's not a macro, it's a function.\n\n+static inline bool InvokePartitionChildAccessHook (Oid childOID)\n+{\n+ if (partitionChildAccess_hook && enable_partition_pruning && childOID)\n+ {\n\nNormally we write OidIsValid(childOID) rather than comparing with 0.\nI wonder if you should call the variable relId? 
Single line if\nbranches don't usually get curly braces.\n\n+ return (*partitionChildAccess_hook) (childOID);\n\nThe syntax we usually use for calling function pointers is just\npartitionChildAccess_hook(childOID).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jul 2019 00:49:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Thu, Jul 11, 2019 at 8:49 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> <snip>\n>\n> Here are some comments on superficial aspects of the patch:\n>\n> +/* Custom partition child access hook. Provides further partition pruning given\n> + * child OID.\n> + */\n>\n> Should be like:\n>\n> /*\n> * Multi-line comment...\n> */\n\nFixed in attached patch.\n\n>\n> Why \"child\"? Don't you really mean \"Partition pruning hook. Provides\n> custom pruning given partition OID.\" or something?\n>\n> +typedef bool (*partitionChildAccess_hook_type) (Oid childOID);\n> +PGDLLIMPORT partitionChildAccess_hook_type partitionChildAccess_hook;\n>\n> Hmm, I wonder if this could better evoke the job that it's doing...\n> partition_filter_hook?\n> partition_access_filter_hook? partition_prune_hook?\n\nEnded up going with partition_prune_hook. Good call.\n\n>\n> +/* Macro to use partitionChildAccess_hook. Handles NULL-checking. */\n>\n> It's not a macro, it's a function.\n\nCopy-pasta. Fixed.\n\n>\n> +static inline bool InvokePartitionChildAccessHook (Oid childOID)\n> +{\n> + if (partitionChildAccess_hook && enable_partition_pruning && childOID)\n> + {\n>\n> Normally we write OidIsValid(childOID) rather than comparing with 0.\n> I wonder if you should call the variable relId? 
Single line if\n> branches don't usually get curly braces.\n\nFixed.\n\n>\n> + return (*partitionChildAccess_hook) (childOID);\n>\n> The syntax we usually use for calling function pointers is just\n> partitionChildAccess_hook(childOID).\n\nFixed.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com", "msg_date": "Thu, 11 Jul 2019 16:54:26 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "Hello.\n\nI'm on Peter's side. Unlike other similar hooks, this hook is\nprovided just to introduce arbitrary inconsistency in partition\npruning.\n\n# By the way, InvokePartitionPruningHook seems useless if the\n# reason for it is to avoid duplicate if()'s . \n\nAdding check constraint on children works as far as the RLSish\nfunction is immutable. Do you have any concrete example or\npicture of what you want to achieve?\n\n\nBy the way, while considering this, I noticed the following table\ndefinition passes.\n\n> create table t (id serial, a text check (a = '' or a = CURRENT_USER::text));\n\nI don't think it is the right behavior.\n\n> grant all on t to public;\n> grant all on t_id_seq to public;\n> \\c postgres u1\n> insert into t(a) values ('u1');\n> \\c postgres u2\n> insert into t(a) values ('u2');\n> \\c postgres horiguti\n> insert into t(a) values ('horiguti');\n\n> select * from t;\n> id | a \n> ----+----------\n> 1 | u1\n> 2 | u2\n> 3 | horiguti\n\nBroken... 
The attached patch make parser reject that but I'm not\nsure how much it affects existing users.\n\n> =# create table t (id serial, a text check (a = '' or a = CURRENT_USER::text));\n> ERROR: mutable functions are not allowed in constraint expression\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 12 Jul 2019 15:05:27 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "Sorry for jumping into this thread pretty late. I have read the\nmessages on this thread a couple of times and...\n\nOn Fri, Jul 12, 2019 at 3:05 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I'm on Peter's side. Unlike other similar hooks, this hook is\n> provided just to introduce arbitrary inconsistency in partition\n> pruning.\n\n...I have questions about the patch as proposed, such as what Peter\nand Horiguchi-san might.\n\nWhy does this hook need to be specific to partitioning, that is, why\nis this a \"partition pruning\" hook? IOW, why does the functionality of\nthis hook apply only to partitions as opposed to *any* table? I may\nbe missing something, but the two things -- a table's partition\nconstraint and security properties (or any properties that the\nproposed hook's suppliers will provide) -- seem unrelated, so making\nthis specific to partitioning seems odd. If this was proposed as\napplying to any table, then it might be easier to choose the point\nfrom which hook will be called and there will be fewer such points to\nconsider. 
Needless to say, since the planner sees partitions as\ntables, the partitioning case will be covered too.\n\nLooking at the patch, it seems that the hook will be called *after*\nopening the partition (provided it survived partition pruning); I'm\nlooking at this hunk:\n\n@@ -1038,6 +1038,13 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,\n continue;\n }\n\n+ if (!InvokePartitionPruneHook(childRTE->relid))\n+ {\n+ /* Implement custom partition pruning filter*/\n+ set_dummy_rel_pathlist(childrel);\n+ continue;\n+ }\n\nset_append_rel_size() is called *after* partitions are opened and\ntheir RelOptInfos formed. So it's not following the design you\nintended whereby the hook will avoid uselessly opening partition\nfiles.\n\nAlso, I suspect you don't want to call the hook from here:\n\n@@ -92,6 +93,10 @@ find_inheritance_children(Oid parentrelId, LOCKMODE lockmode)\n while ((inheritsTuple = systable_getnext(scan)) != NULL)\n {\n inhrelid = ((Form_pg_inherits) GETSTRUCT(inheritsTuple))->inhrelid;\n+\n+ if (!InvokePartitionPruneHook(inhrelid))\n+ continue;\n+\n\n...as you might see unintended consequences, because\nfind_inheritance_children() is called from many places that wouldn't\nwant arbitrary extension code to drop some children from the list. For\nexample, it is called when adding a column to a parent table to find\nthe children that will need to get the same column added. You\nwouldn't want some children getting the column added while others not.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 12 Jul 2019 17:25:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" }, { "msg_contents": "On Fri, Jul 12, 2019 at 4:25 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Sorry for jumping into this thread pretty late. 
I have read the\n> messages on this thread a couple of times and...\n>\n> On Fri, Jul 12, 2019 at 3:05 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I'm on Peter's side. Unlike other similar hooks, this hook is\n> > provided just to introduce arbitrary inconsistency in partition\n> > pruning.\n>\n> ...I have questions about the patch as proposed, such as what Peter\n> and Horiguchi-san might.\n\nThank you for taking a look at this. I need a bit of time to research\nand affirm my previous assumptions, but will address all of these\npoints as soon as I can.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n\n", "msg_date": "Fri, 12 Jul 2019 14:34:05 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] [PATCH] Flexible \"partition pruning\" hook" } ]
[ { "msg_contents": "Hi, kirk-san.\n\n> From: Nagaura, Ryohei <nagaura.ryohei@jp.fujitsu.com>\n> This is my bad. I'll remake it.\n> Very sorry for the same mistake.\nI remade the patches and attached in this mail.\nIn socket_timeout patch, I replaced \"atoi\" to \"parse_int_param\" and inserted spaces just after some comma.\nThere are a few changes about documentation for the following reason:\n\n> From: Jamison, Kirk <k.jamison@jp.fujitsu.com>\n> Got the doc fix. I wonder if we need to document what effect the parameter does:\n> terminating the connection. How about:\nI also don't know, but...\n\n> Controls the number of seconds of client-server communication inactivity before\n> forcibly closing the connection in order to prevent client from infinite waiting for\n> individual socket read/write operations. This can be used both as a force global\n> query timeout and network problems detector, i.e. hardware failure and dead\n> connection. A value of zero (the default) turns this off.\n\"communication inactivity\" seems to be a little extreme.\nIf the communication layer is truly dead you will use keepalive.\nThis use case is when socket option is not available for some reason.\nSo it would be better \"terminating the connection\" in my thought.\nAnd...\n\n> Well, you may remove the \"i.e. hardware failure and dead connection\" if that's not\n> necessary.\nI don't think it is necessary because you can use this parameter other than that situations.\nNot \"i.e.\" but \"e.g.\" have a few chance to be documented.\n\nAbout TCP_USER_TIMEOUT patches, there are only miscellaneous changes: removing trailing spaces\nand making comments of parameters lower case as you pointed out.\n\nBest regards,\n---------------------\nRyohei Nagaura", "msg_date": "Tue, 26 Feb 2019 07:19:06 +0000", "msg_from": "\"Nagaura, Ryohei\" <nagaura.ryohei@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "inted RE: Timeout parameters" } ]
[ { "msg_contents": "pgbench has a strange restriction to only allow 10 arguments, which is too\nlow for some real world uses.\n\nSince MAX_ARGS is only used for an array of pointers and an array of\nintegers increasing this should not be a problem, so increase value to 255.\n\nWhile there, correct an off by one error in the error message when you hit\nthe limit. The highest argument id is MAX_ARGS - 1, but the max number of\narguments is MAX_ARGS.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 Feb 2019 09:41:02 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pgbench MAX_ARGS" }, { "msg_contents": "\nHello Simon,\n\n> pgbench has a strange restriction to only allow 10 arguments, which is too\n> low for some real world uses.\n>\n> Since MAX_ARGS is only used for an array of pointers and an array of\n> integers increasing this should not be a problem, so increase value to 255.\n\nFine with me on principle.\n\nPatch applies cleanly, compiles.\n\nHowever, 3 TAP tests fails on the \"too many argument\" test case, what a \nsurprise:-)\n\nProbably I would have chosen a smaller value, eg 32 or 64, but I have not \nseen your use case.\n\n> While there, correct an off by one error in the error message when you hit\n> the limit. The highest argument id is MAX_ARGS - 1, but the max number of\n> arguments is MAX_ARGS.\n\nArgh! Indeed.\n\nI notice that there is no documentation update, which just shows that \nindeed there are no documentation about the number of arguments, maybe the \npatch could add a sentence somewhere? There is no limit discussed in the \nPREPARE documentation, I tested up to 20. 
I'd sugggest to add something at \nthe end of the paragraph about variable substitution in the \"Custom \nScript\" section, eg \"A maximum of XX variable references can appear within \nan SQL command.\".\n\n-- \nFabien.\n\n", "msg_date": "Tue, 26 Feb 2019 13:19:50 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "On Tue, 26 Feb 2019 at 12:19, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Simon,\n>\n\nThanks for reviewing Fabien,\n\n\n> > pgbench has a strange restriction to only allow 10 arguments, which is\n> too\n> > low for some real world uses.\n> >\n> > Since MAX_ARGS is only used for an array of pointers and an array of\n> > integers increasing this should not be a problem, so increase value to\n> 255.\n>\n\n\n> Probably I would have chosen a smaller value, eg 32 or 64, but I have not\n> seen your use case.\n>\n\nI've put it as 256 args now.\n\nThe overhead of that is about 2kB, so not really an issue.\n\nPeople are using pgbench for real testing, so no need for setting it too\nlow.\n\nI notice that there is no documentation update, which just shows that\n> indeed there are no documentation about the number of arguments, maybe the\n> patch could add a sentence somewhere? There is no limit discussed in the\n> PREPARE documentation, I tested up to 20. 
I'd sugggest to add something at\n> the end of the paragraph about variable substitution in the \"Custom\n> Script\" section, eg \"A maximum of XX variable references can appear within\n> an SQL command.\".\n>\n\nAdded as you suggest.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 Feb 2019 12:57:14 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n\n> diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl\n> index ad15ba66ea..2e4957c09a 100644\n> --- a/src/bin/pgbench/t/001_pgbench_with_server.pl\n> +++ b/src/bin/pgbench/t/001_pgbench_with_server.pl\n> @@ -587,10 +587,19 @@ my @errors = (\n> }\n> \t],\n> \t[\n> -\t\t'sql too many args', 1, [qr{statement has too many arguments.*\\b9\\b}],\n> -\t\tq{-- MAX_ARGS=10 for prepared\n> +\t\t'sql too many args', 1, [qr{statement has too many arguments.*\\b256\\b}],\n> +\t\tq{-- MAX_ARGS=256 for prepared\n> \\set i 0\n> -SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);\n> +SELECT LEAST(\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, 
:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i,\n> +:i);\n> }\n> \t],\n\nInstead of that wall of :i's, would it not be clearer to use the\nrepetition operator?\n\n \t[\n-\t\t'sql too many args', 1, [qr{statement has too many arguments.*\\b9\\b}],\n-\t\tq{-- MAX_ARGS=10 for prepared\n+\t\t'sql too many args', 1, [qr{statement has too many arguments.*\\b256\\b}],\n+\t\tq{-- MAX_ARGS=256 for prepared\n \\set i 0\n-SELECT LEAST(:i, :i, :i, :i, :i, :i, :i, :i, :i, :i, :i);\n+SELECT LEAST(}.join(', ', (':i') x 257).q{);\n }\n \t],\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n", "msg_date": "Tue, 26 Feb 2019 14:02:39 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "Hi,\n\nOn 2019-02-26 12:57:14 +0000, Simon Riggs wrote:\n> On Tue, 26 Feb 2019 at 12:19, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I've put it as 256 args now.\n> \n> The overhead of that is about 2kB, so not really an issue.\n\nWhy not just allocate it dynamically? Seems weird to have all these\nMAX_ARGS, MAX_SCRIPTS ... commands. 
The eighties want their constants\nback ;)\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Tue, 26 Feb 2019 09:37:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "On Tue, 26 Feb 2019 at 17:38, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-02-26 12:57:14 +0000, Simon Riggs wrote:\n> > On Tue, 26 Feb 2019 at 12:19, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I've put it as 256 args now.\n> >\n> > The overhead of that is about 2kB, so not really an issue.\n>\n> Why not just allocate it dynamically? Seems weird to have all these\n> MAX_ARGS, MAX_SCRIPTS ... commands.\n\n\nFor me, its a few minutes work to correct a problem and report to the\ncommunity.\n\nDynamic allocation, run-time errors is all getting too time consuming for a\nsmall thing.\n\n\n> The eighties want their constants back ;)\n>\n\nMade me smile, thanks. ;-)\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Tue, 26 Feb 2019 at 17:38, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-02-26 12:57:14 +0000, Simon Riggs wrote:\n> On Tue, 26 Feb 2019 at 12:19, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I've put it as 256 args now.\n> \n> The overhead of that is about 2kB, so not really an issue.\n\nWhy not just allocate it dynamically? Seems weird to have all these\nMAX_ARGS, MAX_SCRIPTS ... commands.For me, its a few minutes work to correct a problem and report to the community.Dynamic allocation, run-time errors is all getting too time consuming for a small thing. The eighties want their constants back ;)Made me smile, thanks. 
;-)-- Simon Riggs                http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 Feb 2019 17:45:04 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> On Tue, 26 Feb 2019 at 17:38, Andres Freund <andres@anarazel.de> wrote:\n>> Why not just allocate it dynamically? Seems weird to have all these\n>> MAX_ARGS, MAX_SCRIPTS ... commands.\n\n> For me, its a few minutes work to correct a problem and report to the\n> community.\n> Dynamic allocation, run-time errors is all getting too time consuming for a\n> small thing.\n\nFWIW, I agree --- that's moving the goalposts further than seems\njustified.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 12:51:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "On 2019-02-26 12:51:23 -0500, Tom Lane wrote:\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > On Tue, 26 Feb 2019 at 17:38, Andres Freund <andres@anarazel.de> wrote:\n> >> Why not just allocate it dynamically? Seems weird to have all these\n> >> MAX_ARGS, MAX_SCRIPTS ... 
commands.\n> \n> > For me, its a few minutes work to correct a problem and report to the\n> > community.\n> > Dynamic allocation, run-time errors is all getting too time consuming for a\n> > small thing.\n> \n> FWIW, I agree --- that's moving the goalposts further than seems\n> justified.\n\nI'm fine with applying a patch to just adjust the constant, but I'd also\nappreciate somebody just being motivated by my message to remove the\nconstants ;)\n\n", "msg_date": "Tue, 26 Feb 2019 09:56:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "On Wed, 27 Feb 2019 at 01:57, Simon Riggs <simon@2ndquadrant.com> wrote:\n> I've put it as 256 args now.\n\nI had a look at this and I see you've added some docs to mention the\nnumber of parameters that are allowed; good.\n\n+ <application>pgbench</application> supports up to 256 variables in one\n+ statement.\n\nHowever, the code does not allow 256 variables as the documents claim.\nPer >= in:\n\n if (cmd->argc >= MAX_ARGS)\n {\n fprintf(stderr, \"statement has too many arguments (maximum is %d): %s\\n\",\n\nFor it to be 256 that would have to be > MAX_ARGS.\n\nI also don't agree with this change:\n\n- MAX_ARGS - 1, cmd->lines.data);\n+ MAX_ARGS, cmd->lines.data);\n\nThe 0th element of the argv array was for the sql, per:\n\ncmd->argv[0] = sql;\n\nthen the 9 others were for the variables, so the MAX_ARGS - 1 was\ncorrect originally. I think some comments in the area to explain the\n0th is for the sql would be a good idea too, that might stop any\nconfusion in the future. 
I see that's documented in the struct header\ncomment, but maybe worth a small note around that error message just\nto confirm the - 1 is not a mistake, and neither is the >= MAX_ARGS.\n\nProbably it's fine to define MAX_ARGS to 256 then put back the\nMAX_ARGS - 1 code so that we complain if we get more than 255....\nunless 256 is really needed, of course, in which case MAX_ARGS will\nneed to be 257.\n\nThe test also seems to test that 256 variables in a statement gives an\nerror. That contradicts the documents that have been added, which say\n256 is the maximum allowed.\n\nSetting to WoA\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 1 Mar 2019 02:20:26 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n\n> On Wed, 27 Feb 2019 at 01:57, Simon Riggs <simon@2ndquadrant.com> wrote:\n>> I've put it as 256 args now.\n>\n> I had a look at this and I see you've added some docs to mention the\n> number of parameters that are allowed; good.\n>\n> + <application>pgbench</application> supports up to 256 variables in one\n> + statement.\n>\n> However, the code does not allow 256 variables as the documents claim.\n> Per >= in:\n>\n> if (cmd->argc >= MAX_ARGS)\n> {\n> fprintf(stderr, \"statement has too many arguments (maximum is %d): %s\\n\",\n>\n> For it to be 256 that would have to be > MAX_ARGS.\n\nWhich would overflow 'char *argv[MAX_ARGS];'.\n\n> I also don't agree with this change:\n>\n> - MAX_ARGS - 1, cmd->lines.data);\n> + MAX_ARGS, cmd->lines.data);\n>\n> The 0th element of the argv array was for the sql, per:\n>\n> cmd->argv[0] = sql;\n>\n> then the 9 others were for the variables, so the MAX_ARGS - 1 was\n> correct originally.\n\nThe same goes for backslash commands, which use argv[] for each argument\nword in the comand, and 
argv[0] for the command word itself.\n\n> I think some comments in the area to explain the 0th is for the sql\n> would be a good idea too, that might stop any confusion in the\n> future. I see that's documented in the struct header comment, but\n> maybe worth a small note around that error message just to confirm the\n> - 1 is not a mistake, and neither is the >= MAX_ARGS.\n\nI have done this in the updated version of the patch, attached.\n\n> Probably it's fine to define MAX_ARGS to 256 then put back the\n> MAX_ARGS - 1 code so that we complain if we get more than 255....\n> unless 256 is really needed, of course, in which case MAX_ARGS will\n> need to be 257.\n\nI've kept it at 256, and adjusted the docs to say 255.\n\n> The test also seems to test that 256 variables in a statement gives an\n> error. That contradicts the documents that have been added, which say\n> 256 is the maximum allowed.\n\nI've adjusted the test (and the \\shell test) to check for 256 variables\n(arguments) exactly, and manually verified that it doesn't error on 255.\n\n> Setting to WoA\n\nSetting back to NR.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law", "msg_date": "Sun, 10 Mar 2019 23:37:30 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "On Mon, 11 Mar 2019 at 12:37, Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I think some comments in the area to explain the 0th is for the sql\n> > would be a good idea too, that might stop any confusion in the\n> > future. 
I see that's documented in the struct header comment, but\n> > maybe worth a small note around that error message just to confirm the\n> > - 1 is not a mistake, and neither is the >= MAX_ARGS.\n>\n> I have done this in the updated version of the patch, attached.\n\n> Setting back to NR.\n\nThe patch looks good to me. I'm happy for it to be marked as ready for\ncommitter. Fabien, do you want to have another look?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Mon, 11 Mar 2019 23:07:25 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "\nOn 3/11/19 6:07 AM, David Rowley wrote:\n> On Mon, 11 Mar 2019 at 12:37, Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> David Rowley <david.rowley@2ndquadrant.com> writes:\n>>> I think some comments in the area to explain the 0th is for the sql\n>>> would be a good idea too, that might stop any confusion in the\n>>> future. I see that's documented in the struct header comment, but\n>>> maybe worth a small note around that error message just to confirm the\n>>> - 1 is not a mistake, and neither is the >= MAX_ARGS.\n>> I have done this in the updated version of the patch, attached.\n>> Setting back to NR.\n> The patch looks good to me. I'm happy for it to be marked as ready for\n> committer. Fabien, do you want to have another look?\n>\n\n\n\nI think we've spent enough time on this. 
Committed with minor changes.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 Mar 2019 11:59:50 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n\n> I think we've spent enough time on this. Committed with minor changes.\n\nThanks for committing it. However, I can't see it in git. Did you forget\nto push?\n\n> cheers\n>\n>\n> andrew\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n", "msg_date": "Mon, 11 Mar 2019 17:04:03 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" }, { "msg_contents": "\nOn 3/11/19 1:04 PM, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>\n>> I think we've spent enough time on this. Committed with minor changes.\n> Thanks for committing it. However, I can't see it in git. Did you forget\n> to push?\n>\n>\n\n\nOoops, yes, done now.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 Mar 2019 13:20:21 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgbench MAX_ARGS" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile working on [1], I noticed $Subject, that is:\n\n /*\n * If we have grouping or aggregation to do, the topmost scan/join\n * plan node must emit what the grouping step wants; otherwise, it\n * should emit grouping_target.\n */\n have_grouping = (parse->groupClause || parse->groupingSets ||\n parse->hasAggs || root->hasHavingQual);\n if (have_grouping)\n {\n scanjoin_target = make_group_input_target(root, final_target);\n--> scanjoin_target_parallel_safe =\n is_parallel_safe(root, (Node *) grouping_target->exprs);\n }\n else\n {\n scanjoin_target = grouping_target;\n scanjoin_target_parallel_safe = grouping_target_parallel_safe;\n }\n\nThe parallel safety of the final scan/join target is determined from the\ngrouping target, not that target, which would cause to generate inferior\nparallel plans as shown below:\n\npgbench=# explain verbose select aid+bid, sum(abalance), random() from\npgbench_accounts group by aid+bid;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=137403.01..159903.01 rows=1000000 width=20)\n Output: ((aid + bid)), sum(abalance), random()\n Group Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Sort (cost=137403.01..139903.01 rows=1000000 width=8)\n Output: ((aid + bid)), abalance\n Sort Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Gather (cost=10.00..24070.67 rows=1000000 width=8)\n Output: (aid + bid), abalance\n Workers Planned: 2\n -> Parallel Seq Scan on public.pgbench_accounts\n(cost=0.00..20560.67 rows=416667 width=12)\n Output: aid, bid, abalance\n(11 rows)\n\nThe final scan/join target {(aid + bid), abalance} is definitely\nparallel safe, but the target evaluation isn't parallelized across\nworkers, which is not good. 
Attached is a patch for fixing this.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://commitfest.postgresql.org/22/1950/", "msg_date": "Tue, 26 Feb 2019 21:25:34 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/02/26 21:25), Etsuro Fujita wrote:\n> While working on [1], I noticed $Subject, that is:\n> \n> /*\n> * If we have grouping or aggregation to do, the topmost scan/join\n> * plan node must emit what the grouping step wants; otherwise, it\n> * should emit grouping_target.\n> */\n> have_grouping = (parse->groupClause || parse->groupingSets ||\n> parse->hasAggs || root->hasHavingQual);\n> if (have_grouping)\n> {\n> scanjoin_target = make_group_input_target(root, final_target);\n> --> scanjoin_target_parallel_safe =\n> is_parallel_safe(root, (Node *) grouping_target->exprs);\n> }\n> else\n> {\n> scanjoin_target = grouping_target;\n> scanjoin_target_parallel_safe = grouping_target_parallel_safe;\n> }\n> \n> The parallel safety of the final scan/join target is determined from the\n> grouping target, not that target, which would cause to generate inferior\n> parallel plans as shown below:\n> \n> pgbench=# explain verbose select aid+bid, sum(abalance), random() from\n> pgbench_accounts group by aid+bid;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=137403.01..159903.01 rows=1000000 width=20)\n> Output: ((aid + bid)), sum(abalance), random()\n> Group Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n> -> Sort (cost=137403.01..139903.01 rows=1000000 width=8)\n> Output: ((aid + bid)), abalance\n> Sort Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n> -> Gather (cost=10.00..24070.67 rows=1000000 width=8)\n> Output: (aid + bid), abalance\n> Workers Planned: 2\n> -> Parallel Seq Scan on 
public.pgbench_accounts\n> (cost=0.00..20560.67 rows=416667 width=12)\n> Output: aid, bid, abalance\n> (11 rows)\n> \n> The final scan/join target {(aid + bid), abalance} is definitely\n> parallel safe, but the target evaluation isn't parallelized across\n> workers, which is not good. Attached is a patch for fixing this.\n\nI added this to the upcoming commitfest.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 27 Feb 2019 18:55:37 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "On Tue, Feb 26, 2019 at 7:26 AM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> The parallel safety of the final scan/join target is determined from the\n> grouping target, not that target, which [ is wrong ]\n\nOOPS. That's pretty embarrassing.\n\nYour patch looks right to me. I will now go look for a bag to put over my head.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 27 Feb 2019 10:52:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/02/28 0:52), Robert Haas wrote:\n> On Tue, Feb 26, 2019 at 7:26 AM Etsuro Fujita\n> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> The parallel safety of the final scan/join target is determined from the\n>> grouping target, not that target, which [ is wrong ]\n\n> Your patch looks right to me.\n\nI think it would be better for you to take this one. 
Could you?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 01 Mar 2019 19:26:01 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n> (2019/02/28 0:52), Robert Haas wrote:\n>> On Tue, Feb 26, 2019 at 7:26 AM Etsuro Fujita\n>> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>>> The parallel safety of the final scan/join target is determined from the\n>>> grouping target, not that target, which [ is wrong ]\n\n>> Your patch looks right to me.\n\n> I think it would be better for you to take this one. Could you?\n\nI concur with Robert that this is a brown-paper-bag-grade bug in\n960df2a97. 
Please feel free to push (and don't forget to back-patch).\n\nThe only really interesting question is whether it's worth adding\na regression test for. I doubt it; the issue seems much too narrow.\nUsually the point of a regression test is to prevent re-introduction\nof the same/similar bug, but what class of bugs would you argue\nwe'd be protecting against?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 08 Mar 2019 15:36:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/03/09 5:36), Tom Lane wrote:\n> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> writes:\n>> (2019/02/28 0:52), Robert Haas wrote:\n>>> On Tue, Feb 26, 2019 at 7:26 AM Etsuro Fujita\n>>> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>>>> The parallel safety of the final scan/join target is determined from the\n>>>> grouping target, not that target, which [ is wrong ]\n>\n>>> Your patch looks right to me.\n>\n>> I think it would be better for you to take this one. Could you?\n>\n> I concur with Robert that this is a brown-paper-bag-grade bug in\n> 960df2a97. Please feel free to push (and don't forget to back-patch).\n\nOK, will do.\n\n> The only really interesting question is whether it's worth adding\n> a regression test for. I doubt it; the issue seems much too narrow.\n> Usually the point of a regression test is to prevent re-introduction\n> of the same/similar bug, but what class of bugs would you argue\n> we'd be protecting against?\n\nPlan degradation; without the fix, we would have this on data created by \n\"pgbench -i -s 10 postgres\", as shown in [1]:\n\npostgres=# set parallel_setup_cost = 10;\npostgres=# set parallel_tuple_cost = 0.001;\n\npostgres=# explain verbose select aid+bid, sum(abalance), random() from \npgbench_accounts group by aid+bid;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=137403.01..159903.01 rows=1000000 width=20)\n Output: ((aid + bid)), sum(abalance), random()\n Group Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Sort (cost=137403.01..139903.01 rows=1000000 width=8)\n Output: ((aid + bid)), abalance\n Sort Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Gather (cost=10.00..24070.67 rows=1000000 width=8)\n--> Output: (aid + bid), abalance\n Workers Planned: 2\n -> Parallel Seq Scan on public.pgbench_accounts \n(cost=0.00..20560.67 rows=416667 width=12)\n Output: aid, bid, abalance\n(11 rows)\n\nThe final scan/join target {(aid + bid), abalance} would be parallel \nsafe, but in the plan the target is not parallelized across workers. \nThe reason for that is because the parallel-safety of the target is \nassessed incorrectly using the grouping target {((aid + bid)), \nsum(abalance), random()}, which would not be parallel safe. 
By the fix \nwe would have this:\n\npostgres=# explain verbose select aid+bid, sum(abalance), random() from \npgbench_accounts group by aid+bid;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=135944.68..158444.68 rows=1000000 width=20)\n Output: ((aid + bid)), sum(abalance), random()\n Group Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Sort (cost=135944.68..138444.68 rows=1000000 width=8)\n Output: ((aid + bid)), abalance\n Sort Key: ((pgbench_accounts.aid + pgbench_accounts.bid))\n -> Gather (cost=10.00..22612.33 rows=1000000 width=8)\n Output: ((aid + bid)), abalance\n Workers Planned: 2\n -> Parallel Seq Scan on public.pgbench_accounts \n(cost=0.00..21602.33 rows=416667 width=8)\n--> Output: (aid + bid), abalance\n(11 rows)\n\nNote that the final scan/join target is parallelized in the plan.\n\nThis would only affect plan quality a little bit, so I don't think we \nneed a regression test for this, either, but the fix might destabilize \nexisting plan choices, so we should apply it to HEAD only?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 11 Mar 2019 12:56:21 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n>>> The parallel safety of the final scan/join target is determined from the\n>>> grouping target, not that target, which [ is wrong ]\n\n> This would only affect plan quality a little bit, so I don't think we \n> need a regression test for this, either, but the fix might destabilize \n> existing plan choices, so we should apply it to HEAD only?\n\nIs that the only possible outcome? 
Per Robert's summary quoted above,\nit seems like it might be possible for the code to decide that the final\nscan/join target to be parallel safe when it is not, leading to outright\nwrong answers or query failures. If the wrong answer can only be wrong\nin the safe direction, it's not very clear why.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 11 Mar 2019 00:06:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/03/11 13:06), Tom Lane wrote:\n> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> writes:\n>>>> The parallel safety of the final scan/join target is determined from the\n>>>> grouping target, not that target, which [ is wrong ]\n>\n>> This would only affect plan quality a little bit, so I don't think we\n>> need a regression test for this, either, but the fix might destabilize\n>> existing plan choices, so we should apply it to HEAD only?\n>\n> Is that the only possible outcome? Per Robert's summary quoted above,\n> it seems like it might be possible for the code to decide that the final\n> scan/join target to be parallel safe when it is not, leading to outright\n> wrong answers or query failures.\n\nMaybe I'm missing something, but I think that if the final scan/join \ntarget is not parallel-safe, then the grouping target would not be \nparallel-safe either, by the construction of the two targets, so I don't \nthink that we have such a correctness issue.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 11 Mar 2019 14:08:51 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n> (2019/03/11 13:06), Tom Lane wrote:\n>> Is that the only possible outcome? 
Per Robert's summary quoted above,\n>> it seems like it might be possible for the code to decide that the final\n>> scan/join target to be parallel safe when it is not, leading to outright\n>> wrong answers or query failures.\n\n> Maybe I'm missing something, but I think that if the final scan/join \n> target is not parallel-safe, then the grouping target would not be \n> parallel-safe either, by the construction of the two targets, so I don't \n> think that we have such a correctness issue.\n\nSeems to me it's the other way around: the final target would include\nall functions invoked in the grouping target plus maybe some more.\nSo a non-parallel-safe grouping target implies a non-parallel-safe\nfinal target, but not vice versa.\n\nPossibly the summary had it backwards and the actual code bug is\ninferring things in the safe direction, but I'm too tired to double\ncheck that right now ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 11 Mar 2019 01:14:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/03/11 14:14), Tom Lane wrote:\n> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> writes:\n>> (2019/03/11 13:06), Tom Lane wrote:\n>>> Is that the only possible outcome? 
Per Robert's summary quoted above,\n>>> it seems like it might be possible for the code to decide that the final\n>>> scan/join target to be parallel safe when it is not, leading to outright\n>>> wrong answers or query failures.\n>\n>> Maybe I'm missing something, but I think that if the final scan/join\n>> target is not parallel-safe, then the grouping target would not be\n>> parallel-safe either, by the construction of the two targets, so I don't\n>> think that we have such a correctness issue.\n>\n> Seems to me it's the other way around: the final target would include\n> all functions invoked in the grouping target plus maybe some more.\n> So a non-parallel-safe grouping target implies a non-parallel-safe\n> final target, but not vice versa.\n\nI mean the final *scan/join* target, not the final target.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 11 Mar 2019 14:29:50 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n> (2019/03/11 14:14), Tom Lane wrote:\n>> Seems to me it's the other way around: the final target would include\n>> all functions invoked in the grouping target plus maybe some more.\n>> So a non-parallel-safe grouping target implies a non-parallel-safe\n>> final target, but not vice versa.\n\n> I mean the final *scan/join* target, not the final target.\n\nOh, of course. Yup, I was too tired last night :-(. So this is\njust a plan-quality problem not a wrong-answer problem.\n\nHowever, I'd still argue for back-patching into v11, on the grounds\nthat this is a regression from v10. 
The example you just gave does\nproduce the desired plan in v10, and I think it's more likely that\npeople would complain about a regression from v10 than that they'd\nbe unhappy because we changed it between 11.2 and 11.3.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 11 Mar 2019 10:46:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" }, { "msg_contents": "(2019/03/11 23:46), Tom Lane wrote:\n> So this is\n> just a plan-quality problem not a wrong-answer problem.\n>\n> However, I'd still argue for back-patching into v11, on the grounds\n> that this is a regression from v10. The example you just gave does\n> produce the desired plan in v10, and I think it's more likely that\n> people would complain about a regression from v10 than that they'd\n> be unhappy because we changed it between 11.2 and 11.3.\n\nAgreed. I committed the patch to v11 and HEAD. Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 12 Mar 2019 16:44:20 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Oddity with parallel safety test for scan/join target in\n grouping_planner" } ]
[ { "msg_contents": "Hi hackers,\n\nI am trying to figure out current cursors/portals management and life \ncycle in Postgres. There are two if conditions for autoHeld portals:\n\n- 'if (portal->autoHeld)' inside AtAbort_Portals at portalmem.c:802;\n- '|| portal->autoHeld' inside AtCleanup_Portals at portalmem.c:871.\n\nTheir removal does not seem to affect anything, make check-world is \npassed. I have tried configure --with-perl/--with-python, which should \nbe a case for autoHeld portals, but nothing changed.\n\nFor me it seems to be expectable, since autoHeld flag is always set \nalong with createSubid=InvalidSubTransactionId inside HoldPinnedPortals, \nso the only one check 'createSubid == InvalidSubTransactionId' should be \nenough. However, comment sections are rather misleading:\n\n(1) portal.h:126 confirms my guess 'If the portal is held over from a \nprevious transaction, both subxids are InvalidSubTransactionId';\n\n(2) while portalmem.c:797 states 'This is similar to the case of a \ncursor from a previous transaction, but it could also be that the cursor \nwas auto-held in this transaction, so it wants to live on'.\n\nI have tried, but could not build an example of valid query for the case \ndescribed in (2), and it is definitely absent in regression tests.\n\nAm I missing something?\n\nAdded Peter to cc, since he is a commiter of 056a5a3, where autoHeld has \nbeen introduced. Maybe it will be easier for him to recall the context. \nAnyway, sorry for disturb if this question is actually trivial.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Tue, 26 Feb 2019 16:17:11 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Probably misleading comments or lack of tests in autoHeld portals\n management" } ]
[ { "msg_contents": "Since my commits 9e138a401 et al on Saturday, buildfarm members\nblobfish, brotula, and wunderpus have been showing core dumps\nin the ecpg preprocessor. This seemed inexplicable given what\nthe commits changed, and even more so seeing that only HEAD is\nfailing, while the change was back-patched into all branches.\n\nMark Wong and I poked into this off-list, and what we find is that\nthis seems to be a compiler bug. Those animals are all running\nnearly the same version of clang (3.8.x / ppc64le). Looking into\nthe assembly code for preproc.y, the crash is occurring at a branch\nthat is supposed to jump forward exactly 32768 bytes, but according\nto gdb's disassembler it's jumping backwards exactly -32768 bytes,\ninto invalid memory. It will come as no surprise to hear that the\nbranch displacement field in PPC conditional branches is 16 bits\nwide, so that positive 32768 doesn't fit but negative 32768 does.\nEvidently what is happening is that either the compiler or the\nassembler is failing to detect the edge-case field overflow and\nswitch to different coding. So the apparent dependency on 9e138a401\nis because that happened to insert exactly the right number of\ninstructions in-between to trigger this scenario. It's pure luck we\ndidn't trip over it before, although none of those buildfarm animals\nhave been around for all that long.\n\nMoral: don't use clang 3.8.x on ppc64. I think Mark is going\nto upgrade those animals to some more recent compiler version.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 13:25:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "A note about recent ecpg buildfarm failures" }, { "msg_contents": "On Tue, Feb 26, 2019 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Since my commits 9e138a401 et al on Saturday, buildfarm members\n> blobfish, brotula, and wunderpus have been showing core dumps\n> in the ecpg preprocessor. This seemed inexplicable given what\n> the commits changed, and even more so seeing that only HEAD is\n> failing, while the change was back-patched into all branches.\n>\n> Mark Wong and I poked into this off-list, and what we find is that\n> this seems to be a compiler bug. Those animals are all running\n> nearly the same version of clang (3.8.x / ppc64le). Looking into\n> the assembly code for preproc.y, the crash is occurring at a branch\n> that is supposed to jump forward exactly 32768 bytes, but according\n> to gdb's disassembler it's jumping backwards exactly -32768 bytes,\n> into invalid memory. It will come as no surprise to hear that the\n> branch displacement field in PPC conditional branches is 16 bits\n> wide, so that positive 32768 doesn't fit but negative 32768 does.\n> Evidently what is happening is that either the compiler or the\n> assembler is failing to detect the edge-case field overflow and\n> switch to different coding. So the apparent dependency on 9e138a401\n> is because that happened to insert exactly the right number of\n> instructions in-between to trigger this scenario. It's pure luck we\n> didn't trip over it before, although none of those buildfarm animals\n> have been around for all that long.\n>\n> Moral: don't use clang 3.8.x on ppc64. I think Mark is going\n> to upgrade those animals to some more recent compiler version.\n\nWow, that's some pretty impressive debugging!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 26 Feb 2019 14:05:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A note about recent ecpg buildfarm failures" }, { "msg_contents": "On Tue, Feb 26, 2019 at 01:25:29PM -0500, Tom Lane wrote:\n> Since my commits 9e138a401 et al on Saturday, buildfarm members\n> blobfish, brotula, and wunderpus have been showing core dumps\n> in the ecpg preprocessor. This seemed inexplicable given what\n> the commits changed, and even more so seeing that only HEAD is\n> failing, while the change was back-patched into all branches.\n> \n> Mark Wong and I poked into this off-list, and what we find is that\n> this seems to be a compiler bug. Those animals are all running\n> nearly the same version of clang (3.8.x / ppc64le). Looking into\n> the assembly code for preproc.y, the crash is occurring at a branch\n> that is supposed to jump forward exactly 32768 bytes, but according\n> to gdb's disassembler it's jumping backwards exactly -32768 bytes,\n> into invalid memory. It will come as no surprise to hear that the\n> branch displacement field in PPC conditional branches is 16 bits\n> wide, so that positive 32768 doesn't fit but negative 32768 does.\n> Evidently what is happening is that either the compiler or the\n> assembler is failing to detect the edge-case field overflow and\n> switch to different coding. So the apparent dependency on 9e138a401\n> is because that happened to insert exactly the right number of\n> instructions in-between to trigger this scenario. It's pure luck we\n> didn't trip over it before, although none of those buildfarm animals\n> have been around for all that long.\n> \n> Moral: don't use clang 3.8.x on ppc64. I think Mark is going\n> to upgrade those animals to some more recent compiler version.\n\nI've tried clang 3.9 and 4.0 by hand and they seem to be ok. These were\nthe other two readily available versions on Debian stretch.\n\nI'll stop those other clang-3.8 animals...\n\nRegards,\nMark\n\n--\nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n", "msg_date": "Wed, 27 Feb 2019 19:21:43 -0800", "msg_from": "Mark Wong <mark@2ndQuadrant.com>", "msg_from_op": false, "msg_subject": "Re: A note about recent ecpg buildfarm failures" } ]
[ { "msg_contents": "Hi!\n\nI think Bitmap Index Scan should take advantage of B-tree INCLUDE columns, which it doesn’t at the moment (tested on master as of yesterday).\n\nConsider this (find the setup at the bottom of this mail).\n\nCREATE INDEX idx ON tbl (a, b) INCLUDE (c);\n\nEXPLAIN (analyze, buffers)\nSELECT *\n FROM tbl\n WHERE a = 1\n AND c = 1;\n\nThe following plan could be produced:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.027..0.029 rows=1 loops=1)\n Recheck Cond: (a = 1)\n Filter: (c = 1)\n Rows Removed by Filter: 7\n Heap Blocks: exact=2\n Buffers: shared hit=2 read=1\n -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.018..0.018 rows=8 loops=1)\n Index Cond: (a = 1)\n Buffers: shared read=1\n Planning Time: 0.615 ms\n Execution Time: 0.060 ms\n\n\nThe Bitmap Index Scan does not filter on column C, which is INCLUDEd in the index leaf nodes.\n\nInstead, the Bitmap Heap Scan fetches unnecessary blocks and then applies the filter.\n\nI would expect a similar execution as with this index:\n\nDROP INDEX idx;\nCREATE INDEX idx ON tbl (a, b, c);\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.021..0.021 rows=1 loops=1)\n Recheck Cond: ((a = 1) AND (c = 1))\n Heap Blocks: exact=1\n Buffers: shared hit=1 read=1\n -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: ((a = 1) AND (c = 1))\n Buffers: shared read=1\n Planning Time: 0.123 ms\n Execution Time: 0.037 ms\n\n(As a side node: I also dislike it how Bitmap Index Scan mixes search conditions and filters in “Index Cond”)\n\nDue to the not-filtered column B in the index, the use of column C is here pretty much the same as it is for the first index, which has C in INCLUDE.\n\nAm I missing anything or is it just an oversight because INCLUDE was primarily done for Index Only Scan?\n\nAs a background, here is the use case I see for this scenario:\nFilters that can not be used for searching the tree can be put in INCLUDE instead of the key-part of the index even if there is no intend to run the query as Index Only Scan (e.g. SELECT *). Columns in INCLUDE can be used for filtering (works fine for Index Only Scan, btw).\n\nThe most prominent example are post-fix or in-fix LIKE ‘%term%’ filters. The benefit of putting such columns in INCLUDE is that it is clearly documented that the index column is neither used for searching in the tree nor for ordering. It is either used for filtering, or for fetching. Knowing this makes it easier to extend existing index.\n\nImagine you have this query to optimize:\n\nSELECT *\n FROM …\n WHERE a = ?\n AND b = ?\n ORDER BY ts DESC\n LIMIT 10\n\nAnd find this existing index on that table:\n\nCREATE INDEX … ON … (a, b) INCLUDE (c);\n\nIn this case it is a rather safe option to replace the index with the following:\n\nCREATE INDEX … ON … (a, b, ts) INCLUDE (c);\n\nOn the other hand, if the original index had all column in the key part, changing it to (A, B, TS, C) is a rather dangerous option.\n\n-markus\n\n— Setup for the demo\n\n-- demo table: 4 rows per block\n-- the row if interest is the first one, the others spill to a second block so the problem can also seen in “Buffers\"\n\nCREATE TABLE tbl (a INTEGER, b INTEGER, c INTEGER, junk CHAR(1900));\n\nINSERT INTO tbl VALUES (1, 1, 1, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk'),\n (1, 1, 9, 'junk');\n\n-- prevent unwanted plans for demo of Bitmap Index+Heap Scan\nSET ENABLE_INDEXSCAN = OFF;\nSET ENABLE_SEQSCAN = OFF;\n\n", "msg_date": "Tue, 26 Feb 2019 21:07:01 +0100", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": true, "msg_subject": "Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "Markus Winand <markus.winand@winand.at> writes:\n> I think Bitmap Index Scan should take advantage of B-tree INCLUDE columns, which it doesn’t at the moment (tested on master as of yesterday).\n\nRegular index scans don't do what you're imagining either (i.e., check\nfilter conditions in advance of visiting the heap). There's a roadblock\nto implementing such behavior, which is that we might end up applying\nfilter expressions to dead rows. That could make users unhappy.\nFor example, given a filter condition like \"1.0/c > 0.1\", people\nwould complain if it still got zero-divide failures even after they'd\ndeleted all rows with c=0 from their table.\n\nGenerally speaking, we expect indexable operators not to throw\nerrors on any input values, which is why this problem isn't serious\nfor the index conditions proper. But we can't make that assumption\nfor arbitrary filter conditions.\n\n> (As a side node: I also dislike it how Bitmap Index Scan mixes search conditions and filters in “Index Cond”)\n\nWhat do you think is being mixed exactly?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 26 Feb 2019 18:22:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "Hi,\n\nOn 2019-02-26 18:22:41 -0500, Tom Lane wrote:\n> Markus Winand <markus.winand@winand.at> writes:\n> > I think Bitmap Index Scan should take advantage of B-tree INCLUDE columns, which it doesn’t at the moment (tested on master as of yesterday).\n> \n> Regular index scans don't do what you're imagining either (i.e., check\n> filter conditions in advance of visiting the heap). There's a roadblock\n> to implementing such behavior, which is that we might end up applying\n> filter expressions to dead rows. That could make users unhappy.\n> For example, given a filter condition like \"1.0/c > 0.1\", people\n> would complain if it still got zero-divide failures even after they'd\n> deleted all rows with c=0 from their table.\n> \n> Generally speaking, we expect indexable operators not to throw\n> errors on any input values, which is why this problem isn't serious\n> for the index conditions proper. But we can't make that assumption\n> for arbitrary filter conditions.\n\nWe could probably work around that, by only doing such pre-checks on\nindex tuples only for tuples known to be visible according to the\nVM. Would quite possibly be too hard to understand for users\nthough. Also not sure if it'd actually perform that well due to the\nadded random IO, as we'd have to do such checks before entering a tuple\ninto the bitmap (as obviously we don't have a tuple afterwards).\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Tue, 26 Feb 2019 15:29:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "On Tue, Feb 26, 2019 at 09:07:01PM +0100, Markus Winand wrote:\n> CREATE INDEX idx ON tbl (a, b, c);\n> Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.021..0.021 rows=1 loops=1)\n> Recheck Cond: ((a = 1) AND (c = 1))\n> -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: ((a = 1) AND (c = 1))\n> \n> (As a side node: I also dislike it how Bitmap Index Scan mixes search conditions and filters in “Index Cond”)\n\nI don't think it's mixing them; it's using index scan on leading *and*\nnonleading column. That's possible even if frequently not efficient.\n\nJustin\n\n", "msg_date": "Tue, 26 Feb 2019 19:00:59 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "\n\n> On 27.02.2019, at 00:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Markus Winand <markus.winand@winand.at> writes:\n>> I think Bitmap Index Scan should take advantage of B-tree INCLUDE columns, which it doesn’t at the moment (tested on master as of yesterday).\n> \n> Regular index scans don't do what you're imagining either (i.e., check\n> filter conditions in advance of visiting the heap). There's a roadblock\n> to implementing such behavior, which is that we might end up applying\n> filter expressions to dead rows. That could make users unhappy.\n> For example, given a filter condition like \"1.0/c > 0.1\", people\n> would complain if it still got zero-divide failures even after they'd\n> deleted all rows with c=0 from their table.\n\nOk, but I don’t see how this case different for key columns vs. INCLUDE columns.\n\nWhen I test this with the (a, b, c) index (no INCLUDE), different plans are produced for \"c=1\" (my original example) vs. \"1.0/c > 0.1”.\n\nThe second one postpones this condition to the Bitmap Heap Scan.\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=4) (actual time=0.023..0.028 rows=8 loops=1)\n Recheck Cond: (a = 1)\n Filter: ((1.0 / (c)::numeric) > 0.1)\n Heap Blocks: exact=2\n Buffers: shared hit=3\n -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.007..0.007 rows=8 loops=1)\n Index Cond: (a = 1)\n Buffers: shared hit=1\n Planning Time: 0.053 ms\n Execution Time: 0.044 ms\n\nI’ve never noticed that behaviour before, but if it is there to prevent the exception-on-dead-tuple problem, the same could be applied to INCLUDE columns?\n\nI realise that this will not cover all use cases I can imagine but it would be consistent for key and non-key columns. \n\n-markus\n\n\n", "msg_date": "Wed, 27 Feb 2019 06:36:43 +0100", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": true, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "\n\n> On 27.02.2019, at 02:00, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Tue, Feb 26, 2019 at 09:07:01PM +0100, Markus Winand wrote:\n>> CREATE INDEX idx ON tbl (a, b, c);\n>> Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.021..0.021 rows=1 loops=1)\n>> Recheck Cond: ((a = 1) AND (c = 1))\n>> -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.018..0.018 rows=1 loops=1)\n>> Index Cond: ((a = 1) AND (c = 1))\n>> \n>> (As a side node: I also dislike it how Bitmap Index Scan mixes search conditions and filters in “Index Cond”)\n> \n> I don't think it's mixing them; it's using index scan on leading *and*\n> nonleading column. That's possible even if frequently not efficient.\n\nThe distinction leading / non-leading is very important for performance. Other database products use different names in the execution plan so that it is immediately visible (without knowing the index definition).\n\n- Oracle: access vs. filter predicates\n- SQL Server: “seek predicates” vs. “predicates”\n- Db2: START/STOP vs. SARG\n- MySQL/MariaDB show how many leading columns of the index are used — the rest is just “filtering\"\n\nPostgreSQL: no difference visible in the execution plan.\n\nCREATE INDEX idx ON tbl (a,b,c);\n\nEXPLAIN (analyze, buffers)\nSELECT *\n FROM tbl\nWHERE a = 1\n AND c = 1;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.017..0.018 rows=1 loops=1)\n Recheck Cond: ((a = 1) AND (c = 1))\n Heap Blocks: exact=1\n Buffers: shared hit=1 read=1\n -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.014..0.014 rows=1 loops=1)\n Index Cond: ((a = 1) AND (c = 1))\n Buffers: shared read=1\n Planning Time: 0.149 ms\n Execution Time: 0.035 ms\n\n\nDROP INDEX idx;\nCREATE INDEX idx ON tbl (a, c, b); -- NOTE: column “c” is second\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl (cost=4.14..8.16 rows=1 width=7616) (actual time=0.013..0.013 rows=1 loops=1)\n Recheck Cond: ((a = 1) AND (c = 1))\n Heap Blocks: exact=1\n Buffers: shared hit=1 read=1\n -> Bitmap Index Scan on idx (cost=0.00..4.14 rows=1 width=0) (actual time=0.009..0.009 rows=1 loops=1)\n Index Cond: ((a = 1) AND (c = 1))\n Buffers: shared read=1\n Planning Time: 0.262 ms\n Execution Time: 0.036 ms\n(9 rows)\n\n-markus\n\n\n", "msg_date": "Wed, 27 Feb 2019 06:50:08 +0100", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": true, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "Markus Winand <markus.winand@winand.at> writes:\n>> On 27.02.2019, at 02:00, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> On Tue, Feb 26, 2019 at 09:07:01PM +0100, Markus Winand wrote:\n>>> (As a side node: I also dislike it how Bitmap Index Scan mixes search conditions and filters in “Index Cond”)\n\n>> I don't think it's mixing them; it's using index scan on leading *and*\n>> nonleading column. That's possible even if frequently not efficient.\n\n> The distinction leading / non-leading is very important for performance. Other database products use different names in the execution plan so that it is immediately visible (without knowing the index definition).\n\nOther database products don't have the wide range of index types that\nwe do. The concepts you propose using are pretty much meaningless\nfor non-btree indexes. EXPLAIN doesn't really know which of the index\nconditions will usefully cut down the index search space for the\nparticular type, so it just lists everything that has the right form\nto be passed to the index AM.\n\nNote that passing a condition to the AM, rather than executing it as\na filter, is generally a win when possible even if it fails to cut\nthe portion of the index searched at all. That's because it can save\nvisits to the heap (tying back to the original point in this thread,\nthat we test index conditions, then heap liveness check, then filter\nconditions). So the planner is aggressive about pushing things into\nthat category when it can.\n\nIt might help to point out that to be an index condition, a WHERE\nclause has to meet tighter conditions than just whether it mentions\nan index column. It generally has to be of the form \"index_column\nindexable_operator pseudo_constant\" (though some index types support\nsome other cases like \"index_column IS NULL\" as index conditions too).\nClauses mentioning INCLUDE columns fail this test a priori, because\nthere are no indexable operators associated with an INCLUDE column.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 27 Feb 2019 11:54:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "\nOn 2/27/19 6:36 AM, Markus Winand wrote:\n> \n> \n>> On 27.02.2019, at 00:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Markus Winand <markus.winand@winand.at> writes:\n>>> I think Bitmap Index Scan should take advantage of B-tree INCLUDE columns, which it doesn’t at the moment (tested on master as of yesterday).\n>>\n>> Regular index scans don't do what you're imagining either (i.e., check\n>> filter conditions in advance of visiting the heap). There's a roadblock\n>> to implementing such behavior, which is that we might end up applying\n>> filter expressions to dead rows. That could make users unhappy.\n>> For example, given a filter condition like \"1.0/c > 0.1\", people\n>> would complain if it still got zero-divide failures even after they'd\n>> deleted all rows with c=0 from their table.\n> \n> Ok, but I don’t see how this case different for key columns vs. INCLUDE columns.\n> \n\nYeah, I'm a bit puzzled by this difference too - why would it be safe\nfor keys and not the other included columns?\n\n> When I test this with the (a, b, c) index (no INCLUDE), different\n> plans are produced for \"c=1\" (my original example) vs. \"1.0/c > 0.1”.\n\nI do recall a discussion about executing expressions on index tuples\nduring IOS (before/without inspecting the heap tuple). I'm too lazy to\nsearch for the thread now, but I recall it was somehow related to\nleak-proof-ness. And AFAICS none of the \"div\" procs are marked as\nleak-proof, so perhaps that's one of the reasons?\n\nOf course, this does not explain why equality conditions and such (which\nare generally leak-proof) couldn't be moved to the bitmap index scan.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 27 Feb 2019 18:41:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On 2/27/19 6:36 AM, Markus Winand wrote:\n> On 27.02.2019, at 00:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> For example, given a filter condition like \"1.0/c > 0.1\", people\n>>> would complain if it still got zero-divide failures even after they'd\n>>> deleted all rows with c=0 from their table.\n\n>> Ok, but I don’t see how this case different for key columns vs. INCLUDE columns.\n\n> Yeah, I'm a bit puzzled by this difference too - why would it be safe\n> for keys and not the other included columns?\n\nIt's not about the column, it's about the operator. We assume that\noperators appearing in opclasses are safe to execute even for\nindex entries that correspond to dead rows. INCLUDE columns don't\nhave any associated opclass, hence no assumed-usable operators.\n\n> I do recall a discussion about executing expressions on index tuples\n> during IOS (before/without inspecting the heap tuple). I'm too lazy to\n> search for the thread now, but I recall it was somehow related to\n> leak-proof-ness. And AFAICS none of the \"div\" procs are marked as\n> leak-proof, so perhaps that's one of the reasons?\n\nLeak-proof-ness is kind of related, perhaps, but it's not quite the property\nwe're after here --- at least not to my mind. It might be an interesting\ndiscussion exactly what the relationship is. Meanwhile, we don't have any\nseparate concept of functions that are safe to apply to any index entry;\nopclass membership is it.\n\nYou could probably argue that any clause containing only indexed variables\nand operators that are members of *some* opclass could be used as a filter\nin advance of heap liveness checking. But we lack any infrastructure for\nthat, in either the planner or executor.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 27 Feb 2019 14:23:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" }, { "msg_contents": "I wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I do recall a discussion about executing expressions on index tuples\n>> during IOS (before/without inspecting the heap tuple). I'm too lazy to\n>> search for the thread now, but I recall it was somehow related to\n>> leak-proof-ness. And AFAICS none of the \"div\" procs are marked as\n>> leak-proof, so perhaps that's one of the reasons?\n\n> Leak-proof-ness is kind of related, perhaps, but it's not quite the property\n> we're after here --- at least not to my mind. It might be an interesting\n> discussion exactly what the relationship is. Meanwhile, we don't have any\n> separate concept of functions that are safe to apply to any index entry;\n> opclass membership is it.\n\nThe other thread about RLS helped me to crystallize the vague feelings\nI had about this. I now think that this is what we're actually assuming:\nan indexable operator must be leakproof *with respect to its index-key\ninput*. If it is not, it might throw errors or otherwise reveal the\nexistence of index entries for dead rows, which would be a usability fail\nwhether or not you are excited about security as such.\n\nOn the other hand, it's okay to throw errors that only reveal information\nabout the non-index input. For instance, it's not a problem for pg_trgm\nto treat regex-match operators as indexable, even though those will throw\nerror for a malformed pattern input.\n\nSo this is indeed related to leakproof-ness, but our current definition\nof \"leakproof\" is too simple to capture the property exactly.\n\nGetting back to the question of this thread, I think we'd have to restrict\nany filtering done in advance of heap liveness checks to fully leakproof\nfunctions, since we don't want the filter expression to possibly throw\nan error, regardless of which input(s) came from the index.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 11:26:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index INCLUDE vs. Bitmap Index Scan" } ]
[ { "msg_contents": "Hi,\n\nI'm currently using a (very rough) scheme to retrieve relation names from a\nPlannerInfo, and a RelOptInfo struct:\n\nPlannerInfo *root\nRelOptInfo *inner_rel\n\n//...\n\nRangeTblEntry *rte;\nint x = -1;\nwhile ((x = bms_next_member(inner_rel->relids, x)) >= 0)\n{\n rte = root->simple_rte_array[x];\n if (rte->rtekind == RTE_RELATION)\n {\n char *rel_name = get_rel_name(rte->relid);\n // do stuff...\n }\n}\n\nHowever, I now realize it would be better to access aliases as they appear\nin the SQL query. For instance, if the query contains \"... FROM rel_name AS\nrel_alias ...\" I would like to retrieve `rel_alias` instead of `rel_name`.\n\nIs it possible to derive the alias in a similar way?\n\nFor context: this code is being inserted into\nsrc/backend/optimizer/path/costsize.c and specifically in the\ncalc_joinrel_size_estimate method.\n\nThanks in advance,\nWalter\n\nHi,I'm currently using a (very rough) scheme to retrieve relation names from a PlannerInfo, and a RelOptInfo struct:PlannerInfo *rootRelOptInfo *inner_rel//...RangeTblEntry *rte;int x = -1;while ((x = bms_next_member(inner_rel->relids, x)) >= 0){    rte = root->simple_rte_array[x];    if (rte->rtekind == RTE_RELATION)    {        char *rel_name = get_rel_name(rte->relid);        // do stuff...    }}However, I now realize it would be better to access aliases as they appear in the SQL query. For instance, if the query contains \"... 
FROM rel_name AS rel_alias ...\" I would like to retrieve `rel_alias` instead of `rel_name`.Is it possible to derive the alias in a similar way?For context: this code is being inserted into src/backend/optimizer/path/costsize.c and specifically in the calc_joinrel_size_estimate method.Thanks in advance,Walter", "msg_date": "Tue, 26 Feb 2019 13:48:32 -0800", "msg_from": "Walter Cai <walter@cs.washington.edu>", "msg_from_op": true, "msg_subject": "Retrieving Alias Name" }, { "msg_contents": "On Tue, Feb 26, 2019 at 10:48 PM Walter Cai <walter@cs.washington.edu> wrote:\n>\n> I'm currently using a (very rough) scheme to retrieve relation names from a PlannerInfo, and a RelOptInfo struct:\n>\n> PlannerInfo *root\n> RelOptInfo *inner_rel\n>\n> //...\n>\n> RangeTblEntry *rte;\n> int x = -1;\n> while ((x = bms_next_member(inner_rel->relids, x)) >= 0)\n> {\n> rte = root->simple_rte_array[x];\n> if (rte->rtekind == RTE_RELATION)\n> {\n> char *rel_name = get_rel_name(rte->relid);\n> // do stuff...\n> }\n> }\n>\n> However, I now realize it would be better to access aliases as they appear in the SQL query. For instance, if the query contains \"... FROM rel_name AS rel_alias ...\" I would like to retrieve `rel_alias` instead of `rel_name`.\n>\n> Is it possible to derive the alias in a similar way?\n\nYou can use rte->eref->aliasname, which will contain the alias is one\nwas provided, otherwise the original name. You can see RangeTblEntry\ncomment in include/nodes/parsenodes.h for more details.\n\n", "msg_date": "Tue, 26 Feb 2019 23:10:11 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retrieving Alias Name" } ]
[ { "msg_contents": "Hi :\n I run a query like \"select * from t\" and set the break like this:\n\nbreak exec_simple_query\nbreak MemoryContextDelete\n commands\n p context->name\n c\n end\n\nI can see most of the MemoryContext is relased, but never MessageContext,\nwhen will it be released?\n\n/*\n* Create the memory context we will use in the main loop.\n*\n* MessageContext is reset once per iteration of the main loop, ie, upon\n* completion of processing of each command message from the client.\n*/\n\nI'm hacking the code with the latest commit.\nhttps://github.com/postgres/postgres/commit/414a9d3cf34c7aff1c63533df4c40ebb63bd0840\n\n\nThanks!\n\nHi :  I run a query like \"select * from t\" and set the break like this:break exec_simple_querybreak MemoryContextDelete  commands   p context->name   c  endI can see most of the MemoryContext is relased, but never MessageContext,  when will it be released? /* * Create the memory context we will use in the main loop. * * MessageContext is reset once per iteration of the main loop, ie, upon * completion of processing of each command message from the client. */I'm hacking the code with the latest commit.  https://github.com/postgres/postgres/commit/414a9d3cf34c7aff1c63533df4c40ebb63bd0840Thanks!", "msg_date": "Wed, 27 Feb 2019 14:08:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "When is the MessageContext released?" }, { "msg_contents": "On 2019-02-27 14:08:47 +0800, Andy Fan wrote:\n> Hi :\n> I run a query like \"select * from t\" and set the break like this:\n> \n> break exec_simple_query\n> break MemoryContextDelete\n> commands\n> p context->name\n> c\n> end\n> \n> I can see most of the MemoryContext is relased, but never MessageContext,\n> when will it be released?\n\nIt's released above exec_simple_query, as it actually contains the data\nthat lead us to call exec_simple_query(). 
See the main for loop in\nPostgresMain():\n\n\t\t/*\n\t\t * Release storage left over from prior query cycle, and create a new\n\t\t * query input buffer in the cleared MessageContext.\n\t\t */\n\t\tMemoryContextSwitchTo(MessageContext);\n\t\tMemoryContextResetAndDeleteChildren(MessageContext);\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Tue, 26 Feb 2019 22:15:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: When is the MessageContext released?" }, { "msg_contents": "Thanks you Andres for your time! this context is free with AllocSetReset\nrather than AllocSetDelete, that makes my breakpoint doesn't catch it.\n\nOn Wed, Feb 27, 2019 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-02-27 14:08:47 +0800, Andy Fan wrote:\n> > Hi :\n> > I run a query like \"select * from t\" and set the break like this:\n> >\n> > break exec_simple_query\n> > break MemoryContextDelete\n> > commands\n> > p context->name\n> > c\n> > end\n> >\n> > I can see most of the MemoryContext is relased, but never MessageContext,\n> > when will it be released?\n>\n> It's released above exec_simple_query, as it actually contains the data\n> that lead us to call exec_simple_query(). See the main for loop in\n> PostgresMain():\n>\n> /*\n> * Release storage left over from prior query cycle, and\n> create a new\n> * query input buffer in the cleared MessageContext.\n> */\n> MemoryContextSwitchTo(MessageContext);\n> MemoryContextResetAndDeleteChildren(MessageContext);\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nThanks you Andres for your time!  this context is free with AllocSetReset rather than AllocSetDelete, that makes my breakpoint doesn't catch it.  
On Wed, Feb 27, 2019 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:On 2019-02-27 14:08:47 +0800, Andy Fan wrote:\n> Hi :\n>   I run a query like \"select * from t\" and set the break like this:\n> \n> break exec_simple_query\n> break MemoryContextDelete\n>   commands\n>    p context->name\n>    c\n>   end\n> \n> I can see most of the MemoryContext is relased, but never MessageContext,\n> when will it be released?\n\nIt's released above exec_simple_query, as it actually contains the data\nthat lead us to call exec_simple_query().  See the main for loop in\nPostgresMain():\n\n                /*\n                 * Release storage left over from prior query cycle, and create a new\n                 * query input buffer in the cleared MessageContext.\n                 */\n                MemoryContextSwitchTo(MessageContext);\n                MemoryContextResetAndDeleteChildren(MessageContext);\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 27 Feb 2019 18:59:13 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: When is the MessageContext released?" } ]
[ { "msg_contents": "Hi all,\n(CC-ing Joe as of dc7d70e)\n\nI was just looking at the offline checksum patch, and noticed some\nsloppy coding in controldata_utils.c. The control file gets opened in\nget_controlfile(), and if it generates an error then the file\ndescriptor remains open. As basic code rule in the backend we should\nonly use BasicOpenFile() when opening files, so I think that the issue\nshould be fixed as attached, even if this requires including fd.h for\nthe backend compilation which is kind of ugly.\n\nAnother possibility would be to just close the file descriptor before\nany error, saving appropriately errno before issuing any %m portion,\nbut that still does not respect the backend policy regarding open().\n\nWe also do not have a wait event for the read() call, maybe we should\nhave one, but knowing that this gets called only for the SQL-level\nfunctions accessing the control file, I don't really think that's\nworth it.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 27 Feb 2019 16:47:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "get_controlfile() can leak fds in the backend" }, { "msg_contents": "Bonjour Michaᅵl,\n\n> Thoughts?\n\nNone.\n\nHowever, while at it, there is also the question of whether the control \nfile should be locked when updated, eg with flock(2) to avoid race \nconditions between concurrent commands. ISTM that there is currently not \nsuch thing in the code, but that it would be desirable.\n\n-- \nFabie.", "msg_date": "Wed, 27 Feb 2019 10:23:45 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "Hi,\n\nOn 2019-02-27 10:23:45 +0100, Fabien COELHO wrote:\n> However, while at it, there is also the question of whether the control file\n> should be locked when updated, eg with flock(2) to avoid race conditions\n> between concurrent commands. 
ISTM that there is currently not such thing in\n> the code, but that it would be desirable.\n\nShouldn't be necessary - the control file fits into a single page, and\nwrites of that size ought to always be atomic. And I also think\nintroducing flock usage for this would be quite disproportional.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 27 Feb 2019 01:32:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "\n>> However, while at it, there is also the question of whether the control file\n>> should be locked when updated, eg with flock(2) to avoid race conditions\n>> between concurrent commands. ISTM that there is currently not such thing in\n>> the code, but that it would be desirable.\n>\n> Shouldn't be necessary - the control file fits into a single page, and\n> writes of that size ought to always be atomic. And I also think\n> introducing flock usage for this would be quite disproportional.\n\nOk, fine.\n\nNote that my concern is not about the page size, but rather that as more \ncommands may change the cluster status by editing the control file, it \nwould be better that a postmaster does not start while a pg_rewind or \nenable checksum or whatever is in progress, and currently there is a \npossible race condition between the read and write that can induce an \nissue, at least theoretically.\n\n-- \nFabien.\n\n", "msg_date": "Wed, 27 Feb 2019 11:50:17 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On 2/27/19 2:47 AM, Michael Paquier wrote:\n> Hi all,\n> (CC-ing Joe as of dc7d70e)\n> \n> I was just looking at the offline checksum patch, and noticed some\n> sloppy coding in controldata_utils.c. 
The control file gets opened in\n> get_controlfile(), and if it generates an error then the file\n> descriptor remains open. As basic code rule in the backend we should\n> only use BasicOpenFile() when opening files, so I think that the issue\n> should be fixed as attached, even if this requires including fd.h for\n> the backend compilation which is kind of ugly.\n> \n> Another possibility would be to just close the file descriptor before\n> any error, saving appropriately errno before issuing any %m portion,\n> but that still does not respect the backend policy regarding open().\n\nIn fd.c I see:\n8<--------------------\n* AllocateFile, AllocateDir, OpenPipeStream and OpenTransientFile are\n* wrappers around fopen(3), opendir(3), popen(3) and open(2),\n* respectively. They behave like the corresponding native functions,\n* except that the handle is registered with the current subtransaction,\n* and will be automatically closed at abort. These are intended mainly\n* for short operations like reading a configuration file; there is a\n* limit on the number of files that can be opened using these functions\n* at any one time.\n*\n* Finally, BasicOpenFile is just a thin wrapper around open() that can\n* release file descriptors in use by the virtual file descriptors if\n* necessary. There is no automatic cleanup of file descriptors returned\n* by BasicOpenFile, it is solely the caller's responsibility to close\n* the file descriptor by calling close(2).\n8<--------------------\n\nAccording to that comment BasicOpenFile does not seem to solve the issue\nyou are pointing out (leaking of file descriptor on ERROR). Perhaps\nOpenTransientFile() is more appropriate? 
I am on the road at the moment\nso have not looked very deeply at this yet though.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Wed, 27 Feb 2019 10:26:58 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On 2/27/19 10:26 AM, Joe Conway wrote:\n> On 2/27/19 2:47 AM, Michael Paquier wrote:\n>> Hi all,\n>> (CC-ing Joe as of dc7d70e)\n\n> According to that comment BasicOpenFile does not seem to solve the issue\n> you are pointing out (leaking of file descriptor on ERROR). Perhaps\n> OpenTransientFile() is more appropriate? I am on the road at the moment\n> so have not looked very deeply at this yet though.\n\n\nIt seems to me that OpenTransientFile() is more appropriate. Patch done\nthat way attached.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Wed, 27 Feb 2019 19:45:11 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On Wed, Feb 27, 2019 at 11:50:17AM +0100, Fabien COELHO wrote:\n>> Shouldn't be necessary - the control file fits into a single page, and\n>> writes of that size ought to always be atomic. 
And I also think\n>> introducing flock usage for this would be quite disproportional.\n\nThere are static assertions to make sure that the side of control file\ndata never gets higher than 512 bytes for this purpose.\n\n> Note that my concern is not about the page size, but rather that as more\n> commands may change the cluster status by editing the control file, it would\n> be better that a postmaster does not start while a pg_rewind or enable\n> checksum or whatever is in progress, and currently there is a possible race\n> condition between the read and write that can induce an issue, at least\n> theoretically.\n\nSomething that I think we could live instead is a special flag in the\ncontrol file to mark the postmaster as in maintenance mode. This\nwould be useful to prevent the postmaster to start if seeing this flag\nin the control file, as well to find out that a host has crashed in\nthe middle of a maintenance operation. We don't give this insurance\nnow when running pg_rewind, which is bad. That's also separate from\nthe checksum-related patches and pg_rewind.\n\nflock() can be something hard to live with for cross-platform\ncompatibility like Windows (LockFileEx) or fancy platforms. And note\nthat we don't use it yet in the tree. 
And flock() would help in the\nfirst case I am mentioning, but not in the second.\n--\nMichael", "msg_date": "Thu, 28 Feb 2019 09:49:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "Hi,\n\nOn 2019-02-27 11:50:17 +0100, Fabien COELHO wrote:\n> Note that my concern is not about the page size, but rather that as more\n> commands may change the cluster status by editing the control file, it would\n> be better that a postmaster does not start while a pg_rewind or enable\n> checksum or whatever is in progress, and currently there is a possible race\n> condition between the read and write that can induce an issue, at least\n> theoretically.\n\nSeems odd to bring this up in this thread, it really has nothing to do\nwith the topic. If we were to want to do more here, ISTM the right\napproach would use the postmaster pid file, not the control file.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 27 Feb 2019 16:50:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On Wed, Feb 27, 2019 at 07:45:11PM -0500, Joe Conway wrote:\n> It seems to me that OpenTransientFile() is more appropriate. Patch done\n> that way attached.\n\nWorks for me, thanks for sending a patch! While on it, could you\nclean up the comment on top of get_controlfile()? \"char\" is mentioned\ninstead of \"const char\" for DataDir which is incorrect. 
I would\nremove the complete set of arguments from the description and just\nkeep the routine name.\n--\nMichael", "msg_date": "Thu, 28 Feb 2019 09:54:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "Hello Andres,\n\n>> Note that my concern is not about the page size, but rather that as more\n>> commands may change the cluster status by editing the control file, it would\n>> be better that a postmaster does not start while a pg_rewind or enable\n>> checksum or whatever is in progress, and currently there is a possible race\n>> condition between the read and write that can induce an issue, at least\n>> theoretically.\n>\n> Seems odd to bring this up in this thread, it really has nothing to do\n> with the topic.\n\nIndeed. I raised it here because it is in the same area of code and \nMichaël was looking at it.\n\n> If we were to want to do more here, ISTM the right approach would use \n> the postmaster pid file, not the control file.\n\nISTM that this just means re-inventing a manual poor-featured \nrace-condition-prone lock API around another file, which seems to be \ncreated more or less only by \"pg_ctl\", while some other commands use the \ncontrol file (eg pg_rewind, AFAICS).\n\n-- \nFabien.", "msg_date": "Thu, 28 Feb 2019 09:54:48 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On 2/27/19 7:54 PM, Michael Paquier wrote:\n> On Wed, Feb 27, 2019 at 07:45:11PM -0500, Joe Conway wrote:\n>> It seems to me that OpenTransientFile() is more appropriate. Patch done\n>> that way attached.\n> \n> Works for me, thanks for sending a patch! While on it, could you\n> clean up the comment on top of get_controlfile()? \"char\" is mentioned\n> instead of \"const char\" for DataDir which is incorrect. 
I would\n> remove the complete set of arguments from the description and just\n> keep the routine name.\n\nSure, will do. What are your thoughts on backpatching? This seems\nunlikely to be a practical concern in the field, so my inclination is a\nmaster only fix.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 28 Feb 2019 07:11:04 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On Thu, Feb 28, 2019 at 07:11:04AM -0500, Joe Conway wrote:\n> Sure, will do. What are your thoughts on backpatching? This seems\n> unlikely to be a practical concern in the field, so my inclination is a\n> master only fix.\n\nI agree that this would unlikely become an issue as an error on the\ncontrol file would most likely be a PANIC when updating it, so fixing\nonly HEAD sounds fine to me. Thanks for asking!\n--\nMichael", "msg_date": "Thu, 28 Feb 2019 21:20:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "Hi,\n\nOn 2019-02-28 09:54:48 +0100, Fabien COELHO wrote:\n> > If we were to want to do more here, ISTM the right approach would use\n> > the postmaster pid file, not the control file.\n> \n> ISTM that this just means re-inventing a manual poor-featured\n> race-condition-prone lock API around another file, which seems to be created\n> more or less only by \"pg_ctl\", while some other commands use the control\n> file (eg pg_rewind, AFAICS).\n\nHuh? Postmaster.pid is written by the backend, pg_ctl just checks it to\nsee if the backend has finished starting up etc. It's precisely what the\nbackend uses to prevent two postmasters to start etc. 
It's also what say\npg_resetwal checks to protect against a concurrently running lcuster\n(albeit in a racy way). If we want to make things more bulletproof,\nthat's the place. The control file is constantly written to, sometimes\nby different processes, it'd just not be a good file for such lockout\nmechanisms.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 13:07:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On 2/28/19 7:20 AM, Michael Paquier wrote:\n> On Thu, Feb 28, 2019 at 07:11:04AM -0500, Joe Conway wrote:\n>> Sure, will do. What are your thoughts on backpatching? This seems\n>> unlikely to be a practical concern in the field, so my inclination is a\n>> master only fix.\n> \n> I agree that this would unlikely become an issue as an error on the\n> control file would most likely be a PANIC when updating it, so fixing\n> only HEAD sounds fine to me. Thanks for asking!\n\n\nCommitted and push that way.\n\nBy the way, while looking at this, I noted at least a couple of places\nwhere OpenTransientFile() is being passed O_RDWR when the usage is\npretty clearly intended to be read-only. For example at least two\ninstances in slru.c -- SlruPhysicalReadPage() and\nSimpleLruDoesPhysicalPageExist(). 
Is it worth while searching for and\nfixing those instances?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 28 Feb 2019 16:09:32 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On Thu, Feb 28, 2019 at 04:09:32PM -0500, Joe Conway wrote:\n> Committed and push that way.\n\nThanks for committing a fix.\n\n> By the way, while looking at this, I noted at least a couple of places\n> where OpenTransientFile() is being passed O_RDWR when the usage is\n> pretty clearly intended to be read-only. For example at least two\n> instances in slru.c -- SlruPhysicalReadPage() and\n> SimpleLruDoesPhysicalPageExist(). Is it worth while searching for and\n> fixing those instances?\n\nThere are roughly 40~42 callers of OpenTransientFile(). Looking at\nthem I can see that RestoreSlotFromDisk() could also switch to RDONLY\ninstead of RDWR. I am also a bit tired of the lack error handling\naround CloseTransientFile(). While in some code paths the file\ndescriptors are closed for an error, in some others we should report\nsomething. I am going to send a patch after a lookup. Let's see all\nthat on a separate thread.\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 10:00:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "On Thu, Feb 28, 2019 at 01:07:23PM -0800, Andres Freund wrote:\n> Huh? Postmaster.pid is written by the backend, pg_ctl just checks it to\n> see if the backend has finished starting up etc. It's precisely what the\n> backend uses to prevent two postmasters to start etc. It's also what say\n> pg_resetwal checks to protect against a concurrently running lcuster\n> (albeit in a racy way). 
If we want to make things more bulletproof,\n> that's the place. The control file is constantly written to, sometimes\n> by different processes, it'd just not be a good file for such lockout\n> mechanisms.\n\nHijacking more my own thread... I can do that, right? Or not.\n\nOne thing is that we don't protect a data folder to be started when it\nis in the process of being treated by an external tool, like\npg_rewind, or pg_checksums. So having an extra flag in the control\nfile, which can be used by external tools to tell that the data folder \nis being treated for something does not sound that crazy to me.\nHaving a tool write a fake postmaster.pid for this kind of task does\nnot sound right from a correctness point of view, because the instance\nis not started.\n\nAnd a lock API won't protect much if the host is unplugged as well if\nits state is made durable...\n\nLet's keep the discussions where they are by the way. Joe has just\nclosed the report of this thread with 4598a99, so I am moving on to\nthe correct places.\n\nMy apologies for the digressions.\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 10:11:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "Hi,\n\nOn 2019-03-01 10:11:53 +0900, Michael Paquier wrote:\n> One thing is that we don't protect a data folder to be started when it\n> is in the process of being treated by an external tool, like\n> pg_rewind, or pg_checksums. So having an extra flag in the control\n> file, which can be used by external tools to tell that the data folder\n> is being treated for something does not sound that crazy to me.\n> Having a tool write a fake postmaster.pid for this kind of task does\n> not sound right from a correctness point of view, because the instance\n> is not started.\n\nI think putting this into the control file is a seriously bad\nidea. 
Postmaster interlocks against other postmasters running via\npostmaster.pid. Having a second interlock mechanism, in a different\nfile, doesn't make any sort of sense. Nor does it seem sane to have\nexternal tool write over data as INTENSELY critical as the control file,\nwhen they then have to understand CRCs etc.\n\n\n> Let's keep the discussions where they are by the way. Joe has just\n> closed the report of this thread with 4598a99, so I am moving on to\n> the correct places.\n\nI don't know what that means, given you replied to the above in this\nthread?\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 17:19:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" }, { "msg_contents": "\nHello Andres,\n\n> I think putting this into the control file is a seriously bad\n> idea. Postmaster interlocks against other postmasters running via\n> postmaster.pid.\n\n> Having a second interlock mechanism, in a different file, doesn't make \n> any sort of sense. Nor does it seem sane to have external tool write \n> over data as INTENSELY critical as the control file, when they then have \n> to understand CRCs etc.\n\nOn this point, there are functions for that, get/update_controlfile, so \nthis should be factored out.\n\nThe initial insentive for raising the issue, probably in the wrong thread \nand without a clear understanding of the matter, is that I've been \nreviewing a patch to enable/disable checksums on a stopped cluster.\n\nThe patch updates all the checksums in all the files, and changes the \ncontrol file to tell that there are checksums. Currently it checks and \ncreates a fake \"posmaster.pid\" file to attempt to prevent another tool to \nrun concurrently to this operation, with ISTM a procedure prone to race \nconditions thus does not warrant that it would be the only tool running on \nthe cluster. This looked to me as a bad hack. 
Given that other command \nthat take on a cluster seemed to use the controlfile to signal that they \nare doing something, I'd thought that it would be the way to go, but then \nI noticed that the control file read/write procedure looks as bad as the \npostmaster.pid hack to ensure that only one command is active.\n\nNevertheless, I'm ranting in the wrong thread and it seems that these is \nno real problem to solve, so I'll say fine to the to-me-unconvincing \n\"postmaster.pid\" hack proposed in the patch.\n\n-- \nFabien.\n\n", "msg_date": "Fri, 1 Mar 2019 08:08:13 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: get_controlfile() can leak fds in the backend" } ]
[ { "msg_contents": "actually I'm hacking pg for a function like :\n1. define a select query.\n2. client ask for some data. and server reply some data. server will do\nNOTHING if client doesn't ask any more..\n3. client ask some data more data with a batch and SERVER reply some data\nthen. then do NOTHING.\n\ncurrently the simple \"select * from t\", the server will try to send the\ndata to client at one time which is not something I want.\n\nby looking into the plsql, looks it has some api like:\n\nfetch 10 from cursor_1;\nfetch 10 from cursor_1;\n\nI'm lacking of the experience to hack plsql. so my question are:\n1. Does pg has some codes which act like the \"ask -> reply -> ask again\n-> reply again\" on the server code? currently I'm not sure if the above\n\"fetch\" really work like this.\n2. any resources or hint or suggestion to understand the \"fetch\"\nstatement?\n\nThanks\n\nactually I'm hacking pg for a function like :1. define a select query. 2. client ask for some data. and server reply some data.  server will do NOTHING if client doesn't ask any more.. 3.  client ask some data more data with a batch and SERVER reply some data then. then do NOTHING. currently the simple \"select * from t\",  the server will try to send the data to client at one time which is not something I want. by looking into the plsql,  looks it has some api like:fetch 10 from cursor_1;fetch 10 from cursor_1; I'm lacking of the experience to hack plsql.   so my question are:1.   Does pg has some codes which act like the \"ask -> reply -> ask again -> reply again\" on the server code?    currently I'm not sure if the above \"fetch\" really work like this. 2.  any resources or hint or suggestion to understand the \"fetch\"  statement?Thanks", "msg_date": "Wed, 27 Feb 2019 19:11:54 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "some hints to understand the plsql cursor." 
}, { "msg_contents": "On Wed, Feb 27, 2019 at 4:42 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> actually I'm hacking pg for a function like :\n> 1. define a select query.\n> 2. client ask for some data. and server reply some data. server will do NOTHING if client doesn't ask any more..\n> 3. client ask some data more data with a batch and SERVER reply some data then. then do NOTHING.\n>\n> currently the simple \"select * from t\", the server will try to send the data to client at one time which is not something I want.\n>\n> by looking into the plsql, looks it has some api like:\n>\n> fetch 10 from cursor_1;\n> fetch 10 from cursor_1;\n>\n> I'm lacking of the experience to hack plsql. so my question are:\n> 1. Does pg has some codes which act like the \"ask -> reply -> ask again -> reply again\" on the server code? currently I'm not sure if the above \"fetch\" really work like this.\n> 2. any resources or hint or suggestion to understand the \"fetch\" statement?\n\nI guess you are looking for this syntax?\n\npostgres=# BEGIN;\nBEGIN\npostgres=# DECLARE cur CURSOR FOR SELECT * FROM t;\nDECLARE CURSOR\npostgres=# FETCH NEXT cur;\n a\n---\n 1\n(1 row)\n\npostgres=# FETCH 10 cur;\n a\n---\n 2\n 3\n 4\n 5\n 1\n 2\n 3\n 4\n 5\n 6\n(10 rows)\n\npostgres=# FETCH NEXT cur;\n a\n---\n 7\n(1 row)\n\npostgres=# CLOSE cur;\nCLOSE CURSOR\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 27 Feb 2019 21:05:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: some hints to understand the plsql cursor." }, { "msg_contents": "Thanks Kumar. actually I was asking what the cursor did in the\nserver. By looking at the code, it seems it caches the previous Portal under the\ncursor name; whenever we run a fetch from that portal, it\nwill restore the previous Portal and run it.\n\nBut your minimized and interactive code definitely makes the debug much\nquicker. 
Thank you very much!\n\n\nOn Wed, Feb 27, 2019 at 11:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Feb 27, 2019 at 4:42 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > actually I'm hacking pg for a function like :\n> > 1. define a select query.\n> > 2. client ask for some data. and server reply some data. server will do\n> NOTHING if client doesn't ask any more..\n> > 3. client ask some data more data with a batch and SERVER reply some\n> data then. then do NOTHING.\n> >\n> > currently the simple \"select * from t\", the server will try to send the\n> data to client at one time which is not something I want.\n> >\n> > by looking into the plsql, looks it has some api like:\n> >\n> > fetch 10 from cursor_1;\n> > fetch 10 from cursor_1;\n> >\n> > I'm lacking of the experience to hack plsql. so my question are:\n> > 1. Does pg has some codes which act like the \"ask -> reply -> ask\n> again -> reply again\" on the server code? currently I'm not sure if the\n> above \"fetch\" really work like this.\n> > 2. any resources or hint or suggestion to understand the \"fetch\"\n> statement?\n>\n> I guess you are looking for these syntax?\n>\n> postgres=# BEGIN;\n> BEGIN\n> postgres=# DECLARE cur CURSOR FOR SELECT * FROM t;\n> DECLARE CURSOR\n> postgres=# FETCH NEXT cur;\n> a\n> ---\n> 1\n> (1 row)\n>\n> postgres=# FETCH 10 cur;\n> a\n> ---\n> 2\n> 3\n> 4\n> 5\n> 1\n> 2\n> 3\n> 4\n> 5\n> 6\n> (10 rows)\n>\n> postgres=# FETCH NEXT cur;\n> a\n> ---\n> 7\n> (1 row)\n>\n> postgres=# CLOSE cur;\n> CLOSE CURSOR\n>\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n", "msg_date": "Thu, 28 Feb 2019 07:35:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: some hints to understand the plsql cursor." } ]
[ { "msg_contents": "Hi,\n\nYet another thing I noticed while working on [1] is this in\ngrouping_planner:\n\n /*\n * If the input rel is marked consider_parallel and there's nothing\nthat's\n * not parallel-safe in the LIMIT clause, then the final_rel can be\nmarked\n * consider_parallel as well. Note that if the query has rowMarks or is\n * not a SELECT, consider_parallel will be false for every relation\nin the\n * query.\n */\n if (current_rel->consider_parallel &&\n is_parallel_safe(root, parse->limitOffset) &&\n is_parallel_safe(root, parse->limitCount))\n final_rel->consider_parallel = true;\n\nIf there is a need to add a LIMIT node, we don't consider generating\npartial paths for the final relation below (see commit\n0927d2f46ddd4cf7d6bf2cc84b3be923e0aedc52), so it seems unnecessary\nanymore to assess the parallel-safety of the LIMIT and OFFSET clauses.\nTo save cycles, why not remove those tests from that function like the\nattached?\n\nBest regards,\nEtsuro Fujita\n\n[1] https://commitfest.postgresql.org/22/1950/", "msg_date": "Wed, 27 Feb 2019 21:45:29 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Unneeded parallel safety tests in grouping_planner" }, { "msg_contents": "On Wed, Feb 27, 2019 at 7:46 AM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> Yet another thing I noticed while working on [1] is this in\n> grouping_planner:\n>\n> /*\n> * If the input rel is marked consider_parallel and there's nothing\n> that's\n> * not parallel-safe in the LIMIT clause, then the final_rel can be\n> marked\n> * consider_parallel as well. 
Note that if the query has rowMarks or is\n> * not a SELECT, consider_parallel will be false for every relation\n> in the\n> * query.\n> */\n> if (current_rel->consider_parallel &&\n> is_parallel_safe(root, parse->limitOffset) &&\n> is_parallel_safe(root, parse->limitCount))\n> final_rel->consider_parallel = true;\n>\n> If there is a need to add a LIMIT node, we don't consider generating\n> partial paths for the final relation below (see commit\n> 0927d2f46ddd4cf7d6bf2cc84b3be923e0aedc52), so it seems unnecessary\n> anymore to assess the parallel-safety of the LIMIT and OFFSET clauses.\n> To save cycles, why not remove those tests from that function like the\n> attached?\n\nBecause in the future we might want to consider generating\npartial_paths in cases where we don't do so today.\n\nI repeatedly made the mistake of believing that I could not bother\nsetting consider_parallel entirely correctly for one reason or\nanother, and I've gone through multiple iterations of fixing cases\nwhere I did that and it turned out to cause problems. I now believe\nthat we should try to get it right in every case, whether or not we\ncurrently think it's possible for it to matter. Sometimes it matters\nin ways that aren't obvious, and it complicates further development.\n\nI don't think we'd save much by changing this test anyway. 
Those\nis_parallel_safe() tests aren't entirely free, of course, but they\nshould be very cheap.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 27 Feb 2019 10:46:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unneeded parallel safety tests in grouping_planner" }, { "msg_contents": "(2019/02/28 0:46), Robert Haas wrote:\n> On Wed, Feb 27, 2019 at 7:46 AM Etsuro Fujita\n> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> Yet another thing I noticed while working on [1] is this in\n>> grouping_planner:\n>>\n>> /*\n>> * If the input rel is marked consider_parallel and there's nothing\n>> that's\n>> * not parallel-safe in the LIMIT clause, then the final_rel can be\n>> marked\n>> * consider_parallel as well. Note that if the query has rowMarks or is\n>> * not a SELECT, consider_parallel will be false for every relation\n>> in the\n>> * query.\n>> */\n>> if (current_rel->consider_parallel&&\n>> is_parallel_safe(root, parse->limitOffset)&&\n>> is_parallel_safe(root, parse->limitCount))\n>> final_rel->consider_parallel = true;\n>>\n>> If there is a need to add a LIMIT node, we don't consider generating\n>> partial paths for the final relation below (see commit\n>> 0927d2f46ddd4cf7d6bf2cc84b3be923e0aedc52), so it seems unnecessary\n>> anymore to assess the parallel-safety of the LIMIT and OFFSET clauses.\n>> To save cycles, why not remove those tests from that function like the\n>> attached?\n>\n> Because in the future we might want to consider generating\n> partial_paths in cases where we don't do so today.\n>\n> I repeatedly made the mistake of believing that I could not bother\n> setting consider_parallel entirely correctly for one reason or\n> another, and I've gone through multiple iterations of fixing cases\n> where I did that and it turned out to cause problems. 
I now believe\n> that we should try to get it right in every case, whether or not we\n> currently think it's possible for it to matter. Sometimes it matters\n> in ways that aren't obvious, and it complicates further development.\n>\n> I don't think we'd save much by changing this test anyway. Those\n> is_parallel_safe() tests aren't entirely free, of course, but they\n> should be very cheap.\n\nI got the point. Thanks for the explanation!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 01 Mar 2019 19:24:47 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Unneeded parallel safety tests in grouping_planner" } ]
[ { "msg_contents": "Hi,\n\nI got a failure in pg_dump/pg_restore as below:\npg_dump/pg_restore fails with 'ERROR: schema \"public\" already exists' for\nTAR_DUMP and CUSTOM_DUMP from v94/v95/v96 to v11/master.\n\n-- Take pg_dump in v94/v95/v96:\n[prabhat@localhost bin]$ ./pg_dump -f /tmp/*tar_dump_PG94.tar* -Ft postgres\n-p 9000\n[prabhat@localhost bin]$ ./pg_dump -f /tmp/*custom_dump_PG94.sql* -Fc\npostgres -p 9000\n\n-- Try to restore the above dump into v11/master:\n[prabhat@localhost bin]$ ./pg_restore -F t -U prabhat -d db3 -p 9001 /tmp/\n*tar_dump_PG94.tar*\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 6; 2615 2200 SCHEMA public\nprabhat\npg_restore: [archiver (db)] could not execute query: ERROR: schema\n\"public\" already exists\n Command was: CREATE SCHEMA public;\n\n\n\nWARNING: errors ignored on restore: 1\n\n[prabhat@localhost bin]$ ./pg_restore -F c -U prabhat -d db4 -p 9001 /tmp/\n*custom_dump_PG94.sql*\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 6; 2615 2200 SCHEMA public\nprabhat\npg_restore: [archiver (db)] could not execute query: ERROR: schema\n\"public\" already exists\n Command was: CREATE SCHEMA public;\n\n\n\nWARNING: errors ignored on restore: 1\n\nNote: I am able to perform \"Plain dump/restore\" across the branches.\n\n\n-- \n\n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Corporation\n\nThe Postgres Database Company\n\n", "msg_date": "Wed, 27 Feb 2019 19:08:53 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pg_dump/pg_restore fail for TAR_DUMP and CUSTOM_DUMP from v94/v95/v96\n to v11/master." }, { "msg_contents": "Hi,\n\nThe Commit 5955d934194c3888f30318209ade71b53d29777f has changed the logic\nto avoid dumping creation and comment commands for the public schema.\nFrom v11 onwards, we are using the DUMP_COMPONENT_ infrastructure in\nselectDumpableNamespace() to skip the public schema creation.\n\nAs reported by Prabhat, if we try to restore the custom/tar dump taken from\nv10 and earlier versions, we get the reported error for the public schema.\nThe reason for this error is that when we take a custom/tar dump from v10 and\nearlier versions, it has a \"CREATE SCHEMA public;\" statement and v11 fails to\nbypass that as per the current logic.\n\nThe plain format does not produce the error in this case, because in all\nversions, pg_dump in plain format does not generate that \"CREATE SCHEMA\npublic\". 
In v10 and earlier, we filter out that public schema creation in\n_printTocEntry() while pg_dump.\n\nIn custom/tar format, pg_dump in v10 and earlier versions generates the\nschema creation statement for the public schema, but again while pg_restore in\nthe same or back branches, it gets skipped through the same _printTocEntry()\nfunction.\n\nI think we can add logic in -\n1) BuildArchiveDependencies() to avoid dumping creation and comment\ncommands for the public schema since we do not have the DUMP_COMPONENT_\ninfrastructure in all supported back-branches.\nor\n2) dumpNamespace() to not include public schema creation.\n\nThoughts?\n\nRegards,\nSuraj\n\n", "msg_date": "Fri, 1 Mar 2019 16:45:56 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump/pg_restore fail for TAR_DUMP and CUSTOM_DUMP from\n v94/v95/v96 to v11/master." }, { "msg_contents": "Suraj Kharage <suraj.kharage@enterprisedb.com> writes:\n> The Commit 5955d934194c3888f30318209ade71b53d29777f has changed the logic\n> to avoid dumping creation and comment commands for the public schema.\n\nYup.\n\n> As reported by Prabhat, if we try to restore the custom/tar dump taken from\n> v10 and earlier versions, we get the reported error for public schema.\n\nYes. We're not intending to do anything about that. The previous scheme\nalso caused pointless errors in some situations, so this isn't really a\nregression. The area is messy enough already that trying to avoid errors\neven with old (wrong) archives would almost certainly cause more problems\nthan it solves. In particular, it's *not* easy to fix things in a way\nthat works conveniently for both superuser and non-superuser restores.\nSee the mail thread referenced by 5955d9341.\n\n(Note that it's only been very recently that anyone had any expectation\nthat pg_dump scripts could be restored with zero errors in all cases;\nthe usual advice was just to ignore noncritical errors. 
I'm not that\nexcited about it if the old advice is still needed when dealing with old\narchives.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Mar 2019 09:43:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump/pg_restore fail for TAR_DUMP and CUSTOM_DUMP from\n v94/v95/v96 to v11/master." } ]
[ { "msg_contents": "Hello hackers,\n\nThe type smgr has only one value 'magnetic disk'. ~15 years ago it\nalso had a value 'main memory', and in Berkeley POSTGRES 4.2 there was\na third value 'sony jukebox'. Back then, all tables had an associated\nblock storage manager, and it was recorded as an attribute relsmgr of\npg_class (or pg_relation as it was known further back). This was the\ntype of that attribute, removed by Bruce in 3fa2bb31 (1997).\n\nNothing seems to break if you remove it (except for some tests using\nit in an incidental way). See attached.\n\nMotivation: A couple of projects propose to add new smgr\nimplementations alongside md.c in order to use bufmgr.c for more kinds\nof files, but it seems entirely bogus to extend the unused smgr type\nto cover those.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Thu, 28 Feb 2019 19:02:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Drop type \"smgr\"?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Motivation: A couple of projects propose to add new smgr\n> implementations alongside md.c in order to use bufmgr.c for more kinds\n> of files, but it seems entirely bogus to extend the unused smgr type\n> to cover those.\n\nI agree that smgrtype as it stands is pretty pointless, but what\nwill we be using instead to get to those other implementations?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 01:08:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" 
}, { "msg_contents": "On Thu, Feb 28, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Motivation: A couple of projects propose to add new smgr\n> > implementations alongside md.c in order to use bufmgr.c for more kinds\n> > of files, but it seems entirely bogus to extend the unused smgr type\n> > to cover those.\n>\n> I agree that smgrtype as it stands is pretty pointless, but what\n> will we be using instead to get to those other implementations?\n\nOur current thinking is that smgropen() should know how to map a small\nnumber of special database OIDs to different smgr implementations\n(where currently it hard-codes smgr_which = 0). For example there\nwould be a pseudo-database for undo logs, and another for the SLRUs\nthat Shawn Debnath and others have been proposing to move into shared\nbuffers. Another idea would be to widen the buffer tag to include the\nselector. Unlike the ancestral code, it wouldn't need to appear in\ncatalogs or ever be seen or typed in by users so there still wouldn't\nbe a use for this type.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Thu, 28 Feb 2019 19:21:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Feb 28, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I agree that smgrtype as it stands is pretty pointless, but what\n>> will we be using instead to get to those other implementations?\n\n> Our current thinking is that smgropen() should know how to map a small\n> number of special database OIDs to different smgr implementations\n\nHmm. 
Maybe mapping based on tablespaces would be a better idea?\n\n> Unlike the ancestral code, it wouldn't need to appear in\n> catalogs or ever be seen or typed in by users so there still wouldn't\n> be a use for this type.\n\nYeah, the $64 problem at this level is that you don't really want\nto depend on catalog contents, because you cannot do a lookup to\nfind out what to do. So I agree that we're pretty unlikely to\nresurrect an smgr type per se. But I'd been expecting an answer\nmentioning pg_am OIDs, and was wondering how that'd work exactly.\nProbably, it would still be down to some C code having hard-wired\nknowledge about some OIDs ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 01:37:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 7:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Feb 28, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I agree that smgrtype as it stands is pretty pointless, but what\n> >> will we be using instead to get to those other implementations?\n>\n> > Our current thinking is that smgropen() should know how to map a small\n> > number of special database OIDs to different smgr implementations\n>\n> Hmm. Maybe mapping based on tablespaces would be a better idea?\n\nIn the undo log proposal (about which more soon) we are using\ntablespaces for their real purpose, so we need that OID. If you SET\nundo_tablespaces = foo then future undo data created by your session\nwill be written there, which might be useful for putting that IO on\ndifferent storage. We also use the relation OID to chop up the undo\naddress space into separate numbered undo logs, so that different\nsessions get their own space to insert data without trampling on each\nother's buffer locks. 
That leaves only the database OID to mess with\n(unless we widen buffer tags and the smgropen() interface).\n\n> > Unlike the ancestral code, it wouldn't need to appear in\n> > catalogs or ever be seen or typed in by users so there still wouldn't\n> > be a use for this type.\n>\n> Yeah, the $64 problem at this level is that you don't really want\n> to depend on catalog contents, because you cannot do a lookup to\n> find out what to do.\n\nRight. It would be a circular dependency if you had to read a catalog\nbefore you could consult low level SLRUs like (say) the clog* via\nshared buffers, since you need the clog to read the catalog, or (to\npeer a long way down the road and around several corners) if the\ncatalogs themselves could optionally be stored in an undo-aware table\naccess manager like zheap, which might require reading undo. I think\nthis block storage stuff exists at a lower level than relations and\ntransactions and therefore also catalogs, and there will be a small\nfixed number of them and it makes sense to hard-code the knowledge of\nthem.\n\n*I'm at least a little bit aware of the history here: your 2001 commit\n2589735d moved clog out of shared buffers! That enabled you to\ndevelop the system of segment files truncated from the front. That's\nsort of what this new smgr work is about; putting things in shared\nbuffers, but just mapping the blocks to paths differently than md.c,\nas appropriate.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Thu, 28 Feb 2019 20:39:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" 
}, { "msg_contents": "On Thu, Feb 28, 2019 at 5:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Feb 28, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I agree that smgrtype as it stands is pretty pointless, but what\n> >> will we be using instead to get to those other implementations?\n>\n> > Our current thinking is that smgropen() should know how to map a small\n> > number of special database OIDs to different smgr implementations\n>\n> Hmm. Maybe mapping based on tablespaces would be a better idea?\n>\n\nThanks for bringing up this idea of multiple smgr implementations. I also\nthought of implementing our own smgr implementation to support transparent\ndata encryption on disk based on tablespace mapping.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\n", "msg_date": "Thu, 28 Feb 2019 18:44:21 +1100", "msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 1:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The type smgr has only one value 'magnetic disk'. 
~15 years ago it\n> also had a value 'main memory', and in Berkeley POSTGRES 4.2 there was\n> a third value 'sony jukebox'. Back then, all tables had an associated\n> block storage manager, and it was recorded as an attribute relsmgr of\n> pg_class (or pg_relation as it was known further back). This was the\n> type of that attribute, removed by Bruce in 3fa2bb31 (1997).\n>\n> Nothing seems to break if you remove it (except for some tests using\n> it in an incidental way). See attached.\n\nFWIW, +1 from me. I thought about arguing to remove this a number of\nyears ago when I was poking around in this area for some reason, but\nit didn't seem important enough to be worth arguing about then. Now,\nbecause we're actually going to maybe-hopefully get some more smgrs\nthat do interesting things, it seems worth the arguing...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 28 Feb 2019 09:15:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Feb 28, 2019 at 7:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>> Our current thinking is that smgropen() should know how to map a small\n>>> number of special database OIDs to different smgr implementations\n\n>> Hmm. Maybe mapping based on tablespaces would be a better idea?\n\n> In the undo log proposal (about which more soon) we are using\n> tablespaces for their real purpose, so we need that OID. If you SET\n> undo_tablespaces = foo then future undo data created by your session\n> will be written there, which might be useful for putting that IO on\n> different storage.\n\nMeh. 
That's a point, but it doesn't exactly seem like a killer argument.\nJust in the abstract, it seems much more likely to me that people would\nwant per-database special rels than per-tablespace special rels. And\nI think your notion of a GUC that can control this is probably pie in\nthe sky anyway: if we can't afford to look into the catalogs to resolve\nnames at this code level, how are we going to handle a GUC?\n\nThe real reason I'm concerned about this, though, is that for either\na database or a tablespace, you can *not* get away with having a magic\nOID just hanging in space with no actual catalog row matching it.\nIf nothing else, you need an entry there to prevent someone from\nreusing the OID for another purpose. And a pg_database row that\ndoesn't correspond to a real database is going to break all kinds of\ncode, starting with pg_upgrade and the autovacuum launcher. Special\nrows in pg_tablespace are much less likely to cause issues, because\nof the precedent of pg_global and pg_default.\n\nIn short, I think you're better off equating smgrs to magic\ntablespaces, and if you need some way of letting users control\nwhere those map to storage-wise, control it in some other way.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 10:09:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The real reason I'm concerned about this, though, is that for either\n> a database or a tablespace, you can *not* get away with having a magic\n> OID just hanging in space with no actual catalog row matching it.\n> If nothing else, you need an entry there to prevent someone from\n> reusing the OID for another purpose. And a pg_database row that\n> doesn't correspond to a real database is going to break all kinds of\n> code, starting with pg_upgrade and the autovacuum launcher. 
Special\n> rows in pg_tablespace are much less likely to cause issues, because\n> of the precedent of pg_global and pg_default.\n\nMy first intuition was the same as yours -- that we should use the\ntablespace to decide which smgr is relevant -- but I now think that\nintuition was wrong. Even if you use the tablespace OID to select the\nsmgr, it doesn't completely solve the problem you're worried about\nhere. You still have to put SOMETHING in the database OID field, and\nthat's going to be just as fake as it was before. I guess you could\nuse the OID for pg_global for things like undo and SLRU data, but then\nhow is GetRelationPath() going to work? You don't have enough bits\nleft anywhere to specify both a tablespace and a location within that\ntablespace in a reasonable way, and I think it's far from obvious how\nyou would build a side channel that could carry that information.\n\nAlso, I don't see why we'd need a fake pg_database row in the first\nplace. IIUC, the OID counter wraps around to FirstNormalObjectId, so\nnobody should have a database with an OID less than that value.\n\nIf we were using smgrs to represent other kinds of ways of storing\nuser tables, e.g. network.c, cloud.c, magtape.c, punchcard.c, etc. -\nthen I think we'd have to find some way to let each user-defined\ntablespace pick an smgr, and I don't really know how that would work\ngiven the fact that we can't really use catalog lookups to figure out\nwhich one to use, but actually the direction here is to store files\ninternal to the system using special-purpose smgrs so that we can pull\nmore things into shared_buffers, and for that purpose the database OID\nseems cleaner.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 28 Feb 2019 10:35:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> My first intuition was the same as yours -- that we should use the\n> tablespace to decide which smgr is relevant -- but I now think that\n> intuition was wrong. Even if you use the tablespace OID to select the\n> smgr, it doesn't completely solve the problem you're worried about\n> here. You still have to put SOMETHING in the database OID field, and\n> that's going to be just as fake as it was before.\n\nMy thought was that that could be zero if not relevant ... isn't that\nwhat we do now for buffer tags for shared rels?\n\n> I guess you could\n> use the OID for pg_global for things like undo and SLRU data, but then\n> how is GetRelationPath() going to work? You don't have enough bits\n> left anywhere to specify both a tablespace and a location within that\n> tablespace in a reasonable way, and I think it's far from obvious how\n> you would build a side channel that could carry that information.\n\nIt's certainly possible/likely that we're going to end up needing to\nwiden buffer tags to represent the smgr explicitly, because some use\ncases are going to need a real database spec, some are going to need\na real tablespace spec, and some might need both. Maybe we should\njust bite that bullet.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 11:06:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 11:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's certainly possible/likely that we're going to end up needing to\n> widen buffer tags to represent the smgr explicitly, because some use\n> cases are going to need a real database spec, some are going to need\n> a real tablespace spec, and some might need both. Maybe we should\n> just bite that bullet.\n\nWell, Andres will probably complain about that. 
He thinks, IIUC, that\nthe buffer tags are too wide already and that it's significantly\nhurting performance on very very common operations - like buffer\nlookups. I haven't verified that myself, but I tend to think he knows\nwhat he's talking about.\n\nAnyway, given that your argument started off from the premise that we\nhad to have a pg_database row, I think we'd better look a little\nharder at whether that premise is correct before getting too excited\nhere. As I said in my earlier reply, I think that we probably don't\nneed to have a pg_database row given that we wrap around to\nFirstNormalObjectId; any value we hard-code would be less than that.\nIf there are other reasons why we'd need that, it might be useful to\nhear about them.\n\nHowever, all we really need to decide on this thread is whether we\nneed 'smgr' exposed as an SQL type. And I can't see why we need that\nno matter how all of the rest of this turns out. Nobody is currently\nproposing to give users a choice of smgrs, just to use them for\ninternal stuff. Even if that changed later, it doesn't necessarily\nmean we'd add back an SQL type, or that if we did it would look like\nthe one we have today.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 28 Feb 2019 12:36:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Feb 28, 2019 at 1:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Nothing seems to break if you remove it (except for some tests using\n>> it in an incidental way). See attached.\n\n> FWIW, +1 from me.\n\nTo be clear, I'm not objecting to the proposed patch either. 
I was\njust wondering where we plan to go from here, given that smgr.c wasn't\ngetting removed.\n\nBTW, there is stuff in src/backend/storage/smgr/README that is\nalready obsoleted by this patch, and more that might be obsoleted\nif development proceeds as discussed here. So that needs a look.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 12:39:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 10:35:50AM -0500, Robert Haas wrote:\n> On Thu, Feb 28, 2019 at 10:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The real reason I'm concerned about this, though, is that for either\n> > a database or a tablespace, you can *not* get away with having a magic\n> > OID just hanging in space with no actual catalog row matching it.\n> > If nothing else, you need an entry there to prevent someone from\n> > reusing the OID for another purpose. And a pg_database row that\n> > doesn't correspond to a real database is going to break all kinds of\n> > code, starting with pg_upgrade and the autovacuum launcher. Special\n> > rows in pg_tablespace are much less likely to cause issues, because\n> > of the precedent of pg_global and pg_default.\n> \n> Also, I don't see why we'd need a fake pg_database row in the first\n> place. IIUC, the OID counter wraps around to FirstNormalObjectId, so\n> nobody should have a database with an OID less than that value.\n\nWe have scripts under catalog directory that can check to ensure OIDs \naren't re-used accidentally. However, we still have to define an entry \nin a catalog somewhere and I was proposing creating a new one, \npg_storage_managers?, to track these entries. See [1] for previous\ndiscussion on this topic. 
We wouldn't need to do catalog lookups for\nbeing able to use the smgrs as the OIDs will be hardcoded in C, but the\ndata will be available for posterity and OID reservation.\n\n> It's certainly possible/likely that we're going to end up needing to\n> widen buffer tags to represent the smgr explicitly, because some use\n> cases are going to need a real database spec, some are going to need\n> a real tablespace spec, and some might need both. Maybe we should\n> just bite that bullet.\n\nFor now, the two projects that require the new smgrs, undo and slru, can \nget away with using the database OID as the smgr differentiator. We have \nenough left over bits to get our work done (Thomas please correct me if \nI am mistaken). The only issue is pg_buffercache would present DB OIDs \nthat wouldn't map correctly (along with the rest of relfilenode). We \ncan modify the output to include more columns to give caching specific \ninformation when these special buffers are encountered.\n\n> Well, Andres will probably complain about that. He thinks, IIUC, that\n> the buffer tags are too wide already and that it's significantly\n> hurting performance on very very common operations - like buffer\n> lookups. I haven't verified that myself, but I tend to think he knows\n> what he's talking about.\n\nI can imagine this would be a concern. I am curious about the perf impact, going to at\nleast create a patch and do some testing.\n\nAnother thought: my colleague Anton Shyrabokau suggested potentially\nre-using forknumber to differentiate smgrs. We are using 32 bits to\nmap 5 entries today. This approach would be similar to how we split up\nthe segment numbers and use the higher bits to identify forget or\nunlink requests for checkpointer.\n\n> Anyway, given that your argument started off from the premise that we\n> had to have a pg_database row, I think we'd better look a little\n> harder at whether that premise is correct before getting too excited\n> here. 
As I said in my earlier reply, I think that we probably don't\n> need to have a pg_database row given that we wrap around to\n> FirstNormalObjectId; any value we hard-code would be less than that.\n> If there are other reasons why we'd need that, it might be useful to\n> hear about them.\n\nSee above, I think a new catalog, instead of pg_database, can resolve\nthe issue while providing the ability to reserve the OIDs.\n\n> However, all we really need to decide on this thread is whether we\n> need 'smgr' exposed as an SQL type. And I can't see why we need that\n> no matter how all of the rest of this turns out. Nobody is currently\n> proposing to give users a choice of smgrs, just to use them for\n> internal stuff. Even if that changed later, it doesn't necessarily\n> mean we'd add back an SQL type, or that if we did it would look like\n> the one we have today.\n\n+1 smgr types can be removed.\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Thu, 28 Feb 2019 10:02:46 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Thu, Feb 28, 2019 at 10:02:46AM -0800, Shawn Debnath wrote:\n> in a catalog somewhere and I was proposing creating a new one, \n> pg_storage_managers?, to track these entries. See [1] for previous\n> discussion on this topic. We wouldn't need to do catalog lookups for\n\n*ahem*, here's the footnote:\n \n[1] \nhttps://www.postgresql.org/message-id/20180821184835.GA1032%4060f81dc409fc.ant.amazon.com\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Thu, 28 Feb 2019 10:06:20 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Hi,\n\nOn 2019-02-28 12:36:50 -0500, Robert Haas wrote:\n> Well, Andres will probably complain about that. 
He thinks, IIUC, that\n> the buffer tags are too wide already and that it's significantly\n> hurting performance on very very common operations - like buffer\n> lookups.\n\nCorrect. Turns out especially comparing the keys after the hash match is\npretty expensive. It also is a significant factor influencing the size\nof the hashtable, which influences how much of it can be in cache.\n\nMy plan is still to move to a two tiered system, where we have one\nunordered datastructure to map from (db, tablespace, oid) to a secondary\nordered datastructure that then maps from (block number) to an actual\noffset. With the first being cached somewhere in RelationData, therefore\nnot being performance critical. But while I hope to work for that in 13,\nI don't think making other large projects depend on it would be smart.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 10:11:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Hi,\n\nOn 2019-02-28 10:02:46 -0800, Shawn Debnath wrote:\n> We have scripts under catalog directory that can check to ensure OIDs \n> aren't re-used accidentally. However, we still have to define an entry \n> in a catalog somewhere and I was proposing creating a new one, \n> pg_storage_managers?, to track these entries. See [1] for previous\n> discussion on this topic. We wouldn't need to do catalog lookups for\n> being able to use the smgrs as the OIDs will be hardcoded in C, but the\n> data will be available for posterity and OID reservation.\n\nI'm inclined to just put them in pg_am, with a new type 's' (we already\nhave amtype = i for indexes, I'm planning to add 't' for tables\nsoon). While not a perfect fit for storage managers, it seems to fit\nwell enough.\n\n\n> Another thought: my colleague Anton Shyrabokau suggested potentially\n> re-using forknumber to differentiate smgrs. We are using 32 bits to\n> map 5 entries today. 
This approach would be similar to how we split up\n> the segment numbers and use the higher bits to identify forget or\n> unlink requests for checkpointer.\n\nThat could probably be done, without incurring too much overhead\nhere. I'm not sure that the added complexity around the tree is worth it\nhowever.\n\nI personally would just go with the DB oid for the near future, where we\ndon't need per-database storage for those. And then plan for buffer\nmanager improvements.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 10:15:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Shawn Debnath <sdn@amazon.com> writes:\n> On Thu, Feb 28, 2019 at 10:35:50AM -0500, Robert Haas wrote:\n>> Also, I don't see why we'd need a fake pg_database row in the first\n>> place. IIUC, the OID counter wraps around to FirstNormalObjectId, so\n>> nobody should have a database with an OID less than that value.\n\n> We have scripts under catalog directory that can check to ensure OIDs \n> aren't re-used accidentally. However, we still have to define an entry \n> in a catalog somewhere and I was proposing creating a new one, \n> pg_storage_managers?, to track these entries.\n\nThat would fail to capture the property that the smgr OIDs mustn't\nconflict with database OIDs, so the whole thing still seems like an\nugly kluge from here.\n\n> Another thought: my colleague Anton Shyrabokau suggested potentially\n> re-using forknumber to differentiate smgrs. We are using 32 bits to\n> map 5 entries today.\n\nYeah, that seems like it might be a workable answer.\n\nI really do think that relying on magic database OIDs is a bad idea;\nif you think you aren't going to need a real database ID in there in\nthe future, you're being short-sighted. 
And I guess the same argument\nestablishes that relying on magic tablespace OIDs would also be a bad\nidea, even if there weren't a near-term proposal on the table for\nneeding real tablespace IDs in an alternate smgr. So we need to find\nsome bits somewhere else, and the fork number field is the obvious\ncandidate. Since, per this discussion, the smgr IDs need not really\nbe OIDs at all, we just need a few bits for them.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 13:16:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On 2019-02-28 13:16:02 -0500, Tom Lane wrote:\n> Shawn Debnath <sdn@amazon.com> writes:\n> > On Thu, Feb 28, 2019 at 10:35:50AM -0500, Robert Haas wrote:\n> >> Also, I don't see why we'd need a fake pg_database row in the first\n> >> place. IIUC, the OID counter wraps around to FirstNormalObjectId, so\n> >> nobody should have a database with an OID less than that value.\n> \n> > We have scripts under catalog directory that can check to ensure OIDs \n> > aren't re-used accidentally. However, we still have to define an entry \n> > in a catalog somewhere and I was proposing creating a new one, \n> > pg_storage_managers?, to track these entries.\n> \n> That would fail to capture the property that the smgr OIDs mustn't\n> conflict with database OIDs, so the whole thing still seems like an\n> ugly kluge from here.\n\nIt's definitely a kludge, but it doesn't seem that bad for now. Delaying\na nice answer till we have a more efficient bufmgr representation\ndoesn't seem crazy to me.\n\nI don't think there's a real conflict risk given that we don't allow for\nduplicated oids across catalogs at bootstrap time, and this is\ndefinitely a bootstrap time issue.\n\n\n> > Another thought: my colleague Anton Shyrabokau suggested potentially\n> > re-using forknumber to differentiate smgrs. 
We are using 32 bits to\n> > map 5 entries today.\n> \n> Yeah, that seems like it might be a workable answer.\n\nYea, if we just split that into two 16bit entries, there'd not be much\nlost. Some mild performance regression due to more granular\nmemory->register reads/writes, but I can't even remotely see that\nmatter.\n\n\n> Since, per this discussion, the smgr IDs need not really be OIDs at\n> all, we just need a few bits for them.\n\nPersonally I find having them as oids more elegant than the distaste\nfrom misusing the database oid for a while, but I think it's fair to\ndisagree here.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 10:24:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 7:24 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-02-28 13:16:02 -0500, Tom Lane wrote:\n> > Shawn Debnath <sdn@amazon.com> writes:\n> > > Another thought: my colleague Anton Shyrabokau suggested potentially\n> > > re-using forknumber to differentiate smgrs. We are using 32 bits to\n> > > map 5 entries today.\n> > >\n> > > Yeah, that seems like it might be a workable answer.\n> >\n> > Yea, if we just split that into two 16bit entries, there'd not be much\n> > lost. Some mild performance regression due to more granular\n> > memory->register reads/writes, but I can't even remotely see that\n> > matter.\n\nOk, that's an interesting way to include it in BufferTag without making\nit wider. But then how about the SMGR interface? I think that value\nwould need to be added to the smgropen() interface, and all existing\ncallers would pass in (say) SMGR_RELATION (meaning \"use md.c\"), and\nnew code would use SMGR_UNDO, SMGR_SLRU etc. 
That seems OK to me.\nAncient POSTGRES had an extra argument like that and would say eg\nsmgropen(rd->rd_rel->relsmgr, rd), but in this new idea I think it'd\nalways be a constant or a value from a BufferTag, and the BufferTag\nwould have been set with a constant, since the code reading these\nbuffers would always be code that knows which it wants. We'd be using\nthis new argument not as a modifier to control storage location as\nthey did back then, but rather as a whole new namespace for\nRelFileNode values, that also happens to be stored differently; that\nmight be a hint that it could also go into RelFileNode (but I'm not\nsuggesting that).\n\n> > Since, per this discussion, the smgr IDs need not really be OIDs at\n> > all, we just need a few bits for them.\n>\n> Personally I find having them as oids more elegant than the distaste\n> from misusing the database oid for a wihle, but I think it's fair to\n> disagree here.\n\nIt sounds like your buffer mapping redesign would completely change\nthe economics and we could reconsider much of this later without too\nmuch drama; that was one of the things that made me feel that the\nmagic database OID approach was acceptable at least in the short term.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Fri, 1 Mar 2019 09:48:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Feb 28, 2019 at 7:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Thomas Munro <thomas.munro@gmail.com> writes:\n> >>> Our current thinking is that smgropen() should know how to map a small\n> >>> number of special database OIDs to different smgr implementations\n>\n> >> Hmm. 
Maybe mapping based on tablespaces would be a better idea?\n>\n> > In the undo log proposal (about which more soon) we are using\n> > tablespaces for their real purpose, so we need that OID. If you SET\n> > undo_tablespaces = foo then future undo data created by your session\n> > will be written there, which might be useful for putting that IO on\n> > different storage.\n>\n> Meh. That's a point, but it doesn't exactly seem like a killer argument.\n> Just in the abstract, it seems much more likely to me that people would\n> want per-database special rels than per-tablespace special rels. And\n> I think your notion of a GUC that can control this is probably pie in\n> the sky anyway: if we can't afford to look into the catalogs to resolve\n> names at this code level, how are we going to handle a GUC?\n\nI have this working like so:\n\n* undo logs have a small amount of meta-data in shared memory, stored\nin a file at checkpoint time, with all changes WAL logged, visible to\nusers in pg_stat_undo_logs view\n* one of the properties of an undo log is its tablespace (the point\nhere being that it's not in a catalog)\n* you don't need access to any catalogs to find the backing files for\na RelFileNode (the path via tablespace symlinks is derivable from\nspcNode)\n* therefore you can find your way from an UndoLogRecPtr in (say) a\nzheap page to the relevant blocks on disk without any catalog access;\nthis should work even in the apparently (but not actually) circular\ncase of a pg_tablespace catalog that is stored in zheap (not something\nwe can do right now, but hypothetically speaking), and has undo data\nthat is stored in some non-default tablespace that must be consulted\nwhile scanning the catalog (not that I'm suggesting that would\nnecessarily be a good idea to support catalogs in non-default\ntablespaces; I'm just addressing your theoretical point)\n* the GUC is used to resolve tablespace names to OIDs only by sessions\nthat are writing, when selecting (or 
creating) an undo log to attach\nto and begin writing into; those sessions have no trouble reading the\ncatalog to do so without problematic circularities, as above\n\nSeems to work; the main complications so far were coming up with\nreasonable behaviour and interlocking when you drop tablespaces that\ncontain undo logs (short version: if they're not needed for snapshots\nor rollback, they are dropped, wasting the rest of their undo address\nspace; otherwise they prevent the tablespace from being dropped with\na clear message to that effect).\n\nIt doesn't make any sense to put things like clog or any other SLRU in\na non-default tablespace though. It's perfectly OK if not all smgr\nimplementations know how to deal with tablespaces, and the SLRU\nsupport should just not support that.\n\n> The real reason I'm concerned about this, though, is that for either\n> a database or a tablespace, you can *not* get away with having a magic\n> OID just hanging in space with no actual catalog row matching it.\n> If nothing else, you need an entry there to prevent someone from\n> reusing the OID for another purpose. And a pg_database row that\n> doesn't correspond to a real database is going to break all kinds of\n> code, starting with pg_upgrade and the autovacuum launcher. Special\n> rows in pg_tablespace are much less likely to cause issues, because\n> of the precedent of pg_global and pg_default.\n\nGetNewObjectId() never returns values < FirstNormalObjectId.\n\nI don't think it's impossible for someone to want to put SMGRs in a\ncatalog of some kind some day. Even though the ones for clog, undo\netc would still probably need special hard-coded treatment as\ndiscussed, I suppose it's remotely possible that someone might some\nday figure out a useful way to allow extensions that provide different\nblock storage (nvram? zfs zvols? encryption? 
(see Haribabu's reply))\nbut I don't have any specific ideas about that or feel inclined to\ndesign something for unknown future use.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Fri, 1 Mar 2019 10:33:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 01, 2019 at 10:33:06AM +1300, Thomas Munro wrote:\n\n> It doesn't make any sense to put things like clog or any other SLRU in\n> a non-default tablespace though. It's perfectly OK if not all smgr\n> implementations know how to deal with tablespaces, and the SLRU\n> support should just not support that.\n\nIf the generic storage manager, or whatever we end up calling it, ends \nup being generic enough - it's possible that the tablespace value would have \nto be respected.\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Thu, 28 Feb 2019 13:41:45 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Hi,\n\nOn 2019-03-01 09:48:33 +1300, Thomas Munro wrote:\n> On Fri, Mar 1, 2019 at 7:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-02-28 13:16:02 -0500, Tom Lane wrote:\n> > > Shawn Debnath <sdn@amazon.com> writes:\n> > > > Another thought: my colleague Anton Shyrabokau suggested potentially\n> > > > re-using forknumber to differentiate smgrs. We are using 32 bits to\n> > > > map 5 entries today.\n> > >\n> > > Yeah, that seems like it might be a workable answer.\n> >\n> > Yea, if we just split that into two 16bit entries, there'd not be much\n> > lost. Some mild performance regression due to more granular\n> > memory->register reads/writes, but I can't even remotely see that\n> > matter.\n> \n> Ok, that's an interesting way to include it in BufferTag without making\n> it wider. But then how about the SMGR interface? 
I think that value\n> would need to be added to the smgropen() interface, and all existing\n> callers would pass in (say) SGMR_RELATION (meaning \"use md.c\"), and\n> new code would use SMGR_UNDO, SMGR_SLRU etc. That seems OK to me.\n\nRight, seems like we should do that independent of whether we end up\nreusing the dboid or not.\n\n\n> > > Since, per this discussion, the smgr IDs need not really be OIDs at\n> > > all, we just need a few bits for them.\n> >\n> > Personally I find having them as oids more elegant than the distaste\n> > from misusing the database oid for a wihle, but I think it's fair to\n> > disagree here.\n> \n> It sounds like your buffer mapping redesign would completely change\n> the economics and we could reconsider much of this later without too\n> much drama; that was one of the things that made me feel that the\n> magic database OID approach was acceptable at least in the short term.\n\nRight.\n\nFWIW, I think while distasteful, I could see us actually using oids,\njust ones that are small enough to fit into 16bit...\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 28 Feb 2019 14:08:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> FWIW, I think while distasteful, I could see us actually using oids,\n> just ones that are small enough to fit into 16bit...\n\nIf we suppose that all smgrs must be built-in, that's not even much\nof a restriction...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 17:20:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" 
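For concreteness, a sketch of the "split the fork field" layout being discussed: the 32-bit slot that today holds only a fork number is divided into two 16-bit halves, so the tag does not get any wider. This is a hypothetical layout, not the actual BufferTag definition:

```c
#include <stdint.h>

typedef uint32_t Oid;
typedef uint32_t BlockNumber;

typedef struct RelFileNode
{
    Oid spcNode;                /* tablespace */
    Oid dbNode;                 /* database */
    Oid relNode;                /* relation */
} RelFileNode;

/* The old 32-bit fork field split into two 16-bit halves: a storage
 * manager id and a fork number.  Total tag width is unchanged, so the
 * buffer mapping hash table entries do not grow. */
typedef struct BufferTag
{
    RelFileNode rnode;
    uint16_t    smgrId;         /* which smgr: md, undo, slru, ... */
    uint16_t    forkNum;        /* fork within that smgr's namespace */
    BlockNumber blockNum;
} BufferTag;
```

A 16-bit smgrId also leaves room for the idea of still using real (small) OIDs for the storage managers, as suggested above.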
}, { "msg_contents": "On Thu, Feb 28, 2019 at 02:08:49PM -0800, Andres Freund wrote:\n> On 2019-03-01 09:48:33 +1300, Thomas Munro wrote:\n> > On Fri, Mar 1, 2019 at 7:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2019-02-28 13:16:02 -0500, Tom Lane wrote:\n> > > > Shawn Debnath <sdn@amazon.com> writes:\n> > > > > Another thought: my colleague Anton Shyrabokau suggested potentially\n> > > > > re-using forknumber to differentiate smgrs. We are using 32 bits to\n> > > > > map 5 entries today.\n> > > >\n> > > > Yeah, that seems like it might be a workable answer.\n> > >\n> > > Yea, if we just split that into two 16bit entries, there'd not be much\n> > > lost. Some mild mild performance regression due to more granular\n> > > memory->register reads/writes, but I can't even remotely see that\n> > > matter.\n> > \n> > Ok, that's a interesting way to include it in BufferTag without making\n> > it wider. But then how about the SMGR interface? I think that value\n> > would need to be added to the smgropen() interface, and all existing\n> > callers would pass in (say) SGMR_RELATION (meaning \"use md.c\"), and\n> > new code would use SMGR_UNDO, SMGR_SLRU etc. That seems OK to me.\n> \n> Right, seems like we should do that independent of whether we end up\n> reusing the dboid or not.\n\nFood for thought: if we are going to muck with the smgr APIs, would it \nmake sense to move away from SMgrRelation to something a bit more \ngeneric like, say, SMgrHandle to better organize the internal contents \nof this structure? Internally, we could move things into an union and \nbased on type of handle: relation, undo, slru/generic, translate the \ncontents correctly. 
As you can guess, SMgrRelationData today is very \nspecific to relations and holds md-specific information whose memory \nwould be better re-used for the other storage managers.\n\nThoughts?\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Thu, 28 Feb 2019 14:31:39 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 10:41 AM Shawn Debnath <sdn@amazon.com> wrote:\n> On Fri, Mar 01, 2019 at 10:33:06AM +1300, Thomas Munro wrote:\n> > It doesn't make any sense to put things like clog or any other SLRU in\n> > a non-default tablespace though. It's perfectly OK if not all smgr\n> > implementations know how to deal with tablespaces, and the SLRU\n> > support should just not support that.\n>\n> If the generic storage manager, or whatever we end up calling it, ends\n> up being generic enough - it's possible that the tablespace value would have\n> to be respected.\n\nRight, you and I have discussed this a bit off-list, but for the\nbenefit of others, I think what you're getting at with \"generic\nstorage manager\" here is something like this: on the one hand, our\nproposed revival of SMGR as a configuration point is about\nsupporting alternative file layouts for bufmgr data, but at the same\ntime there is some background noise about direct IO, block encryption,\n... and who knows what alternative block storage someone might come up\nwith ... at the block level. So although it sounds a bit\ncontradictory to be saying \"let's make all these different SMGRs!\" at\nthe same time as saying \"but we'll eventually need a single generic\nSMGR that is smart enough to be parameterised for all of these\nlayouts!\", I see why you say it. In fact, the prime motivation for\nputting SLRUs into shared buffers is to get better buffering, because\n(anecdotally) slru.c's mini-buffer scheme performs abysmally without\nthe benefit of an OS page cache. 
If we add optional direct IO support\n(something I really want), we need it to apply to SLRUs, undo and\nrelations, ideally without duplicating code, so we'd probably want to\nchop things up differently. At some point I think we'll need to\nseparate the questions \"how to map blocks to filenames and offsets\"\nand \"how to actually perform IO\". I think the first question would be\ncontrolled by the SMGR IDs as discussed, but the second question\nprobably needs to be controlled by GUCs that control all IO, and/or\nspecial per relation settings (supposing you can encrypt just one\ntable, as a random example I know nothing about); but that seems way\nout of scope for the present projects. IMHO the best path from here\nis to leave md.c totally untouched for now as the SMGR for plain old\nrelations, while we work on getting these new kinds of bufmgr data\ninto the tree as a first step, and a later hypothetical direct IO or\nwhatever project can pay for the refactoring to separate IO from\nlayout.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Fri, 1 Mar 2019 11:32:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 11:31 AM Shawn Debnath <sdn@amazon.com> wrote:\n> On Thu, Feb 28, 2019 at 02:08:49PM -0800, Andres Freund wrote:\n> > On 2019-03-01 09:48:33 +1300, Thomas Munro wrote:\n> > > On Fri, Mar 1, 2019 at 7:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2019-02-28 13:16:02 -0500, Tom Lane wrote:\n> > > > > Shawn Debnath <sdn@amazon.com> writes:\n> > > > > > Another thought: my colleague Anton Shyrabokau suggested potentially\n> > > > > > re-using forknumber to differentiate smgrs. We are using 32 bits to\n> > > > > > map 5 entries today.\n> > > > >\n> > > > > Yeah, that seems like it might be a workable answer.\n> > > >\n> > > > Yea, if we just split that into two 16bit entries, there'd not be much\n> > > > lost. 
Some mild mild performance regression due to more granular\n> > > > memory->register reads/writes, but I can't even remotely see that\n> > > > matter.\n> > >\n> > > Ok, that's a interesting way to include it in BufferTag without making\n> > > it wider. But then how about the SMGR interface? I think that value\n> > > would need to be added to the smgropen() interface, and all existing\n> > > callers would pass in (say) SGMR_RELATION (meaning \"use md.c\"), and\n> > > new code would use SMGR_UNDO, SMGR_SLRU etc. That seems OK to me.\n> >\n> > Right, seems like we should do that independent of whether we end up\n> > reusing the dboid or not.\n>\n> Food for thought: if we are going to muck with the smgr APIs, would it\n> make sense to move away from SMgrRelation to something a bit more\n> generic like, say, SMgrHandle to better organize the internal contents\n> of this structure? Internally, we could move things into an union and\n> based on type of handle: relation, undo, slru/generic, translate the\n> contents correctly. As you can guess, SMgrRelationData today is very\n> specific to relations and holding md specific information whose memory\n> would be better served re-used for the other storage managers.\n>\n> Thoughts?\n\nRight, it does contain some md-specific stuff without even\napologising. Also smgropen() was rendered non-virtual at some point\n(I mean that implementations don't even get a chance to initialise\nanything, which works out only because md-specific code has leaked\ninto smgr.c). In one of my undo patches (which I'll post an updated\nversion of on the appropriate thread soon) I added a void * called\nprivate_data so that undo_file.c can keep track of some stuff, but\nyeah I agree that more tidying up could be done.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Fri, 1 Mar 2019 11:38:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" 
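The "smgropen() was rendered non-virtual" point above can be illustrated with a small sketch: give each implementation an entry in a callback table (in the spirit of smgr.c's f_smgr table) whose open hook may initialise per-handle state such as the private_data pointer mentioned above. All names here are hypothetical:

```c
#include <stddef.h>

/* Minimal stand-in for SMgrRelationData; illustrative only. */
typedef struct SMgrRelationData
{
    int   smgr_which;       /* index into the callback table */
    void *private_data;     /* owned by the specific implementation */
} SMgrRelationData;
typedef SMgrRelationData *SMgrRelation;

/* Per-implementation callbacks, in the spirit of smgr.c's f_smgr. */
typedef struct f_smgr
{
    const char *smgr_name;
    void      (*smgr_open)(SMgrRelation reln);  /* may set private_data */
} f_smgr;

static int undo_open_count = 0;

static void
md_open(SMgrRelation reln)
{
    (void) reln;            /* md needs no per-handle initialisation here */
}

static void
undo_open(SMgrRelation reln)
{
    /* e.g. undo_file.c could hang per-handle bookkeeping off the handle */
    undo_open_count++;
    reln->private_data = &undo_open_count;
}

static const f_smgr smgrsw[] = {
    {"md", md_open},
    {"undo", undo_open},
};

/* A "virtual" smgropen(): the implementation's hook runs at open time. */
static SMgrRelation
smgropen_sketch(int which, SMgrRelationData *reln)
{
    reln->smgr_which = which;
    reln->private_data = NULL;
    smgrsw[which].smgr_open(reln);
    return reln;
}
```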
}, { "msg_contents": "On Fri, Mar 01, 2019 at 11:38:49AM +1300, Thomas Munro wrote:\n\n> > Food for thought: if we are going to muck with the smgr APIs, would it\n> > make sense to move away from SMgrRelation to something a bit more\n> > generic like, say, SMgrHandle to better organize the internal contents\n> > of this structure? Internally, we could move things into an union and\n> > based on type of handle: relation, undo, slru/generic, translate the\n> > contents correctly. As you can guess, SMgrRelationData today is very\n> > specific to relations and holding md specific information whose memory\n> > would be better served re-used for the other storage managers.\n> >\n> > Thoughts?\n> \n> Right, it does contain some md-specific stuff without even\n> apologising. Also smgropen() was rendered non-virtual at some point\n> (I mean that implementations don't even get a chance to initialise\n> anything, which works out only because md-specific code has leaked\n> into smgr.c). In one of my undo patches (which I'll post an updated\n> version of on the appropriate thread soon) I added a void * called\n> private_data so that undo_file.c can keep track of some stuff, but\n> yeah I agree that more tidying up could be done.\n\nI can send out a patch for this (on a separate thread!) to unblock us \nboth. Unless you are closer to completion on this.\n\nI prefer the union approach to make it more readable. I was considering \nre-factoring the structure to SMgrHandle and having the relation \nspecific structure retain SMgrRelationData. For undo we could have\nSMgrUndoData, and similarly for SLRU (I will come up with a better name \nthan generic). Then have these be in the union instead of the \nindividual members of the struct.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Thu, 28 Feb 2019 15:03:49 -0800", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" 
}, { "msg_contents": "\n\nOn 01.03.2019 1:32, Thomas Munro wrote:\n> On Fri, Mar 1, 2019 at 10:41 AM Shawn Debnath <sdn@amazon.com> wrote:\n>> On Fri, Mar 01, 2019 at 10:33:06AM +1300, Thomas Munro wrote:\n>>> It doesn't make any sense to put things like clog or any other SLRU in\n>>> a non-default tablespace though. It's perfectly OK if not all smgr\n>>> implementations know how to deal with tablespaces, and the SLRU\n>>> support should just not support that.\n>> If the generic storage manager, or whatever we end up calling it, ends\n>> up being generic enough - its possible that tablespace value would have\n>> to be respected.\n> Right, you and I have discussed this a bit off-list, but for the\n> benefit of others, I think what you're getting at with \"generic\n> storage manager\" here is something like this: on the one hand, our\n> proposed revival of SMGR as a configuration point is about is\n> supporting alternative file layouts for bufmgr data, but at the same\n> time there is some background noise about direct IO, block encryption,\n> ... and who knows what alternative block storage someone might come up\n> with ... at the block level. So although it sounds a bit\n> contradictory to be saying \"let's make all these different SMGRs!\" at\n> the same time as saying \"but we'll eventually need a single generic\n> SMGR that is smart enough to be parameterised for all of these\n> layouts!\", I see why you say it. In fact, the prime motivation for\n> putting SLRUs into shared buffers is to get better buffering, because\n> (anecdotally) slru.c's mini-buffer scheme performs abysmally without\n> the benefit of an OS page cache. If we add optional direct IO support\n> (something I really want), we need it to apply to SLRUs, undo and\n> relations, ideally without duplicating code, so we'd probably want to\n> chop things up differently. 
At some point I think we'll need to\n> separate the questions \"how to map blocks to filenames and offsets\"\n> and \"how to actually perform IO\". I think the first question would be\n> controlled by the SMGR IDs as discussed, but the second question\n> probably needs to be controlled by GUCs that control all IO, and/or\n> special per relation settings (supposing you can encrypt just one\n> table, as a random example I know nothing about); but that seems way\n> out of scope for the present projects. IMHO the best path from here\n> is to leave md.c totally untouched for now as the SMGR for plain old\n> relations, while we work on getting these new kinds of bufmgr data\n> into the tree as a first step, and a later hypothetical direct IO or\n> whatever project can pay for the refactoring to separate IO from\n> layout.\n>\n\nI completely agree with this statement:\n\nAt some point I think we'll need to separate the questions \"how to map blocks to filenames and offsets\" and \"how to actually perform IO\".\n\n\nThere are two subsystems developed in PgPro which are integrated in \nPostgres at file IO level: CFS (compressed file system) and SnapFS (fast \ndatabase snapshots).\nFirst one provides page level encryption and compression, second - \nmechanism for fast restoring of database state.\nBoth are implemented by patching fd.c. My first idea was to implement \nthem as alternative storage devices (alternative to md.c). But it will \nrequire duplication of all segment mapping logic from md.c + file \ndescriptors cache from fd.c. It will be nice if it is possible to \nredefine raw file operations (FileWrite, FileRead,...) without affecting \nsegment mapping logic.\n\nOne more thing... 
From my point of view one of the drawbacks of Postgres \nis that it requires an underlying file system and is not able to work with \nraw partitions.\nIt seems to me that bypassing the file system layer can significantly improve \nperformance and give more possibilities for IO performance tuning.\nCertainly it will require a lot of changes in Postgres storage layer so \nthis is not what I suggest to implement or even discuss right now.\nBut it may be useful to keep it in mind in discussions concerning \n\"generic storage manager\".\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 1 Mar 2019 11:11:02 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Drop type \"smgr\"?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 9:11 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> One more thing... From my point of view one of the drawbacks of Postgres\n> is that it requires an underlying file system and is not able to work with\n> raw partitions.\n> It seems to me that bypassing the file system layer can significantly improve\n> performance and give more possibilities for IO performance tuning.\n> Certainly it will require a lot of changes in Postgres storage layer so\n> this is not what I suggest to implement or even discuss right now.\n> But it may be useful to keep it in mind in discussions concerning\n> \"generic storage manager\".\n\nHmm. Speculation-around-the-water-cooler-mode: I think the arguments\nfor using raw partitions are approximately the same as the arguments\nfor using a big data file that holds many relations.
The three I can\nthink of are (1) the space is entirely preallocated, which *might*\nhave performance and safety advantages, (2) if you encrypt it, no one\ncan see the structure (database OIDs, relation OIDs and sizes) and (3)\nit might allow pages to be moved from one relation to another without\ncopying or interleaved in interesting ways (think SQL Server parallel\nbtree creation that \"stitches together\" btrees produced by parallel\nworkers, or Oracle \"clustered\" tables where the pages of two tables\nare physically interleaved in an order that works nicely when you join\nthose two tables, or perhaps schemes for moving data between\nrelations/partitions quickly). On the other hand, to make that work\nyou have to own the problem of space allocation/management that we\ncurrently leave to the authors of XFS et al, and those guys have\nworked on that for *years and years* and they work really well. If\nyou made all that work for big preallocated data files, then sure, you\ncould also make it work for raw partitions, but I'm not sure how much\nperformance advantage there is for that final step. I suspect that a\nmajor reason for Oracle to support raw block devices many years ago\nwas because back then there was no other way to escape from the page\ncache. Direct IO hasn't always been available or portable, and hasn't\nalways worked well. That said, it does seem plausible that we could\ndo the separation of (1) block -> pathname/offset mappings and (2)\nactual IO operations in a way that you could potentially write your\nown pseudo-filesystem that stores a whole PostgreSQL cluster inside\nbig data files or raw partitions. Luckily we don't need to tackle\nsuch mountainous terrain to avoid the page cache, today.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Sat, 2 Mar 2019 11:04:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" 
}, { "msg_contents": "On Thu, Feb 28, 2019 at 7:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The type smgr has only one value 'magnetic disk'. ~15 years ago it\n> also had a value 'main memory', and in Berkeley POSTGRES 4.2 there was\n> a third value 'sony jukebox'. Back then, all tables had an associated\n> block storage manager, and it was recorded as an attribute relsmgr of\n> pg_class (or pg_relation as it was known further back). This was the\n> type of that attribute, removed by Bruce in 3fa2bb31 (1997).\n>\n> Nothing seems to break if you remove it (except for some tests using\n> it in an incidental way). See attached.\n\nPushed.\n\nThanks all for the interesting discussion. I'm trying out Anton\nShyrabokau's suggestion of stealing bits from the fork number. More\non that soon.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Thu, 7 Mar 2019 15:55:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Drop type \"smgr\"?" } ]
[ { "msg_contents": "Dear hackers\n\nHi there !\nOne past thread about introducing CREATE OR REPLACE TRIGGER into the syntax\nhad stopped without complete discussion in terms of LOCK level.\n\nThe past thread is this. I'd like to inherit this one.\nhttps://www.postgresql.org/message-id/flat/0B4917A40C80E34BBEC4BE1A7A9AB7E276F5D9%40g01jpexmbkw05#39f3c956d549c134474724d2b775399c\nMr. Tom Lane mentioned that this change requires really careful study in this thread.\n\nFirst of all, please don't forget I don't talk about DO CLAUSE in this thread.\nSecondly, Mr. Surafel Temesgen pointed out a bug but it doesn't appear.\n\nAnyway, let's go back to the main topic.\n From my perspective, how CREATE OR REPLACE TRIGGER works is,\nwhen there is no counterpart replaced by a new trigger,\nCREATE TRIGGER is processed with SHARE ROW EXCLUSIVE LOCK as usual.\n\nOn the other hand, when there is one,\nREPLACE TRIGGER procedure is executed with ACCESS EXCLUSIVE LOCK.\n\nThis feeling comes from my idea\nthat acquiring ACCESS EXCLUSIVE LOCK when replacing trigger occurs\nprovides data consistency between transactions and protects concurrent pg_dump.\n\nIn order to make this come true, as the first step,\nI've made a patch to add CREATE OR REPLACE TRIGGER with some basic tests in triggers.sql.\n\nYet, I'm still wondering which part of LOCK level in this patch should be raised to ACCESS EXCLUSIVE LOCK.\nCould anyone give me some advice about\nhow to protect the process of trigger replacement in the way I suggested above ?\n\n--------------------\nTakamichi Osumi", "msg_date": "Thu, 28 Feb 2019 08:43:49 +0000", "msg_from": "\"Osumi, Takamichi\" <osumi.takamichi@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "extension patch of CREATE OR REPLACE TRIGGER " }, { "msg_contents": "On Thu, 28 Feb 2019 at 21:44, Osumi, Takamichi\n<osumi.takamichi@jp.fujitsu.com> wrote:\n> I've made a patch to add CREATE OR REPLACE TRIGGER with some basic tests in triggers.sql.\n\nHi,\n\nI see there are two patch
entries in the commitfest for this. Is that\na mistake? If so can you \"Withdraw\" one of them?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 1 Mar 2019 00:26:42 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "> I've made a patch to add CREATE OR REPLACE TRIGGER with some basic tests in triggers.sql.\r\n\r\n>> I see there are two patch entries in the commitfest for this. Is that a\r\n>> mistake? If so can you \"Withdraw\" one of them?\r\n\r\nOh my bad. Sorry, this time was my first time to register my patch !\r\nPlease withdraw the old one, \"extension patch to add OR REPLACE clause to CREATE TRIGGER\".\r\nMy latest version is \"extension patch of CREATE OR REPLACE TRIGGER\".\r\n\r\nThanks ! \r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n\r\n\r\n", "msg_date": "Fri, 1 Mar 2019 02:59:48 +0000", "msg_from": "\"Osumi, Takamichi\" <osumi.takamichi@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On 2/28/19 10:43 AM, Osumi, Takamichi wrote:\n> \n> One past thread about introducing CREATE OR REPLACE TRIGGER into the syntax\n> \n> had stopped without complete discussion in terms of LOCK level.\n> \n> The past thread is this. 
I'd like to inherit this one.\n\nSince this patch landed at the last moment in the last commitfest for \nPG12 I have marked it as targeting PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Tue, 5 Mar 2019 11:18:50 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Tue, Mar 5, 2019 at 10:19 PM David Steele <david@pgmasters.net> wrote:\n> On 2/28/19 10:43 AM, Osumi, Takamichi wrote:\n> > One past thread about introducing CREATE OR REPLACE TRIGGER into the syntax\n> >\n> > had stopped without complete discussion in terms of LOCK level.\n> >\n> > The past thread is this. I'd like to inherit this one.\n>\n> Since this patch landed at the last moment in the last commitfest for\n> PG12 I have marked it as targeting PG13.\n\nHello Osumi-san,\n\nThe July Commitfest is now beginning. To give the patch the best\nchance of attracting reviewers, could you please post a rebased\nversion? The last version doesn't apply.\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Jul 2019 23:57:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Mr. Thomas\r\n\r\n> The July Commitfest is now beginning. To give the patch the best chance of\r\n> attracting reviewers, could you please post a rebased version? 
The last version\r\n> doesn't apply.\r\n\r\nI really appreciate your comments.\r\nRecently, I'm very busy because of my work.\r\nSo, I'll fix this within a couple of weeks.\r\n\r\nRegards,\r\n\tOsumi Takamichi\r\n\r\n", "msg_date": "Wed, 3 Jul 2019 04:37:00 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Wed, Jul 03, 2019 at 04:37:00AM +0000, osumi.takamichi@fujitsu.com wrote:\n> I really appreciate your comments.\n> Recently, I'm very busy because of my work.\n> So, I'll fix this within a couple of weeks.\n\nPlease note that I have switched the patch as waiting on author.\n--\nMichael", "msg_date": "Thu, 4 Jul 2019 17:01:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Michael san\n\n> > So, I'll fix this within a couple of weeks.\n> Please note that I have switched the patch as waiting on author.\n\nThanks for your support.\nI've rebased the previous patch to be applied\nto the latest PostgreSQL without any failure of regression tests.\n\nBest, \n\tTakamichi Osumi", "msg_date": "Tue, 9 Jul 2019 09:21:50 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi Takamichi Osumi,\nOn Tue, Jul 9, 2019\n\n> I've rebased the previous patch to be applied\n>\n\nI don't test your patch fully yet but here are same comment.\nThere are same white space issue like here\n- bool is_internal)\n+ bool is_internal,\n+ Oid existing_constraint_oid)\nin a few place\n\n+ // trigoid = HeapTupleGetOid(tuple); // raw code\nplease remove this line if you don't use it.\n\n+ if(!existing_constraint_oid){\n+ conOid = GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\n+ 
Anum_pg_constraint_oid);\n+ values[Anum_pg_constraint_oid - 1] = ObjectIdGetDatum(conOid);\n+ }\nincorrect bracing style here and its appear in a few other places too\nand it seems to me that the change in regression test is\nhuge can you reduce it?\n\nregards\nSurafel\n\n", "msg_date": "Wed, 10 Jul 2019 15:42:35 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Surafel\r\n\r\nThank you for your check of my patch.\r\nI've made the version 03 to\r\nfix what you mentioned about my patch.\r\n\r\nI corrected my wrong bracing styles and\r\nalso reduced the amount of my regression test.\r\nOf course, I erased unnecessary\r\nwhite spaces and the C++ style comment.\r\n\r\nRegards,\r\n Takamichi Osumi\r\n\r\nI don't test your patch fully yet but here are same comment.\r\nThere are same white space issue like here\r\n- bool is_internal)\r\n+ bool is_internal,\r\n+ Oid existing_constraint_oid)\r\nin a few place\r\n\r\n+ // trigoid = HeapTupleGetOid(tuple); // raw code\r\nplease remove this line if you don't use it.\r\n\r\n+ if(!existing_constraint_oid){\r\n+ conOid =
GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\r\n+ Anum_pg_constraint_oid);\r\n+ values[Anum_pg_constraint_oid - 1] = ObjectIdGetDatum(conOid);\r\n+ }\r\nincorrect bracing style here and its appear in a few other places too\r\nand it seems to me that the change in regression test is\r\nhuge can you reduce it?\r\n\r\nregards\r\nSurafel", "msg_date": "Mon, 22 Jul 2019 11:45:57 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> [ CREATE_OR_REPLACE_TRIGGER_v03.patch ]\n\nI took a quick look through this just to see what was going on.\nA few comments:\n\n* Upthread you asked about changing the lock level to be\nAccessExclusiveLock if the trigger already exists, but the patch doesn't\nactually do that. Which is fine by me, because that sounds like a\nperfectly bad idea. In the first place, nobody is going to expect that\nOR REPLACE changes the lock level, and in the second place, you can't\nactually tell whether the trigger exists until you already have some\nlock on the table. I do not put any credit in the argument that it's\nmore important to lock out pg_dump against a concurrent REPLACE TRIGGER\nthan it is to lock out a concurrent CREATE TRIGGER, anyway. So I think\nkeeping it at ShareRowExclusiveLock is fine.\n\n* I wouldn't recommend adding CreateConstraintEntry's new argument\nat the end. IME, \"add at the end\" is almost always bad coding style;\nthe right thing is \"add where it belongs, that is where you'd have\nput it if you were writing the list from scratch\". To the admittedly\nimperfect extent that the order of CreateConstraintEntry's arguments\nmatches the column order of pg_constraint, there's a good argument\nthat the OID should be *first*. 
(Maybe, as long as we've gotta touch\nall the callers anyway, we should fix the other random deviations\nfrom the catalog's column order, too.)\n\n* While you're at it, it wouldn't hurt to fix CreateConstraintEntry's\nheader comment, maybe like\n\n- * The new constraint's OID is returned.\n+ * The new constraint's OID is returned. (This will be the same as\n+ * \"conOid\" if that is specified as nonzero.)\n\n* The new code added to CreateTrigger could stand a rethink, too.\nFor starters, this comment does not describe the code stanza\njust below it, but something considerably further down:\n\n \t/*\n+\t * Generate the trigger's OID now, so that we can use it in the name if\n+\t * needed.\n+\t */\n\nIt's also quite confusing because if there is a pre-existing trigger,\nwe *don't* generate any new OID. I'd make that say \"See if there is a\npre-existing trigger of the same name\", and then comment the later OID\ngeneration appropriately. Also, the code below the pg_trigger search\nseems pretty confused and redundant:\n\n+\tif (!trigger_exists)\n+\t\t// do something\n+\tif (stmt->replace && trigger_exists)\n+\t{\n+\t\tif (stmt->isconstraint && !OidIsValid(existing_constraint_oid))\n+\t\t\t// do something\n+\t\telse if (!stmt->isconstraint && OidIsValid(existing_constraint_oid))\n+\t\t\t// do something\n+\t}\n+\telse if (trigger_exists && !isInternal)\n+\t{\n+\t\t// do something\n+\t}\n\nI'm not on board with the idea that testing trigger_exists three separate\ntimes, in three randomly-different-looking ways, makes things more\nreadable. I'm also not excited about spending the time to scan pg_trigger\nat all in the isInternal case, where you're going to ignore the result.\nSo I think this could use some refactoring.\n\nAlso, in the proposed tests:\n\n+\\h CREATE TRIGGER;\n\nWe do not test \\h output in any existing regression test, and we're\nnot going to start doing so in this one. 
For one thing, the expected\nURL would break every time we forked off a new release branch.\n(There would surely be value in having more-than-no test coverage\nof psql/help.c, but that's a matter for its own patch, which would\nneed some thought about how to cope with instability of the output.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 16:44:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Tue, Jul 30, 2019 at 04:44:11PM -0400, Tom Lane wrote:\n> We do not test \\h output in any existing regression test, and we're\n> not going to start doing so in this one. For one thing, the expected\n> URL would break every time we forked off a new release branch.\n> (There would surely be value in having more-than-no test coverage\n> of psql/help.c, but that's a matter for its own patch, which would\n> need some thought about how to cope with instability of the output.)\n\nOne way to get out of that could be some psql-level options to control\nwhich parts of the help output is showing up. The recent addition of\nthe URL may bring more weight for doing something in this area.\n--\nMichael", "msg_date": "Wed, 31 Jul 2019 10:33:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Wed, Jul 31, 2019 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 30, 2019 at 04:44:11PM -0400, Tom Lane wrote:\n> > We do not test \\h output in any existing regression test, and we're\n> > not going to start doing so in this one. 
For one thing, the expected\n> > URL would break every time we forked off a new release branch.\n> > (There would surely be value in having more-than-no test coverage\n> > of psql/help.c, but that's a matter for its own patch, which would\n> > need some thought about how to cope with instability of the output.)\n>\n> One way to get out of that could be some psql-level options to control\n> which parts of the help output is showing up. The recent addition of\n> the URL may bring more weight for doing something in this area.\n\nHello again Osumi-san,\n\nThe end of CF1 is here. I've moved this patch to CF2 (September) in\nthe Commitfest app. Of course, everyone is free to continue\ndiscussing the patch before then. When you have a new version, please\nset the status to \"Needs review\".\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 21:49:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On 2019-Jul-22, osumi.takamichi@fujitsu.com wrote:\n\n> Dear Surafel\n> \n> Thank you for your check of my patch.\n> I’ve made the version 03 to\n> fix what you mentioned about my patch.\n> \n> I corrected my wrong bracing styles and\n> also reduced the amount of my regression test.\n> Off course, I erased unnecessary\n> white spaces and the C++ style comment.\n\nA new version of this patch, handling Tom's comments, would be\nappreciated. 
Besides, per CFbot this patch applies no longer.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Sep 2019 13:11:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Tom Lane\n\nThank you so much for your comment.\n\n> * Upthread you asked about changing the lock level to be AccessExclusiveLock if\n> the trigger already exists, but the patch doesn't actually do that. Which is fine by\n> me, because that sounds like a perfectly bad idea. \n\nWhy I suggested a discussion \nto make the lock level of C.O.R.T. stronger above comes from my concern.\n\nI've worried about a case that\nC.O.R.T. weak lock like ShareRowExclusiveLock allows \none session to replace other session's trigger for new trigger by COMMIT;\nAs a result, the session is made to use the new one unintentionally.\n\nAs you can see below, the previous trigger is replaced by Session2 after applying this patch.\nThis seems to conflict with user's expectation to data consistency between sessions or \nto identify C.O.R.T with DROP TRIGGER (AcessExclusive) + CREATE TRIGGER in terms of lock level.\n\n-- Preparation\ncreate table my_table1 (id integer, name text);\ncreate table my_table2 (id integer, name text);\nCREATE OR REPLACE FUNCTION public.my_updateproc1() RETURNS trigger LANGUAGE plpgsql\n AS $function$\n begin\n UPDATE my_table2 SET name = 'new ' WHERE id=OLD.id;\n RETURN NULL;\n end;$function$;\n\nCREATE OR REPLACE FUNCTION public.my_updateproc2() RETURNS trigger LANGUAGE plpgsql\n AS $function$\n begin\n UPDATE my_table2 SET name = 'replace' WHERE id=OLD.id;\n RETURN NULL;\n end;$function$;\n\nCREATE OR REPLACE TRIGGER my_regular_trigger AFTER UPDATE\n ON my_table1 FOR EACH ROW EXECUTE PROCEDURE my_updateproc1();\n\n--Session 1---\nBEGIN;\nselect * from my_table1; 
-- Cause AccessShareLock here by referring to my_table1;\n\n--Session 2---\nBEGIN;\nCREATE OR REPLACE TRIGGER my_regular_trigger\n AFTER UPDATE ON my_table1 FOR EACH ROW\n EXECUTE PROCEDURE my_updateproc2();\nCOMMIT;\n\n--Session 1---\nselect pg_get_triggerdef(oid, true) from pg_trigger where tgrelid = 'my_table1'::regclass AND tgname = 'my_regular_trigger'; \n------------------------------------------------------------------------------------------------------------\n CREATE TRIGGER my_regular_trigger AFTER UPDATE ON my_table1 FOR EACH ROW EXECUTE FUNCTION my_updateproc2()\n(1 row)\n\nBy the way, I've fixed other points of my previous patch.\n> * I wouldn't recommend adding CreateConstraintEntry's new argument at the end.\nI changed the order of CreateConstraintEntry function and its header comment.\n\nBesides that,\n> I'm not on board with the idea that testing trigger_exists three separate times, in\n> three randomly-different-looking ways, makes things more readable. \nI did code refactoring of the redundant and confusing part.\n\n> We do not test \\h output in any existing regression test\nAnd of course, I deleted the \\h test you mentioned above.\t\n\nRegards,\n Takamichi Osumi", "msg_date": "Fri, 18 Oct 2019 01:23:55 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Thu, Aug 01, 2019 at 09:49:53PM +1200, Thomas Munro wrote:\n> The end of CF1 is here. I've moved this patch to CF2 (September) in\n> the Commitfest app. Of course, everyone is free to continue\n> discussing the patch before then. When you have a new version, please\n> set the status to \"Needs review\".\n\nThe latest patch includes calls to heap_open(), causing its\ncompilation to fail. Could you please send a rebased version of the\npatch? 
I have moved the entry to next CF, waiting on author.\n--\nMichael", "msg_date": "Sun, 1 Dec 2019 12:36:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Michael san\n\n> The latest patch includes calls to heap_open(), causing its compilation to fail.\n> Could you please send a rebased version of the patch? I have moved the entry to\n> next CF, waiting on author.\nThanks. I've fixed where you pointed out. \nAlso, I'm waiting for other kind of feedbacks from anyone.\n\nRegards,\n\tOsumi Takamichi", "msg_date": "Mon, 2 Dec 2019 01:56:49 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On 12/1/19 8:56 PM, osumi.takamichi@fujitsu.com wrote:\n> \n>> The latest patch includes calls to heap_open(), causing its compilation to fail.\n>> Could you please send a rebased version of the patch? I have moved the entry to\n>> next CF, waiting on author.\n> Thanks. 
I've fixed where you pointed out.\n\nThis patch no longer applies: http://cfbot.cputube.org/patch_27_2307.log\n\nCF entry has been updated to Waiting on Author.\n\n> Also, I'm waiting for other kind of feedbacks from anyone.\n\nHopefully a re-based patch will help with that.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 24 Mar 2020 11:08:01 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> Also, I'm waiting for other kind of feedbacks from anyone.\n\nAs David pointed out, this needs to be rebased, though it looks like\nthe conflict is pretty trivial.\n\nA few other notes from a quick look:\n\n* You missed updating equalfuncs.c/copyfuncs.c. Pretty much any change in\na Node struct will require touching backend/nodes/ functions, and in\ngeneral it's a good idea to grep for uses of the struct to see what else\nmight be affected.\n\n* Did you use a dartboard while deciding where to add the new field\nin struct CreateTrigger? Its placement certainly seems quite random.\nMaybe we should put both \"replace\" and \"isconstraint\" up near the\nfront, to match up with the statement's syntax.\n\n* The patch doesn't appear to have any defenses against being asked to\nreplace the definition of, say, a foreign key trigger. It might be\nsufficient to refuse to replace an entry that has tgisinternal set,\nthough I'm not sure if that covers all cases that we'd want to disallow.\n\n* Speaking of which, I think you broke the isInternal case by insisting\non doing a lookup first. isInternal should *not* do a lookup, period,\nespecially not with the name it's initially given which will not be the\nfinal trigger name. 
A conflict on that name is irrelevant, and so is\nthe OID of any pre-existing trigger.\n\n* I'm not entirely sure that this patch interacts gracefully with\nthe provisions for per-partition triggers, either. Is the change\ncorrectly cascaded to per-partition triggers if there are any?\nDo we disallow making a change on a child partition trigger rather\nthan its parent? (Checking tgisinternal is going to be bound up\nin that, since it looks like somebody decided to set that for child\ntriggers. I'm inclined to think that that was a dumb idea; we\nmay need to break out a separate tgischild flag so that we can tell\nwhat's what.)\n\n* I'm a little bit concerned about the semantics of changing the\ntgdeferrable/tginitdeferred properties of an existing trigger. If there\nare trigger events pending, and the trigger is redefined in such a way\nthat those events should already have been fired, what then? This doesn't\napply in other sessions, because taking ShareRowExclusiveLock should be\nenough to ensure that no other session has uncommitted updates pending\nagainst the table. But it *does* apply in our own session, because\nShareRowExclusiveLock won't conflict against our own locks. One answer\nwould be to run CheckTableNotInUse() once we discover that we're modifying\nan existing trigger. Or we could decide that it doesn't matter --- if you\ndo that and it breaks, tough. 
For comparison, I notice that there doesn't\nseem to be any guard against dropping a trigger that has pending events\nin our own session, though that doesn't work out too well:\n\nregression=# create constraint trigger my_trig after insert on trig_table deferrable initially deferred for each row execute procedure before_replacement();\nCREATE TRIGGER\nregression=# begin;\nBEGIN\nregression=*# insert into trig_table default values;\nINSERT 0 1\nregression=*# drop trigger my_trig on trig_table;\nDROP TRIGGER\nregression=*# commit;\nERROR: relation 38489 has no triggers\n\nBut arguably that's a bug to be fixed, not desirable behavior to emulate.\n\n* Not the fault of this patch exactly, but trigger.c seems to have an\nannoyingly large number of copies of the code to look up a trigger by\nname. I wonder if we could refactor that, say by extracting the guts of\nget_trigger_oid() into an internal function that's passed an already-open\npg_trigger relation.\n\n* Upthread you were complaining about ShareRowExclusiveLock not being a\nstrong enough lock, but I think that's nonsense, for the same reason that\nit's a sufficient lock for plain CREATE TRIGGER: if we have that lock then\nno other session can have pending trigger events of any sort on the\nrelation, nor can new ones get made before we commit. But there's no\nreason to lock out SELECTs on the relation, since those don't interact\nwith triggers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 17:20:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Dear Tom Lane\n\nThanks for your so many fruitful comments !\n\nI have fixed my patch again.\nOn the other hand, there're some questions left\nthat I'd like to discuss.\n\n> * You missed updating equalfuncs.c/copyfuncs.c. 
Pretty much any change in a\n> Node struct will require touching backend/nodes/ functions, and in general it's a\n> good idea to grep for uses of the struct to see what else might be affected.\nYeah, thanks.\n\n> * Did you use a dartboard while deciding where to add the new field in struct\n> CreateTrigger? Its placement certainly seems quite random.\n> Maybe we should put both \"replace\" and \"isconstraint\" up near the front, to match\n> up with the statement's syntax.\nFollowing the syntax of the CREATE TRIGGER statement,\nI've listed the places that needed changing and fixed their order.\n\n> * Speaking of which, I think you broke the isInternal case by insisting on doing a\n> lookup first. isInternal should *not* do a lookup, period, especially not with the\n> name it's initially given which will not be the final trigger name. A conflict on that\n> name is irrelevant, and so is the OID of any pre-existing trigger.\nSorry for this.\nI inserted code to skip the first lookup in the isInternal case.\nAs a result, when isInternal is set true, the trigger_exists flag never becomes true.\nDoing a lookup first is necessary to fetch information,\nsuch as existing_constraint_oid, that the following code needs to run CreateConstraintEntry().\n\n> * I'm not entirely sure that this patch interacts gracefully with the provisions for\n> per-partition triggers, either. Is the change correctly cascaded to per-partition\n> triggers if there are any?\nYes.\nPlease check the 4 added test cases, which prove that replacement of a trigger\ncascades to a partition's trigger when there are other triggers on the relation.\n\n> * The patch doesn't appear to have any defenses against being asked to\n> replace the definition of, say, a foreign key trigger. 
It might be\n> sufficient to refuse to replace an entry that has tgisinternal set,\n> though I'm not sure if that covers all cases that we'd want to disallow.\n\n> Do we disallow making a change on a child partition trigger rather than its parent?\n> (Checking tgisinternal is going to be bound up in that, since it looks like somebody\n> decided to set that for child triggers. I'm inclined to think that that was a dumb\n> idea; we may need to break out a separate tgischild flag so that we can tell what's\n> what.)\nDoes this mean I need to add a new catalog member named 'tgischild' to pg_trigger?\nThis change sounds far-reaching, which would mean touching many other files as well.\nIsn't there any other way to distinguish a trigger on a partition table\nfrom an internally generated trigger?\nOtherwise, I need to fix a lot of code to protect\ninternally generated triggers from being replaced.\n\n> * I'm a little bit concerned about the semantics of changing the\n> tgdeferrable/tginitdeferred properties of an existing trigger. If there are trigger\n> events pending, and the trigger is redefined in such a way that those events\n> should already have been fired, what then? \nOK. I need a discussion about this point. \nThere are two possible ways to define the behavior of this semantics change, I think.\nThe first idea is to throw an error that means\nthe *pending* trigger can't be replaced during the session.\nThe second one is just to replace the trigger and fire the new trigger\nat the end of the transaction when its tginitdeferred is set true.\nFor me, the first one sounds safer. 
Yet, I'd like to know other opinions.\n\n> regression=# create constraint trigger my_trig after insert on trig_table\n> deferrable initially deferred for each row execute procedure before_replacement();\n> CREATE TRIGGER\n> regression=# begin;\n> BEGIN\n> regression=*# insert into trig_table default values;\n> INSERT 0 1\n> regression=*# drop trigger my_trig on trig_table;\n> DROP TRIGGER\n> regression=*# commit;\n> ERROR: relation 38489 has no triggers\nI could reproduce this bug using the current master without my patch.\nSo this is another issue.\nI think throwing an error when a trigger with *pending* events is dropped\nmakes sense. Does everyone agree?\n\n> * Not the fault of this patch exactly, but trigger.c seems to have an annoyingly\n> large number of copies of the code to look up a trigger by name. I wonder if we\n> could refactor that, say by extracting the guts of\n> get_trigger_oid() into an internal function that's passed an already-open\n> pg_trigger relation.\nWhile waiting for other reviews and comments, I'm willing to give it a try.\n\n> * Upthread you were complaining about ShareRowExclusiveLock not being a\n> strong enough lock, but I think that's nonsense, for the same reason that it's a\n> sufficient lock for plain CREATE TRIGGER: if we have that lock then no other\n> session can have pending trigger events of any sort on the relation, nor can new\n> ones get made before we commit. But there's no reason to lock out SELECTs on\n> the relation, since those don't interact with triggers.\nProbably I misunderstood how the lock interacts with the execution of pg_dump\nand SELECTs. 
I'll sort out the information about this.\n\nOther commends and reviews are welcome.\n\nBest,\n\tTakamichi Osumi", "msg_date": "Tue, 21 Apr 2020 10:32:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "osumi.takamichi@fujitsu.com:\n>> * I'm a little bit concerned about the semantics of changing the\n>> tgdeferrable/tginitdeferred properties of an existing trigger. If there are trigger\n>> events pending, and the trigger is redefined in such a way that those events\n>> should already have been fired, what then?\n> OK. I need a discussion about this point.\n> There would be two ideas to define the behavior of this semantics change, I think.\n> The first idea is to throw an error that means\n> the *pending* trigger can't be replaced during the session.\n> The second one is just to replace the trigger and ignite the new trigger\n> at the end of the session when its tginitdeferred is set true.\n> For me, the first one sounds safer. Yet, I'd like to know other opinions.\n\nIMHO, constraint triggers should behave the same in that regard as other \nconstraints. I just checked:\n\nBEGIN;\nCREATE TABLE t1 (a int CONSTRAINT u UNIQUE DEFERRABLE INITIALLY DEFERRED);\nINSERT INTO t1 VALUES (1),(1);\nALTER TABLE t1 ALTER CONSTRAINT u NOT DEFERRABLE;\n\nwill throw with:\n\nERROR: cannot ALTER TABLE \"t1\" because it has pending trigger events\nSQL state: 55006\n\nSo if a trigger event is pending, CREATE OR REPLACE for that trigger \nshould throw. I think it should do in any case, not just when changing \ndeferrability. 
This makes it easier to reason about.\n\nIf the user has a pending trigger, they can still do SET CONSTRAINTS \ntrigger_name IMMEDIATE; to resolve that and then do CREATE OR REPLACE \nTRIGGER, just like in the ALTER TABLE case.\n\n>> regression=# create constraint trigger my_trig after insert on trig_table\n>> deferrable initially deferred for each row execute procedure\n>> before_replacement(); CREATE TRIGGER regression=# begin; BEGIN\n>> regression=*# insert into trig_table default values; INSERT 0 1 regression=*#\n>> drop trigger my_trig on trig_table; DROP TRIGGER regression=*# commit;\n>> ERROR: relation 38489 has no triggers\n> I could reproduce this bug, using the current master without my patch.\n> So this is another issue.\n> I'm thinking that throwing an error when *pending* trigger is dropped\n> makes sense. Does everyone agree with it ?\n\nJust tested the same example as above, but with DROP TABLE t1; instead \nof ALTER TABLE. This throws with:\n\nERROR: cannot DROP TABLE \"t1\" because it has pending trigger events\nSQL state: 55006\n\nSo yes, your suggestion makes a lot of sense!\n\n\n", "msg_date": "Mon, 3 Aug 2020 21:25:18 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Thu, Aug 20, 2020 at 12:11 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> I have fixed my patch again.\n\nHi Osumi-san,\n\nFYI - The latest patch (v06) has conflicts when applied.\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 Aug 2020 12:18:25 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi,\r\n\r\nThanks, Peter.\r\n> FYI - The latest patch (v06) has conflicts when applied.\r\n\r\nI've fixed my v06 and created v07.\r\nAlso, I added one test to throw an error\r\nto avoid a 
situation where a trigger that has pending events is replaced by\r\nany other trigger in the same session.\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Mon, 24 Aug 2020 11:33:31 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Mon, Aug 24, 2020 at 9:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> I've fixed my v06 and created v07.\n\nHi Osumi-san.\n\nI have reviewed the source code of the v07 patch.\n\n(I also reviewed the test cases but I will share those comments as a\nseparate post).\n\nBelow are my comments - sorry, many of them are very minor.\n\n====\n\nCOMMENT pg_constraint.c (wrong comment?)\n\n- * The new constraint's OID is returned.\n+ * The new constraint's OID is returned. (This will be the same as\n+ * \"conOid\" if that is specified as nonzero.)\n\nShouldn't that comment say:\n(This will be the same as \"existing_constraint_oid\" if that is other\nthan InvalidOid)\n\n====\n\nCOMMENT pg_constraint.c (declarations)\n\n@@ -91,6 +93,11 @@ CreateConstraintEntry(const char *constraintName,\n NameData cname;\n int i;\n ObjectAddress conobject;\n+ SysScanDesc conscan;\n+ ScanKeyData skey[2];\n+ HeapTuple tuple;\n+ bool replaces[Natts_pg_constraint];\n+ Form_pg_constraint constrForm;\n\nMaybe it is more convenient/readable to declare these in the scope\nwhere they are actually used.\n\n====\n\nCOMMENT pg_constraint.c (oid checking)\n\n- conOid = GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\n- Anum_pg_constraint_oid);\n- values[Anum_pg_constraint_oid - 1] = ObjectIdGetDatum(conOid);\n+ if (!existing_constraint_oid)\n+ {\n+ conOid = GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\n+ Anum_pg_constraint_oid);\n\nMaybe better to use if (!OidIsValid(existing_constraint_oid)) here.\n\n====\n\nCOMMENT tablecmds.c (unrelated change)\n\n- false); /* is_internal */\n+ false); /* 
is_internal */\n\nSome whitespace which has nothing to do with the patch was changed.\n\n====\n\nCOMMENT trigger.c (declarations / whitespace)\n\n+ bool is_update = false;\n+ HeapTuple newtup;\n+ TupleDesc tupDesc;\n+ bool replaces[Natts_pg_trigger];\n+ Oid existing_constraint_oid = InvalidOid;\n+ bool trigger_exists = false;\n+ bool trigger_deferrable = false;\n\n1. One of those variables is misaligned with tabbing.\n\n2. Maybe it is more convenient/readable to declare some of these in\nthe scope where they are actually used.\ne.g. newtup, tupDesc, replaces.\n\n====\n\nCOMMENT trigger.c (error messages)\n\n+ /*\n+ * If this trigger has pending event, throw an error.\n+ */\n+ if (stmt->replace && !isInternal && trigger_deferrable &&\n+ AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_IN_USE),\n+ errmsg(\"cannot replace \\\"%s\\\" on \\\"%s\\\" because it has pending\ntrigger events\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+ /*\n+ * without OR REPLACE clause, can't override the trigger with the same name.\n+ */\n+ if (!stmt->replace && !isInternal)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+ /*\n+ * CREATE OR REPLACE CONSTRAINT TRIGGER command can't replace\nnon-constraint trigger.\n+ */\n+ if (stmt->replace && stmt->isconstraint &&\n!OidIsValid(existing_constraint_oid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"Trigger \\\"%s\\\" for relation \\\"%s\\\" cannot be replaced with\nconstraint trigger\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+ /*\n+ * CREATE OR REPLACE TRIGGER command can't replace constraint trigger.\n+ */\n+ if (stmt->replace && !stmt->isconstraint &&\nOidIsValid(existing_constraint_oid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"Constraint trigger \\\"%s\\\" for relation \\\"%s\\\" 
cannot be\nreplaced with non-constraint trigger\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n\n\n1. The order of these new errors is confusing. Maybe do the \"already\nexists\" check first so that all the REPLACE errors can be grouped\ntogether.\n\n2. There is inconsistent message text capitalising the 1st word.\nShould all be lower [1]\n[1] - https://www.postgresql.org/docs/current/error-style-guide.html\n\n3. That \"already exists\" error might benefit from a message hint. e.g.\n---\nereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\nerrmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\", ...),\nerrhint(\"use CREATE OR REPLACE to replace it\")));\n---\n\n4. Those last two errors seem like a complicated way just to say the\ntrigger types are not compatible. Maybe these messages can be\nsimplified, and some message hints added. e.g.\n---\nereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\n errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a regular trigger\", ...),\n errhint(\"use CREATE OR REPLACE TRIGGER to replace a regular trigger\")));\n\n\nereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\n errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\", ...),\n errhint(\"use CREATE OR REPLACE CONSTRAINT to replace a constraint\ntrigger\")));\n---\n\n====\n\nCOMMENT trigger.c (comment wording)\n\n+ * In case of replace trigger, trigger should no-more dependent on old\n+ * referenced objects. 
Always remove the old dependencies and then\n\nNeeds re-wording.\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 Aug 2020 18:06:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Mon, Aug 24, 2020 at 9:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> I've fixed my v06 and created v07.\n\nHi Osumi-san.\n\nI have reviewed the test code of the v07 patch.\n\nBelow are my comments.\n\n====\n\nCOMMENT (confusing functions)\n\n+create function before_replacement() returns trigger as $$\n+begin\n+raise notice 'function replaced by another function';\n+return null;\n+end; $$ language plpgsql;\n+create function after_replacement() returns trigger as $$\n+begin\n+raise notice 'function to replace the initial function';\n+return null;\n+end; $$ language plpgsql;\n\nWhy have function names with a hard-wired dependency on how you expect\nthey will be called.\nI think just call them \"funcA\" and \"funcB\" is much easier and works\njust as well. 
e.g.\n---\ncreate function funcA() returns trigger as $$\nbegin\nraise notice 'hello from funcA';\nreturn null;\nend; $$ language plpgsql;\n\ncreate function funcB() returns trigger as $$\nbegin\nraise notice 'hello from funcB';\nreturn null;\nend; $$ language plpgsql;\n---\n\nAnd this same comment applies for all the other test functions created\nfor this v07 patch.\n\n====\n\nCOMMENT (drops)\n\n+-- setup for another test of CREATE OR REPLACE TRIGGER\n+drop table if exists parted_trig;\n+NOTICE: table \"parted_trig\" does not exist, skipping\n+drop trigger if exists my_trig on parted_trig;\n+NOTICE: relation \"parted_trig\" does not exist, skipping\n+drop function if exists before_replacement;\n+NOTICE: function before_replacement() does not exist, skipping\n+drop function if exists after_replacement;\n+NOTICE: function after_replacement() does not exist, skipping\n\nWas it deliberate to attempt to drop the trigger after dropping the table?\nAlso this seems to be dropping functions which were already dropped\njust several lines earlier.\n\n====\n\nCOMMENT (typos)\n\nThere are a couple of typos in the test comments. e.g.\n\"vefify\" -> \"verify\"\n\"child parition\" -> \"child partition\"\n\n====\n\nCOMMENT (partition table inserts)\n\n1. Was it deliberate to insert explicitly into each partition table?\nWhy not insert everything into the top table and let the partitions\ntake care of themselves?\n\n2. The choice of values to insert also seemed strange. 
Inserting 1 and\n1 and 10 is going to all end up in the \"parted_trig_1_1\".\n\nTo summarise, I thought all subsequent partition tests maybe should be\ninserting more like this:\n---\ninsert into parted_trig (a) values (50); -- into parted_trig_1_1\ninsert into parted_trig (a) values (1500); -- into parted_trig_2\ninsert into parted_trig (a) values (2500); -- into default_parted_trig\n---\n\n====\n\nCOMMENT (missing error test cases)\n\nThere should be some more test cases to cover the new error messages\nthat were added to trigger.c:\n\ne.g. test for \"can't create regular trigger because already exists\"\ne.g. test for \"can't create constraint trigger because already exists\"\ne.g. test for \"can't replace regular trigger with constraint trigger\"\"\ne.g. test for \"can't replace constraint trigger with regular trigger\"\netc.\n\n====\n\nCOMMENT (trigger with pending events)\n\nThis is another test where the complexity of the functions\n(\"not_replaced\", and \"fail_to_replace\") seemed excessive.\nI think just calling these \"funcA\" and \"funcB\" as mentioned above\nwould be easier, and would serve just as well.\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 Aug 2020 18:15:41 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi, Peter\r\n\r\n\r\nYou gave me two posts for my patch review.\r\nThank you so much. I'll write all my replies into this post.\r\n\r\n> ====\r\n> \r\n> COMMENT pg_constraint.c (wrong comment?)\r\n> \r\n> - * The new constraint's OID is returned.\r\n> + * The new constraint's OID is returned. (This will be the same as\r\n> + * \"conOid\" if that is specified as nonzero.)\r\n> \r\n> Shouldn't that comment say:\r\n> (This will be the same as \"existing_constraint_oid\" if that is other than\r\n> InvalidOid)\r\nThanks. 
I corrected this part.\r\n\r\n> \r\n> ====\r\n> \r\n> COMMENT pg_constraint.c (declarations)\r\n> \r\n> @@ -91,6 +93,11 @@ CreateConstraintEntry(const char *constraintName,\r\n> NameData cname;\r\n> int i;\r\n> ObjectAddress conobject;\r\n> + SysScanDesc conscan;\r\n> + ScanKeyData skey[2];\r\n> + HeapTuple tuple;\r\n> + bool replaces[Natts_pg_constraint];\r\n> + Form_pg_constraint constrForm;\r\n> \r\n> Maybe it is more convenient/readable to declare these in the scope where they\r\n> are actually used.\r\nYou're right. Fixed.\r\n\r\n\r\n> ====\r\n> \r\n> COMMENT pg_constraint.c (oid checking)\r\n> \r\n> - conOid = GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\r\n> - Anum_pg_constraint_oid);\r\n> - values[Anum_pg_constraint_oid - 1] = ObjectIdGetDatum(conOid);\r\n> + if (!existing_constraint_oid)\r\n> + {\r\n> + conOid = GetNewOidWithIndex(conDesc, ConstraintOidIndexId,\r\n> + Anum_pg_constraint_oid);\r\n> \r\n> Maybe better to use if (!OidIsValid(existing_constraint_oid)) here.\r\nGot it. I replaced that part by OidIsValid ().\r\n\r\n\r\n> ====\r\n> \r\n> COMMENT tablecmds.c (unrelated change)\r\n> \r\n> - false); /* is_internal */\r\n> + false); /* is_internal */\r\n> \r\n> Some whitespace which has nothing to do with the patch was changed.\r\nYeah. Fixed.\r\n\r\n> ====\r\n> \r\n> COMMENT trigger.c (declarations / whitespace)\r\n> \r\n> + bool is_update = false;\r\n> + HeapTuple newtup;\r\n> + TupleDesc tupDesc;\r\n> + bool replaces[Natts_pg_trigger];\r\n> + Oid existing_constraint_oid = InvalidOid; bool trigger_exists = false;\r\n> + bool trigger_deferrable = false;\r\n> \r\n> 1. One of those variables is misaligned with tabbing.\r\nFixed.\r\n\r\n> \r\n> 2. Maybe it is more convenient/readable to declare some of these in the scope\r\n> where they are actually used.\r\n> e.g. newtup, tupDesc, replaces.\r\nI cannot do this because those variables are used\r\nat the top level in this function. 
Anyway, thanks for the comment.\r\n\r\n> ====\r\n> \r\n> COMMENT trigger.c (error messages)\r\n> \r\n> + /*\r\n> + * If this trigger has pending event, throw an error.\r\n> + */\r\n> + if (stmt->replace && !isInternal && trigger_deferrable &&\r\n> + AfterTriggerPendingOnRel(RelationGetRelid(rel)))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_IN_USE),\r\n> + errmsg(\"cannot replace \\\"%s\\\" on \\\"%s\\\" because it has pending\r\n> trigger events\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> + /*\r\n> + * without OR REPLACE clause, can't override the trigger with the same name.\r\n> + */\r\n> + if (!stmt->replace && !isInternal)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> + /*\r\n> + * CREATE OR REPLACE CONSTRAINT TRIGGER command can't replace\r\n> non-constraint trigger.\r\n> + */\r\n> + if (stmt->replace && stmt->isconstraint &&\r\n> !OidIsValid(existing_constraint_oid))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"Trigger \\\"%s\\\" for relation \\\"%s\\\" cannot be replaced with\r\n> constraint trigger\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> + /*\r\n> + * CREATE OR REPLACE TRIGGER command can't replace constraint trigger.\r\n> + */\r\n> + if (stmt->replace && !stmt->isconstraint &&\r\n> OidIsValid(existing_constraint_oid))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"Constraint trigger \\\"%s\\\" for relation \\\"%s\\\" cannot be\r\n> replaced with non-constraint trigger\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> \r\n> \r\n> 1. The order of these new errors is confusing. Maybe do the \"already exists\"\r\n> check first so that all the REPLACE errors can be grouped together.\r\nOK. I sorted out the order and conditions for this part.\r\n\r\n> 2. 
There is inconsistent message text capitalising the 1st word.\r\n> Should all be lower [1]\r\n> [1] - https://www.postgresql.org/docs/current/error-style-guide.html\r\nFixed.\r\n\r\n\r\n> 3. That \"already exists\" error might benefit from a message hint. e.g.\r\n> ---\r\n> ereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\", ...), errhint(\"use\r\n> CREATE OR REPLACE to replace it\")));\r\n> ---\r\n> \r\n> 4. Those last two errors seem like a complicated way just to say the trigger\r\n> types are not compatible. Maybe these messages can be simplified, and some\r\n> message hints added. e.g.\r\n> ---\r\n> ereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a regular trigger\", ...),\r\n> errhint(\"use CREATE OR REPLACE TRIGGER to replace a regular\r\n> trigger\")));\r\n> \r\n> \r\n> ereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\", ...),\r\n> errhint(\"use CREATE OR REPLACE CONSTRAINT to replace a constraint\r\n> trigger\")));\r\nI simplified those messages.\r\n\r\n> ====\r\n> \r\n> COMMENT trigger.c (comment wording)\r\n> \r\n> + * In case of replace trigger, trigger should no-more dependent on old\r\n> + * referenced objects. 
Always remove the old dependencies and then\r\n> \r\n> Needs re-wording.\r\nFixed.\r\n\r\n\r\n> ====\r\n> \r\n> COMMENT (confusing functions)\r\n> \r\n> +create function before_replacement() returns trigger as $$ begin raise\r\n> +notice 'function replaced by another function'; return null; end; $$\r\n> +language plpgsql; create function after_replacement() returns trigger\r\n> +as $$ begin raise notice 'function to replace the initial function';\r\n> +return null; end; $$ language plpgsql;\r\n> \r\n> Why have function names with a hard-wired dependency on how you expect\r\n> they will be called.\r\n> I think just call them \"funcA\" and \"funcB\" is much easier and works just as well.\r\n> e.g.\r\n> ---\r\n> create function funcA() returns trigger as $$ begin raise notice 'hello from\r\n> funcA'; return null; end; $$ language plpgsql;\r\n> \r\n> create function funcB() returns trigger as $$ begin raise notice 'hello from\r\n> funcB'; return null; end; $$ language plpgsql;\r\n> ---\r\nGot it. I simplified such kind of confusing names. Thanks.\r\n\r\n\r\n> ====\r\n> \r\n> COMMENT (drops)\r\n> \r\n> +-- setup for another test of CREATE OR REPLACE TRIGGER drop table if\r\n> +exists parted_trig;\r\n> +NOTICE: table \"parted_trig\" does not exist, skipping drop trigger if\r\n> +exists my_trig on parted_trig;\r\n> +NOTICE: relation \"parted_trig\" does not exist, skipping drop function\r\n> +if exists before_replacement;\r\n> +NOTICE: function before_replacement() does not exist, skipping drop\r\n> +function if exists after_replacement;\r\n> +NOTICE: function after_replacement() does not exist, skipping\r\n> \r\n> Was it deliberate to attempt to drop the trigger after dropping the table?\r\n> Also this seems to be dropping functions which were already dropped just\r\n> several lines earlier.\r\nThis wasn't necessary. I deleted those.\r\n\r\n> ====\r\n> \r\n> COMMENT (typos)\r\n> \r\n> There are a couple of typos in the test comments. 
e.g.\r\n> \"vefify\" -> \"verify\"\r\n> \"child parition\" -> \"child partition\"\r\nFixed.\r\n\r\n> ====\r\n> \r\n> COMMENT (partition table inserts)\r\n> \r\n> 1. Was it deliberate to insert explicitly into each partition table?\r\n> Why not insert everything into the top table and let the partitions take care of\r\n> themselves?\r\nActually, yes. I wanted readers of this code to easily identify which partition is used.\r\nBut I fixed those because they were redundant and not smart. Your suggestion sounds better.\r\n\r\n> \r\n> 2. The choice of values to insert also seemed strange. Inserting 1 and\r\n> 1 and 10 is going to all end up in the \"parted_trig_1_1\".\r\n> \r\n> To summarise, I thought all subsequent partition tests maybe should be\r\n> inserting more like this:\r\n> ---\r\n> insert into parted_trig (a) values (50); -- into parted_trig_1_1 insert into\r\n> parted_trig (a) values (1500); -- into parted_trig_2 insert into parted_trig (a)\r\n> values (2500); -- into default_parted_trig\r\nThis makes sense. I adopted your idea.\r\n\r\n> ====\r\n> \r\n> COMMENT (missing error test cases)\r\n> \r\n> There should be some more test cases to cover the new error messages that\r\n> were added to trigger.c:\r\n> \r\n> e.g. test for \"can't create regular trigger because already exists\"\r\n> e.g. test for \"can't create constraint trigger because already exists\"\r\n> e.g. test for \"can't replace regular trigger with constraint trigger\"\"\r\n> e.g. 
test for \"can't replace constraint trigger with regular trigger\"\r\n> etc.\r\nI've added those tests additionally.\r\n\r\n> ====\r\n> \r\n> COMMENT (trigger with pending events)\r\n> \r\n> This is another test where the complexity of the functions (\"not_replaced\", and\r\n> \"fail_to_replace\") seemed excessive.\r\n> I think just calling these \"funcA\" and \"funcB\" as mentioned above would be\r\n> easier, and would serve just as well.\r\nI modified the names of functions.\r\n\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Fri, 28 Aug 2020 09:29:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi Osumi-san.\n\nThanks for addressing my previous review comments in your new v08 patch.\n\nI have checked it again.\n\nMy only remaining comments are for trivial stuff:\n\n====\n\nCOMMENT trigger.c (tab alignment)\n\n@@ -184,6 +185,13 @@ CreateTrigger(CreateTrigStmt *stmt, const char\n*queryString,\n char *oldtablename = NULL;\n char *newtablename = NULL;\n bool partition_recurse;\n+ bool is_update = false;\n+ HeapTuple newtup;\n+ TupleDesc tupDesc;\n+ bool replaces[Natts_pg_trigger];\n+ Oid existing_constraint_oid = InvalidOid;\n+ bool trigger_exists = false;\n+ bool trigger_deferrable = false;\n\nMaybe my editor configuration is wrong, but the alignment of\n\"existing_constraint_oid\" still does not look fixed to me.\n\n====\n\nCOMMENT trigger.c (move some declarations)\n\n>> 2. Maybe it is more convenient/readable to declare some of these in the scope\n>> where they are actually used.\n>> e.g. newtup, tupDesc, replaces.\n>I cannot do this because those variables are used\n>at the top level in this function. Anyway, thanks for the comment.\n\nAre you sure it can't be done? 
It looks doable to me.\n\n====\n\nCOMMENT trigger.c (\"already exists\" message)\n\nIn my v07 review I suggested adding a hint for the new \"already exists\" error.\nOf course choice is yours, but since you did not add it I just wanted\nto ask was that accidental or deliberate?\n\n====\n\nCOMMENT triggers.sql/.out (typo in comment)\n\n+-- test for detecting imcompatible replacement of trigger\n\n\"imcompatible\" -> \"incompatible\"\n\n====\n\nCOMMENT triggers.sql/.out (wrong comment)\n\n+drop trigger my_trig on my_table;\n+create or replace constraint trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcB(); --should fail\n+create or replace trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcB(); --should fail\n+ERROR: trigger \"my_trig\" for relation \"my_table\" is a constraint trigger\n+HINT: use CREATE OR REPLACE CONSTRAINT TRIGGER to replace a costraint trigger\n\nI think the first \"--should fail\" comment is misleading. First time is OK.\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Sep 2020 17:35:36 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi, Peter-San\r\n\r\nI've fixed all except one point.\r\n> My only remaining comments are for trivial stuff:\r\nNot trivial but important.\r\n\r\n> COMMENT trigger.c (tab alignment)\r\n> \r\n> @@ -184,6 +185,13 @@ CreateTrigger(CreateTrigStmt *stmt, const char\r\n> *queryString,\r\n> char *oldtablename = NULL;\r\n> char *newtablename = NULL;\r\n> bool partition_recurse;\r\n> + bool is_update = false;\r\n> + HeapTuple newtup;\r\n> + TupleDesc tupDesc;\r\n> + bool replaces[Natts_pg_trigger];\r\n> + Oid existing_constraint_oid = InvalidOid; bool trigger_exists = false;\r\n> + bool trigger_deferrable = false;\r\n> \r\n> Maybe my editor configuration is wrong, but the alignment of\r\n> 
\"existing_constraint_oid\" still does not look fixed to me.\r\nYou were right. In order to solve this point completely,\r\nI've executed pgindent and gotten clean alignment.\r\nHow about v09?\r\nOther alignment issues in the C source code have been fixed as well.\r\nThese are mainly in comments, though.\r\n\r\n> COMMENT trigger.c (move some declarations)\r\n> \r\n> >> 2. Maybe it is more convenient/readable to declare some of these in\r\n> >> the scope where they are actually used.\r\n> >> e.g. newtup, tupDesc, replaces.\r\n> >I cannot do this because those variables are used at the top level in\r\n> >this function. Anyway, thanks for the comment.\r\n> \r\n> Are you sure it can't be done? It looks doable to me.\r\nDone. I was wrong. Thank you.\r\n\r\n> COMMENT trigger.c (\"already exists\" message)\r\n> \r\n> In my v07 review I suggested adding a hint for the new \"already exists\" error.\r\n> Of course choice is yours, but since you did not add it I just wanted to ask was\r\n> that accidental or deliberate?\r\nThis was deliberate.\r\nThe code path of the \"already exists\" error you mentioned above\r\nis used for other errors as well. 
For example, a failure case of \r\n\"ALTER TABLE name ATTACH PARTITION partition_name...\".\r\n\r\nThis command fails if the \"partition_name\" table has a trigger\r\nwhose name is exactly the same as that of a trigger on the \"name\" table.\r\nFor each user-defined row-level trigger that exists in the \"name\" table,\r\na corresponding one is created in the attached table, automatically.\r\nThus, the \"ALTER TABLE\" command throws an error which says\r\ntrigger \"name\" for relation \"partition_name\" already exists.\r\nI felt that if I added the hint, application developers would get confused.\r\nDoes that make sense?\r\n\r\n> COMMENT triggers.sql/.out (typo in comment)\r\n> \r\n> +-- test for detecting imcompatible replacement of trigger\r\n> \r\n> \"imcompatible\" -> \"incompatible\"\r\nFixed.\r\n\r\n\r\n> COMMENT triggers.sql/.out (wrong comment)\r\n> \r\n> +drop trigger my_trig on my_table;\r\n> +create or replace constraint trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcB(); --should fail create or\r\n> +replace trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcB(); --should fail\r\n> +ERROR: trigger \"my_trig\" for relation \"my_table\" is a constraint\r\n> +trigger\r\n> +HINT: use CREATE OR REPLACE CONSTRAINT TRIGGER to replace a\r\n> costraint\r\n> +trigger\r\n> \r\n> I think the first \"--should fail\" comment is misleading. First time is OK.\r\nThanks. 
Removed the misleading comment.\r\n\r\n\r\nRegards,\r\n\tTakamichi Osumi", "msg_date": "Tue, 8 Sep 2020 11:36:10 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Tue, Sep 8, 2020 at 9:36 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> I've fixed all except one point.\n\nThanks for addressing my previous review comments in your new v09 patch.\n\nThose are fixed OK now, but I found 2 new review points.\n\n====\n\nCOMMENT trigger.c (typo)\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\",\n+ stmt->trigname, RelationGetRelationName(rel)),\n+ errhint(\"use CREATE OR REPLACE CONSTRAINT TRIGGER to replace a\ncostraint trigger\")));\n\n\nTypo in the errhint text.\n\"costraint\" -> \"constraint\"\n\n====\n\nCOMMENT create_trigger.sgmg (add more help?)\n\nI noticed that the CREATE OR REPLACE FUNCTION help [1] describes the\nOR REPLACE syntax (\"Description\" section) and also mentions some of\nthe restrictions when using REPLACE (\"Notes\" section).\n[1] - https://www.postgresql.org/docs/current/sql-createfunction.html\n\n~~\nOTOH this trigger patch does not add anything much at all in the trigger help.\n\nShouldn't the \"Description\" at least say something like:\n\"CREATE OR REPLACE will either create a new trigger, or replace an\nexisting definition.\"\n\nShouldn't the \"Notes\" include information about restrictions when\nusing OR REPLACE\ne.g. cannot replace triggers with triggers of a different kind\ne.g. 
cannot replace triggers with pending events\n\nWhat do you think?\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 9 Sep 2020 18:18:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi\r\n\r\n\r\n> Those are fixed OK now, but I found 2 new review points.\r\n> \r\n> ====\r\n> \r\n> COMMENT trigger.c (typo)\r\n> \r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\",\r\n> + stmt->trigname, RelationGetRelationName(rel)),\r\n> + errhint(\"use CREATE OR REPLACE CONSTRAINT TRIGGER to replace a\r\n> costraint trigger\")));\r\n> \r\n> \r\n> Typo in the errhint text.\r\n> \"costraint\" -> \"constraint\"\r\nFixed. Thank you.\r\n\r\n> ====\r\n> \r\n> COMMENT create_trigger.sgmg (add more help?)\r\n> \r\n> I noticed that the CREATE OR REPLACE FUNCTION help [1] describes the OR\r\n> REPLACE syntax (\"Description\" section) and also mentions some of the\r\n> restrictions when using REPLACE (\"Notes\" section).\r\n> [1] - https://www.postgresql.org/docs/current/sql-createfunction.html\r\n> \r\n> ~~\r\n> OTOH this trigger patch does not add anything much at all in the trigger help.\r\n> \r\n> Shouldn't the \"Description\" at least say something like:\r\n> \"CREATE OR REPLACE will either create a new trigger, or replace an existing\r\n> definition.\"\r\n> \r\n> Shouldn't the \"Notes\" include information about restrictions when using OR\r\n> REPLACE e.g. cannot replace triggers with triggers of a different kind e.g.\r\n> cannot replace triggers with pending events\r\n> \r\n> What do you think?\r\nThat's a great idea. 
I've applied this idea to the latest patch v10.\r\n\r\nRegards,\r\n\tTakamichi Osumi", "msg_date": "Wed, 9 Sep 2020 13:28:12 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Wed, Sep 9, 2020 at 11:28 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> That's a great idea. I've applied this idea to the latest patch v10.\n\n====\n\nCOMMENT create_trigger.sgml (typo/wording)\n\n\"vise versa\" -> \"vice versa\"\n\nBEFORE\nYou cannot replace triggers with a different type of trigger, that\nmeans it is impossible to replace regular trigger with constraint\ntrigger and vise versa.\n\nAFTER (suggestion)\nYou cannot replace triggers with a different type of trigger. That\nmeans it is impossible to replace a regular trigger with a constraint\ntrigger, and vice versa.\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 10 Sep 2020 11:31:31 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello, Peter-San\r\n\r\n\r\n> > That's a great idea. I've applied this idea to the latest patch v10.\r\n> \r\n> ====\r\n> \r\n> COMMENT create_trigger.sgml (typo/wording)\r\n> \r\n> \"vise versa\" -> \"vice versa\"\r\nSorry, and thank you for pointing all of this out.\r\n\r\n> BEFORE\r\n> You cannot replace triggers with a different type of trigger, that means it is\r\n> impossible to replace regular trigger with constraint trigger and vise versa.\r\n> \r\n> AFTER (suggestion)\r\n> You cannot replace triggers with a different type of trigger. That means it is\r\n> impossible to replace a regular trigger with a constraint trigger, and vice versa.\r\nThank you. 
Your suggestion must be better.\r\n\r\nI attached the v11 patch.\r\n\r\nRegards,\r\n\tTakamichi Osumi", "msg_date": "Thu, 10 Sep 2020 02:34:25 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Thu, Sep 10, 2020 at 12:34 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> I attached the v11 patch.\n\nThe v11 patch looked OK to me.\n\nSince I have no more review comments I am marking this as \"ready for committer\".\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 10 Sep 2020 15:18:29 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> [ CREATE_OR_REPLACE_TRIGGER_v11.patch ]\n\nI took a quick look through this. I think there's still work left to do.\n\n* I'm concerned by the fact that there doesn't seem to be any defense\nagainst somebody replacing a foreign-key trigger with something that\ndoes something else entirely, and thereby silently breaking their\nforeign key constraint. I think it might be a good idea to forbid\nreplacing triggers for which tgisinternal is true; but I've not done\nthe legwork to see if that's exactly the condition we want.\n\n* In the same vein, I'm not sure that the right things happen when fooling\nwith triggers attached to partitioned tables. We presumably don't want to\nallow mucking directly with a child trigger. Perhaps refusing an update\nwhen tgisinternal might fix this too (although we'll have to be careful to\nmake the error message not too confusing).\n\n* I don't think that you've fully thought through the implications\nof replacing a trigger for a table that the current transaction has\nalready modified. 
Is it really sufficient, or even useful, to do\nthis:\n\n+ /*\n+ * If this trigger has pending events, throw an error.\n+ */\n+ if (trigger_deferrable && AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n\nAs an example, if we change a BEFORE trigger to an AFTER trigger,\nthat's not going to affect the fact that we *already* fired that\ntrigger. Maybe this is okay and we just need to document it, but\nI'm not convinced.\n\n* BTW, I don't think a trigger necessarily has to be deferrable\nin order to have pending AFTER events. The existing use of\nAfterTriggerPendingOnRel certainly doesn't assume that. But really,\nI think we probably ought to be applying CheckTableNotInUse which'd\ninclude that test. (Another way in which there's fuzzy thinking\nhere is that AfterTriggerPendingOnRel isn't specific to *this*\ntrigger.)\n\n* A lesser point is that I think you're overcomplicating the\ncode by applying heap_modify_tuple. You might as well just\nbuild the new tuple normally in all cases, and then apply\neither CatalogTupleInsert or CatalogTupleUpdate.\n\n* Also, the search for an existing trigger tuple is being\ndone the hard way. You're using an index on (tgrelid, tgname),\nso you could include the name in the index key and expect that\nthere's at most one match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Sep 2020 16:11:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi\n\n\nIt took me too long to respond to this e-mail. Sorry.\nActually, I got stuck trying to achieve both\nerror detection for the internal trigger case and for the pending trigger case.\n\n> * I'm concerned by the fact that there doesn't seem to be any defense against\n> somebody replacing a foreign-key trigger with something that does something\n> else entirely, and thereby silently breaking their foreign key constraint. 
I think\n> it might be a good idea to forbid replacing triggers for which tgisinternal is true;\n> but I've not done the legwork to see if that's exactly the condition we want.\n> \n> * In the same vein, I'm not sure that the right things happen when fooling with\n> triggers attached to partitioned tables. We presumably don't want to allow\n> mucking directly with a child trigger. Perhaps refusing an update when\n> tgisinternal might fix this too (although we'll have to be careful to make the error\n> message not too confusing).\nYeah, you are right. tgisinternal works to detect the invalid cases.\nI added a new check condition in my patch to prohibit\nthe replacement of internal triggers by a user,\nwhich protects FK triggers and child triggers from being replaced directly.\n\n\n> * I don't think that you've fully thought through the implications of replacing a\n> trigger for a table that the current transaction has already modified. Is it really\n> sufficient, or even useful, to do\n> this:\n> \n> + /*\n> + * If this trigger has pending events, throw an error.\n> + */\n> + if (trigger_deferrable &&\n> + AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n> \n> As an example, if we change a BEFORE trigger to an AFTER trigger, that's not\n> going to affect the fact that we *already* fired that trigger. Maybe this is okay\n> and we just need to document it, but I'm not convinced.\n> \n> * BTW, I don't think a trigger necessarily has to be deferrable in order to have\n> pending AFTER events. The existing use of AfterTriggerPendingOnRel\n> certainly doesn't assume that. But really, I think we probably ought to be\n> applying CheckTableNotInUse which'd include that test. 
(Another way in\n> which there's fuzzy thinking here is that AfterTriggerPendingOnRel isn't specific\n> to *this*\n> trigger.)\nHmm, actually, when I just put a call to CheckTableNotInUse() in CreateTrigger(),\nit throws an error like \"cannot CREATE OR REPLACE\nTRIGGER because it is being used by active queries in this session\".\nThis causes a break in the protection for the internal cases above\nand a contradiction of already-passed test cases.\nI thought of adding a condition of in_partition==false to call\nCheckTableNotInUse(). But this doesn't work in a corner case where a\nchild trigger generated internally is pending and\nwe don't want to allow the replacement of this kind of trigger.\nDo you have any good idea to achieve both points at the same time?\n\n\n> * A lesser point is that I think you're overcomplicating the code by applying\n> heap_modify_tuple. You might as well just build the new tuple normally in all\n> cases, and then apply either CatalogTupleInsert or CatalogTupleUpdate.\n> \n> * Also, the search for an existing trigger tuple is being done the hard way.\n> You're using an index on (tgrelid, tgname), so you could include the name in the\n> index key and expect that there's at most one match.\nWhile waiting for a new reply, I'll be doing those 2 refactorings.\n\n\nRegards,\n\tTakamichi Osumi", "msg_date": "Tue, 6 Oct 2020 08:01:21 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello\n\n\n> > * A lesser point is that I think you're overcomplicating the code by\n> > applying heap_modify_tuple. 
You might as well just build the new\n> > tuple normally in all cases, and then apply either CatalogTupleInsert or\n> CatalogTupleUpdate.\n> >\n> > * Also, the search for an existing trigger tuple is being done the hard way.\n> > You're using an index on (tgrelid, tgname), so you could include the\n> > name in the index key and expect that there's at most one match.\n> While waiting for a new reply, I'll doing those 2 refactorings.\nI'm done with those refactorings. Please have a look at the changes\nof the latest patch.\n\n> > * I don't think that you've fully thought through the implications of\n> > replacing a trigger for a table that the current transaction has\n> > already modified. Is it really sufficient, or even useful, to do\n> > this:\n> >\n> > + /*\n> > + * If this trigger has pending events, throw an error.\n> > + */\n> > + if (trigger_deferrable &&\n> > + AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n> >\n> > As an example, if we change a BEFORE trigger to an AFTER trigger,\n> > that's not going to affect the fact that we *already* fired that\n> > trigger. Maybe this is okay and we just need to document it, but I'm not\n> convinced.\n> >\n> > * BTW, I don't think a trigger necessarily has to be deferrable in\n> > order to have pending AFTER events. The existing use of\n> > AfterTriggerPendingOnRel certainly doesn't assume that. But really, I\n> > think we probably ought to be applying CheckTableNotInUse which'd\n> > include that test. 
(Another way in which there's fuzzy thinking here\n> > is that AfterTriggerPendingOnRel isn't specific to *this*\n> > trigger.)\n> Hmm, actually, when I just put a code of CheckTableNotInUse() in\n> CreateTrigger(), it throws error like \"cannot CREATE OR REPLACE TRIGGER\n> because it is being used by active queries in this session\".\n> This causes an break of the protection for internal cases above and a\n> contradiction of already passed test cases.\n> I though adding a condition of in_partition==false to call CheckTableNotInUse().\n> But this doesn't work in a corner case that child trigger generated internally is\n> pending and we don't want to allow the replacement of this kind of trigger.\n> Did you have any good idea to achieve both points at the same time ?\nStill, in terms of this point, I'm waiting for a comment !\n\n\nRegards,\n\tTakamichi Osumi", "msg_date": "Fri, 9 Oct 2020 03:19:47 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "From: osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\n> > > * I don't think that you've fully thought through the implications\n> > > of replacing a trigger for a table that the current transaction has\n> > > already modified. Is it really sufficient, or even useful, to do\n> > > this:\n> > >\n> > > + /*\n> > > + * If this trigger has pending events, throw an error.\n> > > + */\n> > > + if (trigger_deferrable &&\n> > > + AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n> > >\n> > > As an example, if we change a BEFORE trigger to an AFTER trigger,\n> > > that's not going to affect the fact that we *already* fired that\n> > > trigger. Maybe this is okay and we just need to document it, but\n> > > I'm not\n> > convinced.\n> > >\n> > > * BTW, I don't think a trigger necessarily has to be deferrable in\n> > > order to have pending AFTER events. 
The existing use of\n> > > AfterTriggerPendingOnRel certainly doesn't assume that. But really,\n> > > I think we probably ought to be applying CheckTableNotInUse which'd\n> > > include that test. (Another way in which there's fuzzy thinking\n> > > here is that AfterTriggerPendingOnRel isn't specific to *this*\n> > > trigger.)\n> > Hmm, actually, when I just put a code of CheckTableNotInUse() in\n> > CreateTrigger(), it throws error like \"cannot CREATE OR REPLACE\n> > TRIGGER because it is being used by active queries in this session\".\n> > This causes an break of the protection for internal cases above and a\n> > contradiction of already passed test cases.\n> > I though adding a condition of in_partition==false to call\n> CheckTableNotInUse().\n> > But this doesn't work in a corner case that child trigger generated\n> > internally is pending and we don't want to allow the replacement of this kind\n> of trigger.\n> > Did you have any good idea to achieve both points at the same time ?\n> Still, in terms of this point, I'm waiting for a comment !\n\nI understand this patch is intended for helping users to migrate from other DBMSs (mainly Oracle?) because they can easily alter some trigger attributes (probably the trigger action and WHEN condition in practice.) OTOH, the above issue seems to be associated with the Postgres-specific constraint trigger that is created with CONSTRAINT clause. (Oracle and the SQL standard doesn't have an equivalent feature.)\n\nSo, how about just disallowing the combination of REPLACE and CONSTRAINT? I think nobody would be crippled with that. 
If someone wants the combination by all means, that can be a separate enhancement.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 27 Oct 2020 06:07:44 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi,\n\n\nFrom: Tsunakawa, Takayuki < tsunakawa.takay@fujitsu.com>\n> From: osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\n> > > > * I don't think that you've fully thought through the implications\n> > > > of replacing a trigger for a table that the current transaction\n> > > > has already modified. Is it really sufficient, or even useful, to\n> > > > do\n> > > > this:\n> > > >\n> > > > + /*\n> > > > + * If this trigger has pending events, throw an error.\n> > > > + */\n> > > > + if (trigger_deferrable &&\n> > > > + AfterTriggerPendingOnRel(RelationGetRelid(rel)))\n> > > >\n> > > > As an example, if we change a BEFORE trigger to an AFTER trigger,\n> > > > that's not going to affect the fact that we *already* fired that\n> > > > trigger. Maybe this is okay and we just need to document it, but\n> > > > I'm not\n> > > convinced.\n> > > >\n> > > > * BTW, I don't think a trigger necessarily has to be deferrable in\n> > > > order to have pending AFTER events. The existing use of\n> > > > AfterTriggerPendingOnRel certainly doesn't assume that. But\n> > > > really, I think we probably ought to be applying\n> > > > CheckTableNotInUse which'd include that test. 
(Another way in\n> > > > which there's fuzzy thinking here is that AfterTriggerPendingOnRel\n> > > > isn't specific to *this*\n> > > > trigger.)\n> > > Hmm, actually, when I just put a code of CheckTableNotInUse() in\n> > > CreateTrigger(), it throws error like \"cannot CREATE OR REPLACE\n> > > TRIGGER because it is being used by active queries in this session\".\n> > > This causes an break of the protection for internal cases above and\n> > > a contradiction of already passed test cases.\n> > > I though adding a condition of in_partition==false to call\n> > CheckTableNotInUse().\n> > > But this doesn't work in a corner case that child trigger generated\n> > > internally is pending and we don't want to allow the replacement of\n> > > this kind\n> > of trigger.\n> > > Did you have any good idea to achieve both points at the same time ?\n> > Still, in terms of this point, I'm waiting for a comment !\n> \n> I understand this patch is intended for helping users to migrate from other\n> DBMSs (mainly Oracle?) because they can easily alter some trigger attributes\n> (probably the trigger action and WHEN condition in practice.) OTOH, the\n> above issue seems to be associated with the Postgres-specific constraint\n> trigger that is created with CONSTRAINT clause. (Oracle and the SQL\n> standard doesn't have an equivalent feature.)\n> \n> So, how about just disallowing the combination of REPLACE and\n> CONSTRAINT? I think nobody would be crippled with that. If someone\n> wants the combination by all means, that can be a separate enhancement.\nI didn't notice this kind of perspective and you are right.\nIn order to achieve the purpose to help database migration from Oracle to Postgres,\nprohibitting the usage of OR REPLACE for constraint trigger is no problem.\n\nThanks for your great advice. 
I fixed it and created a new version.\nAlso, the size of the patch has become much smaller.\n\n\nBest,\n\tTakamichi Osumi", "msg_date": "Thu, 29 Oct 2020 09:00:58 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello Osumi-san.\n\nBelow are my v14 patch review comments for your consideration.\n\n===\n\n(1) COMMENT\nFile: NA\nMaybe next time consider using format-patch to make the patch. Then\nyou can include a comment to give the background/motivation for this\nchange.\n\n===\n\n(2) COMMENT\nFile: doc/src/sgml/ref/create_trigger.sgml\n@@ -446,6 +454,13 @@ UPDATE OF <replaceable>column_name1</replaceable>\n[, <replaceable>column_name2</\nCurrently it says:\nWhen replacing an existing trigger with CREATE OR REPLACE TRIGGER,\nthere are restrictions. You cannot replace constraint triggers. That\nmeans it is impossible to replace a regular trigger with a constraint\ntrigger and to replace a constraint trigger with another constraint\ntrigger.\n\n--\n\nIs that correct wording? I don't think so. Saying \"to replace a\nregular trigger with a constraint trigger\" is NOT the same as \"replace\na constraint trigger\".\n\nMaybe I am mistaken but I think the help and the code are no longer in\nsync anymore. e.g. In previous versions of this patch you used to\nverify replacement trigger kinds (regular/constraint) match. AFAIK you\nare not doing that in the current code (but you should be). So\nalthough you say \"impossible to replace a regular trigger with a\nconstraint trigger\" I don't see any code to check/enforce that ( ?? )\n\nIMO when you simplified this v14 patch you may have removed some extra\ntrigger kind conditions that should not have been removed.\n\nAlso, the test code should have detected this problem, but I think the\ntests have also been broken in v14. 
See later COMMENT (9).\n\n===\n\n(3) COMMENT\nFile: src/backend/commands/trigger.c\n@@ -185,6 +185,10 @@ CreateTrigger(CreateTrigStmt *stmt, const char\n*queryString,\n+ bool internal_trigger = false;\n\n--\n\nThere is potential for confusion of \"isInternal\" versus\n\"internal_trigger\". The meaning is not apparent from the names, but\nIIUC isInternal seems to be when creating an internal trigger, whereas\ninternal_trigger seems to be when you found an existing trigger that\nwas previously created as isInternal.\n\nMaybe something like \"existing_isInternal\" would be a better name\ninstead of \"internal_trigger\".\n\n===\n\n(4) COMMENT\nFile: src/backend/commands/trigger.c\n@@ -185,6 +185,10 @@ CreateTrigger(CreateTrigStmt *stmt, const char\n*queryString,\n+ bool is_update = false;\n\nConsider if \"was_replaced\" might be a better name than \"is_update\".\n\n===\n\n(5) COMMENT\nFile: src/backend/commands/trigger.c\n@@ -669,6 +673,81 @@ CreateTrigger(CreateTrigStmt *stmt, const char\n*queryString,\n+ if (!stmt->replace)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+ else\n+ {\n+ /*\n+ * An internal trigger cannot be replaced by another user defined\n+ * trigger. 
This should exclude the case that internal trigger is\n+ * child trigger of partition table and needs to be rewritten when\n+ * the parent trigger is replaced by user.\n+ */\n+ if (internal_trigger && isInternal == false && in_partition == false)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is an internal trigger\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+\n+ /*\n+ * CREATE OR REPLACE TRIGGER command can't replace constraint\n+ * trigger.\n+ */\n+ if (OidIsValid(existing_constraint_oid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DUPLICATE_OBJECT),\n+ errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\",\n+ stmt->trigname, RelationGetRelationName(rel))));\n+ }\n\nIt is not really necessary for the \"OR REPLACE\" code to be\ninside an \"else\" block like that because the \"if (!stmt->replace)\" has\nalready been tested above. Consider removing the \"else {\" to remove\nunnecessary indent if you want to.\n\n===\n\n(6) COMMENT\nFile: src/backend/commands/trigger.c\n(same code block as above)\n\nThe condition is strangely written:\ne.g.\nBefore: if (internal_trigger && isInternal == false && in_partition == false)\nAfter: if (internal_trigger && !isInternal && !in_partition)\n\n===\n\n(7) COMMENT\nFile: src/backend/commands/trigger.c\n(same code block as above)\n/*\n * CREATE OR REPLACE TRIGGER command can't replace constraint\n * trigger.\n */\n\n--\n\nOnly need to say\n/* Can't replace a constraint trigger. */\n\n===\n\n(8) COMMENT\nFile: src/include/nodes/parsenodes.h\n@@ -2429,6 +2429,9 @@ typedef struct CreateAmStmt\n\nThe comment does not need to say \"when true,\". 
Just saying \"replace\ntrigger if already exists\" is enough.\n\n===\n\n(9) COMMENT\nFile: src/test/regress/expected/triggers.out\n+-- test for detecting incompatible replacement of trigger\n+create table my_table (id integer);\n+create function funcA() returns trigger as $$\n+begin\n+ raise notice 'hello from funcA';\n+ return null;\n+end; $$ language plpgsql;\n+create function funcB() returns trigger as $$\n+begin\n+ raise notice 'hello from funcB';\n+ return null;\n+end; $$ language plpgsql;\n+create or replace trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcA();\n+create constraint trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcB(); -- should fail\n+ERROR: trigger \"my_trig\" for relation \"my_table\" already exists\n\n--\n\nI think this test has been broken in v14. That last \"create constraint\ntrigger my_trig\" above can never be expected to work simply because\nyou are not specifying the \"OR REPLACE\" syntax. So in fact this is not\nproperly testing for incompatible types at all. It needs to say\n\"create OR REPLACE constraint trigger my_trig\" to be testing what it\nclaims to be testing.\n\nI also think there is a missing check in the code - see COMMENT (2) -\nfor handling this scenario. But since this test case is broken you do\nnot then notice the code check is missing.\n\n===\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Nov 2020 16:38:58 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi\r\n\r\n\r\nPeter-San, thanks for your support.\r\nOn Monday, November 2, 2020 2:39 PM Peter Smith wrote:\r\n> ===\r\n> \r\n> (1) COMMENT\r\n> File: NA\r\n> Maybe next time consider using format-patch to make the patch. Then you\r\n> can include a comment to give the background/motivation for this change.\r\nOK. 
How about v15 ?\r\n\r\n> ===\r\n> \r\n> (2) COMMENT\r\n> File: doc/src/sgml/ref/create_trigger.sgml\r\n> @@ -446,6 +454,13 @@ UPDATE OF\r\n> <replaceable>column_name1</replaceable>\r\n> [, <replaceable>column_name2</\r\n> Currently it says:\r\n> When replacing an existing trigger with CREATE OR REPLACE TRIGGER,\r\n> there are restrictions. You cannot replace constraint triggers. That means it is\r\n> impossible to replace a regular trigger with a constraint trigger and to replace\r\n> a constraint trigger with another constraint trigger.\r\n> \r\n> --\r\n> \r\n> Is that correct wording? I don't think so. Saying \"to replace a regular trigger\r\n> with a constraint trigger\" is NOT the same as \"replace a constraint trigger\".\r\nI corrected my wording in create_trigger.sgml, which should cause less confusion\r\nthan v14. The reason why I changed the documents is described below.\r\n\r\n> Maybe I am mistaken but I think the help and the code are no longer in sync\r\n> anymore. e.g. In previous versions of this patch you used to verify\r\n> replacement trigger kinds (regular/constraint) match. AFAIK you are not\r\n> doing that in the current code (but you should be). So although you say\r\n> \"impossible to replace a regular trigger with a constraint trigger\" I don't see\r\n> any code to check/enforce that ( ?? )\r\n> IMO when you simplified this v14 patch you may have removed some extra\r\n> trigger kind conditions that should not have been removed.\r\n> \r\n> Also, the test code should have detected this problem, but I think the tests\r\n> have also been broken in v14. 
See later COMMENT (9).\r\nDon't worry and those are not broken.\r\n\r\nI changed some codes in gram.y to throw a syntax error when\r\nOR REPLACE clause is used with CREATE CONSTRAINT TRIGGER.\r\n\r\nIn the previous discussion with Tsunakawa-San in this thread,\r\nI judged that OR REPLACE clause is\r\nnot necessary for *CONSTRAINT* TRIGGER to achieve the purpose of this patch.\r\nIt is to support the database migration from Oracle to Postgres\r\nby supporting the same syntax for trigger replacement. Here,\r\nbecause the constraint trigger is unique to the latter,\r\nI prohibited the usage of CREATE CONSTRAINT TRIGGER and OR REPLACE\r\nclauses at the same time at the grammatical level.\r\nDid you agree with this way of modification ?\r\n\r\nTo prohibit the combination OR REPLACE and CONSTRAINT clauses may seem\r\na little bit radical but I refer to an example of the combination to use\r\nCREATE CONSTRAINT TRIGGER and AFTER clause.\r\nWhen the timing of trigger is not AFTER for CONSTRAINT TRIGGER,\r\nan syntax error is caused. I learnt and followed the way for\r\nmy modification from it.\r\n\r\n> ===\r\n> \r\n> (3) COMMENT\r\n> File: src/backend/commands/trigger.c\r\n> @@ -185,6 +185,10 @@ CreateTrigger(CreateTrigStmt *stmt, const char\r\n> *queryString,\r\n> + bool internal_trigger = false;\r\n> \r\n> --\r\n> \r\n> There is potential for confusion of \"isInternal\" versus \"internal_trigger\". The\r\n> meaning is not apparent from the names, but IIUC isInternal seems to be\r\n> when creating an internal trigger, whereas internal_trigger seems to be when\r\n> you found an existing trigger that was previously created as isInternal.\r\n> \r\n> Maybe something like \"existing_isInternal\" would be a better name instead of\r\n> \"internal_trigger\".\r\nDefinitely sounds better. 
I fixed the previous confusing name.\r\n\r\n> \r\n> ===\r\n> \r\n> (4) COMMENT\r\n> File: src/backend/commands/trigger.c\r\n> @@ -185,6 +185,10 @@ CreateTrigger(CreateTrigStmt *stmt, const char\r\n> *queryString,\r\n> + bool is_update = false;\r\n> \r\n> Consider if \"was_replaced\" might be a better name than \"is_update\".\r\n> \r\n> ===\r\nAlso, this must be good. Done.\r\n\r\n> \r\n> (5) COMMENT\r\n> File: src/backend/commands/trigger.c\r\n> @@ -669,6 +673,81 @@ CreateTrigger(CreateTrigStmt *stmt, const char\r\n> *queryString,\r\n> + if (!stmt->replace)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" already exists\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> + else\r\n> + {\r\n> + /*\r\n> + * An internal trigger cannot be replaced by another user defined\r\n> + * trigger. This should exclude the case that internal trigger is\r\n> + * child trigger of partition table and needs to be rewritten when\r\n> + * the parent trigger is replaced by user.\r\n> + */\r\n> + if (internal_trigger && isInternal == false && in_partition == false)\r\n> + ereport(ERROR, (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is an internal trigger\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> +\r\n> + /*\r\n> + * CREATE OR REPLACE TRIGGER command can't replace constraint\r\n> + * trigger.\r\n> + */\r\n> + if (OidIsValid(existing_constraint_oid))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DUPLICATE_OBJECT),\r\n> + errmsg(\"trigger \\\"%s\\\" for relation \\\"%s\\\" is a constraint trigger\",\r\n> + stmt->trigname, RelationGetRelationName(rel))));\r\n> + }\r\n> \r\n> It is not really necessary for the \"OR REPLACE\" code to need to be inside an\r\n> \"else\" block like that because the \"if (!stmt->replace)\" has already been\r\n> tested above. 
Consider removing the \"else {\" to remove unnecessary indent if\r\n> you want to.\r\nYeah, you are right. Fixed.\r\n\r\n> ===\r\n> \r\n> (6) COMMENT\r\n> File: src/backend/commands/trigger.c\r\n> (same code block as above)\r\n> \r\n> Condition is strangely written:\r\n> e.g.\r\n> Before: if (internal_trigger && isInternal == false && in_partition == false)\r\n> After: if (internal_trigger && !isInternal && !in_partition)\r\nOK. Done\r\n\r\n> \r\n> ===\r\n> \r\n> (7) COMMENT\r\n> File: src/backend/commands/trigger.c\r\n> (same code block as above)\r\n> /*\r\n> * CREATE OR REPLACE TRIGGER command can't replace constraint\r\n> * trigger.\r\n> */\r\n> \r\n> --\r\n> \r\n> Only need to say\r\n> /* Can't replace a constraint trigger. */\r\n> \r\n> ===\r\n> \r\n> (8) COMMENT\r\n> File: src/include/nodes/parsenodes.h\r\n> @@ -2429,6 +2429,9 @@ typedef struct CreateAmStmt\r\n> \r\n> The comment does not need to say \"when true,\". Just saying \"replace trigger\r\n> if already exists\" is enough.\r\n> \r\n> ===\r\nApplied.\r\n\r\n\r\n> (9) COMMENT\r\n> File: src/test/regress/expected/triggers.out\r\n> +-- test for detecting incompatible replacement of trigger create table\r\n> +my_table (id integer); create function funcA() returns trigger as $$\r\n> +begin\r\n> + raise notice 'hello from funcA';\r\n> + return null;\r\n> +end; $$ language plpgsql;\r\n> +create function funcB() returns trigger as $$ begin\r\n> + raise notice 'hello from funcB';\r\n> + return null;\r\n> +end; $$ language plpgsql;\r\n> +create or replace trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcA(); create constraint trigger\r\n> +my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcB(); -- should fail\r\n> +ERROR: trigger \"my_trig\" for relation \"my_table\" already exists\r\n> \r\n> --\r\n> \r\n> I think this test has been broken in v14. 
That last \"create constraint trigger\r\n> my_trig\" above can never be expected to work simply because you are not\r\n> specifying the \"OR REPLACE\" syntax. \r\nAs I described above, the grammatical error occurs to use\r\n\"CREATE OR REPLACE CONSTRAINT TRIGGER\" in v14 (and v15 also).\r\nAt the time to write v14, I wanted to list up all imcompatible cases\r\neven if some tests did *not* or can *not* contain \"OR REPLACE\" clause.\r\nI think this way of change seemed broken to you.\r\n\r\nStill now I think it's a good idea to cover such confusing cases,\r\nso I didn't remove both failure tests in v15\r\n(1) CREATE OR REPLACE TRIGGER creates a regular trigger and execute\r\n CREATE CONSTRAINT TRIGGER, which should fail\r\n(2) CREATE CONSTRAINT TRIGGER creates a constraint trigger and\r\n execute CREATE OR REPLACE TRIGGER, which should fail\r\nin order to show in such cases, the detection of error nicely works.\r\nThe results of tests are fine.\r\n\r\n> So in fact this is not properly testing for\r\n> incompatible types at all. It needs to say \"create OR REPLACE constraint\r\n> trigger my_trig\" to be testing what it claims to be testing.\r\n> \r\n> I also think there is a missing check in the code - see COMMENT (2) - for\r\n> handling this scenario. But since this test case is broken you do not then\r\n> notice the code check is missing.\r\n> \r\n> ===\r\nMy inappropriate explanation especially in the create_trigger.sgml\r\nmade you think those are broken. 
But, as I said they are necessary still\r\nto cover corner combination cases.\r\n\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Wed, 4 Nov 2020 03:53:33 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello Osumi-san.\n\nI have checked the v15 patch with regard to my v14 review comments.\n\nOn Wed, Nov 4, 2020 at 2:53 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> > (1) COMMENT\n> > File: NA\n> > Maybe next time consider using format-patch to make the patch. Then you\n> > can include a comment to give the background/motivation for this change.\n> OK. How about v15 ?\n\nYes, it is good now, apart from some typos in the first sentence:\n\"grammer\" --> \"grammar\"\n\"exisiting\" --> \"existing\"\n\n>\n> > ===\n> >\n> > (2) COMMENT\n> > File: doc/src/sgml/ref/create_trigger.sgml\n> > @@ -446,6 +454,13 @@ UPDATE OF\n> > <replaceable>column_name1</replaceable>\n> > [, <replaceable>column_name2</\n> > Currently it says:\n> > When replacing an existing trigger with CREATE OR REPLACE TRIGGER,\n> > there are restrictions. You cannot replace constraint triggers. That means it is\n> > impossible to replace a regular trigger with a constraint trigger and to replace\n> > a constraint trigger with another constraint trigger.\n> >\n> > --\n> >\n> > Is that correct wording? I don't think so. Saying \"to replace a regular trigger\n> > with a constraint trigger\" is NOT the same as \"replace a constraint trigger\".\n> I corrected my wording in create_trigger.sgml, which should cause less confusion\n> than v14. The reason why I changed the documents is described below.\n\nYes, OK. 
But it might be simpler still just to it like:\n\"CREATE OR REPLACE TRIGGER works only for replacing a regular (not\nconstraint) trigger with another regular trigger.\"\n\n>\n> > Maybe I am mistaken but I think the help and the code are no longer in sync\n> > anymore. e.g. In previous versions of this patch you used to verify\n> > replacement trigger kinds (regular/constraint) match. AFAIK you are not\n> > doing that in the current code (but you should be). So although you say\n> > \"impossible to replace a regular trigger with a constraint trigger\" I don't see\n> > any code to check/enforce that ( ?? )\n> > IMO when you simplified this v14 patch you may have removed some extra\n> > trigger kind conditions that should not have been removed.\n> >\n> > Also, the test code should have detected this problem, but I think the tests\n> > have also been broken in v14. See later COMMENT (9).\n> Don't worry and those are not broken.\n>\n> I changed some codes in gram.y to throw a syntax error when\n> OR REPLACE clause is used with CREATE CONSTRAINT TRIGGER.\n>\n> In the previous discussion with Tsunakawa-San in this thread,\n> I judged that OR REPLACE clause is\n> not necessary for *CONSTRAINT* TRIGGER to achieve the purpose of this patch.\n> It is to support the database migration from Oracle to Postgres\n> by supporting the same syntax for trigger replacement. Here,\n> because the constraint trigger is unique to the latter,\n> I prohibited the usage of CREATE CONSTRAINT TRIGGER and OR REPLACE\n> clauses at the same time at the grammatical level.\n> Did you agree with this way of modification ?\n>\n> To prohibit the combination OR REPLACE and CONSTRAINT clauses may seem\n> a little bit radical but I refer to an example of the combination to use\n> CREATE CONSTRAINT TRIGGER and AFTER clause.\n> When the timing of trigger is not AFTER for CONSTRAINT TRIGGER,\n> an syntax error is caused. I learnt and followed the way for\n> my modification from it.\n\nOK, I understand now. 
In my v14 review I failed to notice that you did\nit this way, which is why I thought a check was missing in the code.\n\nI do think this is a bit subtle. Perhaps this should be asserted and\ncommented a bit more in the code to make it much clearer what you did.\nFor example:\n----------\nBEFORE\n/*\n * can't replace constraint trigger.\n */\n if (OidIsValid(existing_constraint_oid))\nAFTER\n/*\n * It is not allowed to replace with a constraint trigger.\n * The OR REPLACE syntax is not available for constraint triggers (see gram.y).\n */\nAssert(!stmt->isconstraint);\n\n/*\n * It is not allowed to replace an existing constraint trigger.\n */\n if (OidIsValid(existing_constraint_oid))\n----------\n\n\n> > (9) COMMENT\n> > File: src/test/regress/expected/triggers.out\n> > +-- test for detecting incompatible replacement of trigger create table\n> > +my_table (id integer); create function funcA() returns trigger as $$\n> > +begin\n> > + raise notice 'hello from funcA';\n> > + return null;\n> > +end; $$ language plpgsql;\n> > +create function funcB() returns trigger as $$ begin\n> > + raise notice 'hello from funcB';\n> > + return null;\n> > +end; $$ language plpgsql;\n> > +create or replace trigger my_trig\n> > + after insert on my_table\n> > + for each row execute procedure funcA(); create constraint trigger\n> > +my_trig\n> > + after insert on my_table\n> > + for each row execute procedure funcB(); -- should fail\n> > +ERROR: trigger \"my_trig\" for relation \"my_table\" already exists\n> >\n> > --\n> >\n> > I think this test has been broken in v14. 
That last \"create constraint trigger\n> > my_trig\" above can never be expected to work simply because you are not\n> > specifying the \"OR REPLACE\" syntax.\n> As I described above, the grammatical error occurs to use\n> \"CREATE OR REPLACE CONSTRAINT TRIGGER\" in v14 (and v15 also).\n> At the time to write v14, I wanted to list up all imcompatible cases\n> even if some tests did *not* or can *not* contain \"OR REPLACE\" clause.\n> I think this way of change seemed broken to you.\n>\n> Still now I think it's a good idea to cover such confusing cases,\n> so I didn't remove both failure tests in v15\n> (1) CREATE OR REPLACE TRIGGER creates a regular trigger and execute\n> CREATE CONSTRAINT TRIGGER, which should fail\n> (2) CREATE CONSTRAINT TRIGGER creates a constraint trigger and\n> execute CREATE OR REPLACE TRIGGER, which should fail\n> in order to show in such cases, the detection of error nicely works.\n> The results of tests are fine.\n>\n> > So in fact this is not properly testing for\n> > incompatible types at all. It needs to say \"create OR REPLACE constraint\n> > trigger my_trig\" to be testing what it claims to be testing.\n> >\n> > I also think there is a missing check in the code - see COMMENT (2) - for\n> > handling this scenario. But since this test case is broken you do not then\n> > notice the code check is missing.\n> >\n> > ===\n> My inappropriate explanation especially in the create_trigger.sgml\n> made you think those are broken. But, as I said they are necessary still\n> to cover corner combination cases.\n\nYes, I agree that all the combinations should be present. 
That is why\nI wrote the \"create constraint trigger\" should be written \"create OR\nREPLACE constraint trigger\" because otherwise AFAIK there is no test\nattempting to replace using a constraint trigger - you are only really\ntesting you cannot create a duplicate name trigger (but those tests\nalready existed)\n\nIn other words, IMO the \"incompatible\" tests should be like below (I\nadded comments to try make it more clear what are the combinations)\n----------\ncreate or replace trigger my_trig\n after insert on my_table\n for each row execute procedure funcA(); -- 1. create a regular trigger. OK\ncreate or replace constraint trigger my_trig\n after insert on my_table\n for each row execute procedure funcB(); -- Test 1a. Replace regular\ntrigger with constraint trigger. Expect ERROR (bad syntax)\ndrop trigger my_trig on my_table;\ncreate constraint trigger my_trig -- 2. create a constraint trigger. OK\n after insert on my_table\n for each row execute procedure funcA();\ncreate or replace trigger my_trig\n after insert on my_table\n for each row execute procedure funcB(); -- Test 2a. Replace\nconstraint trigger with regular trigger. Expect ERROR (cannot replace\na constraint trigger)\ncreate or replace constraint trigger my_trig\n after insert on my_table\n for each row execute procedure funcB(); -- Test 2b. Replace\nconstraint trigger with constraint trigger. 
Expect ERROR (bad syntax)\ndrop table my_table;\ndrop function funcA();\ndrop function funcB();\n----------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 5 Nov 2020 15:21:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi,\r\n\r\n\r\nOn Thursday, November 5, 2020 1:22 PM\r\nPeter Smith <smithpb2250@gmail.com> wrote:\r\n> On Wed, Nov 4, 2020 at 2:53 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > > (1) COMMENT\r\n> > > File: NA\r\n> > > Maybe next time consider using format-patch to make the patch. Then\r\n> > > you can include a comment to give the background/motivation for this\r\n> change.\r\n> > OK. How about v15 ?\r\n> \r\n> Yes, it is good now, apart from some typos in the first sentence:\r\n> \"grammer\" --> \"grammar\"\r\n> \"exisiting\" --> \"existing\"\r\nSorry for such minor mistakes. Fixed.\r\n\r\n\r\n> > > ===\r\n> > >\r\n> > > (2) COMMENT\r\n> > > File: doc/src/sgml/ref/create_trigger.sgml\r\n> > > @@ -446,6 +454,13 @@ UPDATE OF\r\n> > > <replaceable>column_name1</replaceable>\r\n> > > [, <replaceable>column_name2</\r\n> > > Currently it says:\r\n> > > When replacing an existing trigger with CREATE OR REPLACE TRIGGER,\r\n> > > there are restrictions. You cannot replace constraint triggers. That\r\n> > > means it is impossible to replace a regular trigger with a\r\n> > > constraint trigger and to replace a constraint trigger with another\r\n> constraint trigger.\r\n> > >\r\n> > > --\r\n> > >\r\n> > > Is that correct wording? I don't think so. Saying \"to replace a\r\n> > > regular trigger with a constraint trigger\" is NOT the same as \"replace a\r\n> constraint trigger\".\r\n> > I corrected my wording in create_trigger.sgml, which should cause less\r\n> > confusion than v14. The reason why I changed the documents is described\r\n> below.\r\n> \r\n> Yes, OK. 
But it might be simpler still just to it like:\r\n> \"CREATE OR REPLACE TRIGGER works only for replacing a regular (not\r\n> constraint) trigger with another regular trigger.\"\r\nYeah, this kind of supplementary words help user to understand the\r\nexact usage of this feature. Thanks.\r\n\r\n> \r\n> >\r\n> > > Maybe I am mistaken but I think the help and the code are no longer\r\n> > > in sync anymore. e.g. In previous versions of this patch you used to\r\n> > > verify replacement trigger kinds (regular/constraint) match. AFAIK\r\n> > > you are not doing that in the current code (but you should be). So\r\n> > > although you say \"impossible to replace a regular trigger with a\r\n> > > constraint trigger\" I don't see any code to check/enforce that ( ??\r\n> > > ) IMO when you simplified this v14 patch you may have removed some\r\n> > > extra trigger kind conditions that should not have been removed.\r\n> > >\r\n> > > Also, the test code should have detected this problem, but I think\r\n> > > the tests have also been broken in v14. See later COMMENT (9).\r\n> > Don't worry and those are not broken.\r\n> >\r\n> > I changed some codes in gram.y to throw a syntax error when OR REPLACE\r\n> > clause is used with CREATE CONSTRAINT TRIGGER.\r\n> >\r\n> > In the previous discussion with Tsunakawa-San in this thread, I judged\r\n> > that OR REPLACE clause is not necessary for *CONSTRAINT* TRIGGER to\r\n> > achieve the purpose of this patch.\r\n> > It is to support the database migration from Oracle to Postgres by\r\n> > supporting the same syntax for trigger replacement. 
Here, because the\r\n> > constraint trigger is unique to the latter, I prohibited the usage of\r\n> > CREATE CONSTRAINT TRIGGER and OR REPLACE clauses at the same\r\n> time at\r\n> > the grammatical level.\r\n> > Did you agree with this way of modification ?\r\n> >\r\n> > To prohibit the combination OR REPLACE and CONSTRAINT clauses may\r\n> seem\r\n> > a little bit radical but I refer to an example of the combination to\r\n> > use CREATE CONSTRAINT TRIGGER and AFTER clause.\r\n> > When the timing of trigger is not AFTER for CONSTRAINT TRIGGER, an\r\n> > syntax error is caused. I learnt and followed the way for my\r\n> > modification from it.\r\n> \r\n> OK, I understand now. In my v14 review I failed to notice that you did it this\r\n> way, which is why I thought a check was missing in the code.\r\n> \r\n> I do think this is a bit subtle. Perhaps this should be asserted and commented\r\n> a bit more in the code to make it much clearer what you did.\r\n> For example:\r\n> ----------\r\n> BEFORE\r\n> /*\r\n> * can't replace constraint trigger.\r\n> */\r\n> if (OidIsValid(existing_constraint_oid))\r\n> AFTER\r\n> /*\r\n> * It is not allowed to replace with a constraint trigger.\r\n> * The OR REPLACE syntax is not available for constraint triggers (see\r\n> gram.y).\r\n> */\r\n> Assert(!stmt->isconstraint);\r\n> \r\n> /*\r\n> * It is not allowed to replace an existing constraint trigger.\r\n> */\r\n> if (OidIsValid(existing_constraint_oid))\r\n> ----------\r\nAgreed.\r\nNote that this part of the latest patch v16 shows different indent of the comments\r\nthat you gave me in your previous reply but those came from the execution of pgindent.\r\n\r\n> \r\n> > > (9) COMMENT\r\n> > > File: src/test/regress/expected/triggers.out\r\n> > > +-- test for detecting incompatible replacement of trigger create\r\n> > > +table my_table (id integer); create function funcA() returns\r\n> > > +trigger as $$ begin\r\n> > > + raise notice 'hello from funcA';\r\n> > > + return null;\r\n> > 
> +end; $$ language plpgsql;\r\n> > > +create function funcB() returns trigger as $$ begin\r\n> > > + raise notice 'hello from funcB';\r\n> > > + return null;\r\n> > > +end; $$ language plpgsql;\r\n> > > +create or replace trigger my_trig\r\n> > > + after insert on my_table\r\n> > > + for each row execute procedure funcA(); create constraint trigger\r\n> > > +my_trig\r\n> > > + after insert on my_table\r\n> > > + for each row execute procedure funcB(); -- should fail\r\n> > > +ERROR: trigger \"my_trig\" for relation \"my_table\" already exists\r\n> > >\r\n> > > --\r\n> > >\r\n> > > I think this test has been broken in v14. That last \"create\r\n> > > constraint trigger my_trig\" above can never be expected to work\r\n> > > simply because you are not specifying the \"OR REPLACE\" syntax.\r\n> > As I described above, the grammatical error occurs to use \"CREATE OR\r\n> > REPLACE CONSTRAINT TRIGGER\" in v14 (and v15 also).\r\n> > At the time to write v14, I wanted to list up all imcompatible cases\r\n> > even if some tests did *not* or can *not* contain \"OR REPLACE\" clause.\r\n> > I think this way of change seemed broken to you.\r\n> >\r\n> > Still now I think it's a good idea to cover such confusing cases, so I\r\n> > didn't remove both failure tests in v15\r\n> > (1) CREATE OR REPLACE TRIGGER creates a regular trigger and execute\r\n> > CREATE CONSTRAINT TRIGGER, which should fail\r\n> > (2) CREATE CONSTRAINT TRIGGER creates a constraint trigger and\r\n> > execute CREATE OR REPLACE TRIGGER, which should fail in order to\r\n> > show in such cases, the detection of error nicely works.\r\n> > The results of tests are fine.\r\n> >\r\n> > > So in fact this is not properly testing for incompatible types at\r\n> > > all. It needs to say \"create OR REPLACE constraint trigger my_trig\"\r\n> > > to be testing what it claims to be testing.\r\n> > >\r\n> > > I also think there is a missing check in the code - see COMMENT (2)\r\n> > > - for handling this scenario. 
But since this test case is broken you\r\n> > > do not then notice the code check is missing.\r\n> > >\r\n> > > ===\r\n> > My inappropriate explanation especially in the create_trigger.sgml\r\n> > made you think those are broken. But, as I said they are necessary\r\n> > still to cover corner combination cases.\r\n> \r\n> Yes, I agree that all the combinations should be present. That is why I wrote\r\n> the \"create constraint trigger\" should be written \"create OR REPLACE\r\n> constraint trigger\" because otherwise AFAIK there is no test attempting to\r\n> replace using a constraint trigger - you are only really testing you cannot\r\n> create a duplicate name trigger (but those tests already existed)\r\n> \r\n> In other words, IMO the \"incompatible\" tests should be like below (I added\r\n> comments to try make it more clear what are the combinations)\r\n> ----------\r\n> create or replace trigger my_trig\r\n> after insert on my_table\r\n> for each row execute procedure funcA(); -- 1. create a regular trigger. OK\r\n> create or replace constraint trigger my_trig after insert on my_table for\r\n> each row execute procedure funcB(); -- Test 1a. Replace regular trigger with\r\n> constraint trigger. Expect ERROR (bad syntax) drop trigger my_trig on\r\n> my_table; create constraint trigger my_trig -- 2. create a constraint trigger. OK\r\n> after insert on my_table for each row execute procedure funcA(); create or\r\n> replace trigger my_trig after insert on my_table for each row execute\r\n> procedure funcB(); -- Test 2a. Replace constraint trigger with regular trigger.\r\n> Expect ERROR (cannot replace a constraint trigger) create or replace\r\n> constraint trigger my_trig after insert on my_table for each row execute\r\n> procedure funcB(); -- Test 2b. Replace constraint trigger with constraint\r\n> trigger. 
Expect ERROR (bad syntax) drop table my_table; drop function\r\n> funcA(); drop function funcB();\r\n> ----------\r\nI understand that\r\nI need to add 2 syntax error cases and\r\n1 error case to replace constraint trigger at least. It makes sense.\r\nAt the same time, I supposed that the order of the tests\r\nin v15 patch is somehow hard to read.\r\nSo, I decided to sort out those and take your new sets of tests there.\r\nWhat I'd like to test there is not different, though.\r\nPlease have a look at the new patch.\r\n\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Thu, 5 Nov 2020 23:57:29 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello Osumi-san.\n\nI have checked again v16 patch w.r.t. to my previous comments.\n\n> > > > (2) COMMENT\n> > > > File: doc/src/sgml/ref/create_trigger.sgml\n> > > > @@ -446,6 +454,13 @@ UPDATE OF\n> > > > <replaceable>column_name1</replaceable>\n> > > > [, <replaceable>column_name2</\n> > > > Currently it says:\n> > > > When replacing an existing trigger with CREATE OR REPLACE TRIGGER,\n> > > > there are restrictions. You cannot replace constraint triggers. That\n> > > > means it is impossible to replace a regular trigger with a\n> > > > constraint trigger and to replace a constraint trigger with another\n> > constraint trigger.\n> > > >\n> > > > --\n> > > >\n> > > > Is that correct wording? I don't think so. Saying \"to replace a\n> > > > regular trigger with a constraint trigger\" is NOT the same as \"replace a\n> > constraint trigger\".\n> > > I corrected my wording in create_trigger.sgml, which should cause less\n> > > confusion than v14. The reason why I changed the documents is described\n> > below.\n> >\n> > Yes, OK. 
But it might be simpler still just to it like:\n> > \"CREATE OR REPLACE TRIGGER works only for replacing a regular (not\n> > constraint) trigger with another regular trigger.\"\n> Yeah, this kind of supplementary words help user to understand the\n> exact usage of this feature. Thanks.\n\nActually, I meant that after making that 1st sentence wording change,\nI thought the 2nd sentence (i.e. \"That means it is impossible...\") is\nno longer needed at all since it is just re-stating what the 1st\nsentence already says.\n\nBut if you prefer to leave it worded how it is now that is ok too.\n\n> > > > (9) COMMENT\n(snip)\n> I understand that\n> I need to add 2 syntax error cases and\n> 1 error case to replace constraint trigger at least. It makes sense.\n> At the same time, I supposed that the order of the tests\n> in v15 patch is somehow hard to read.\n> So, I decided to sort out those and take your new sets of tests there.\n> What I'd like to test there is not different, though.\n> Please have a look at the new patch.\n\nYes, the tests are generally OK, but unfortunately a few new problems\nare introduced with the refactoring of the combination tests.\n\n1) It looks like about 40 lines of test code are cut/paste 2 times by accident\n2) Typo \"gramatically\" --> \"grammatically\"\n3) Your last test described as \"create or replace constraint trigger\nis not gramatically correct.\" is not really doing what it is meant to\ndo. That test was supposed to be trying to replace an existing\nCONSTRAINT trigger.\n\nIMO if all the combination tests were consistently commented like my 8\nexamples below then risk of accidental mistakes is reduced.\ne.g.\n-- 1. Overwrite existing regular trigger with regular trigger (without\nOR REPLACE)\n-- 2. Overwrite existing regular trigger with regular trigger (with OR REPLACE)\n-- 3. Overwrite existing regular trigger with constraint trigger\n(without OR REPLACE)\n-- 4. 
Overwrite existing regular trigger with constraint trigger (with\nOR REPLACE)\n-- 5. Overwrite existing constraint trigger with regular trigger\n(without OR REPLACE)\n-- 6. Overwrite existing constraint trigger with regular trigger (with\nOR REPLACE)\n-- 7. Overwrite existing constraint trigger with constraint trigger\n(without OR REPLACE)\n-- 8. Overwrite existing constraint trigger with constraint trigger\n(with OR REPLACE)\n\nTo avoid any confusion I have attached triggers.sql updated how I\nthink it should be. Please compare it to see what I mean. PSA.\n\nI hope it helps.\n\n===\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 6 Nov 2020 16:24:46 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello\r\n\r\n\r\nOn Friday, November 6, 2020 2:25 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> > > Yes, OK. But it might be simpler still just to it like:\r\n> > > \"CREATE OR REPLACE TRIGGER works only for replacing a regular (not\r\n> > > constraint) trigger with another regular trigger.\"\r\n> > Yeah, this kind of supplementary words help user to understand the\r\n> > exact usage of this feature. Thanks.\r\n> \r\n> Actually, I meant that after making that 1st sentence wording change, I\r\n> thought the 2nd sentence (i.e. \"That means it is impossible...\") is no longer\r\n> needed at all since it is just re-stating what the 1st sentence already says.\r\n> \r\n> But if you prefer to leave it worded how it is now that is ok too.\r\nThe simpler, the better for sure ? I deleted that 2nd sentence.\r\n\r\n\r\n> > > > > (9) COMMENT\r\n> (snip)\r\n> > I understand that\r\n> > I need to add 2 syntax error cases and\r\n> > 1 error case to replace constraint trigger at least. 
It makes sense.\r\n> > At the same time, I supposed that the order of the tests in v15 patch\r\n> > is somehow hard to read.\r\n> > So, I decided to sort out those and take your new sets of tests there.\r\n> > What I'd like to test there is not different, though.\r\n> > Please have a look at the new patch.\r\n> \r\n> Yes, the tests are generally OK, but unfortunately a few new problems are\r\n> introduced with the refactoring of the combination tests.\r\n> \r\n> 1) It looks like about 40 lines of test code are cut/paste 2 times by accident\r\nThis was not a mistake. The cases of 40 lines are with OR REPLACE to define\r\neach regular trigger that will be overwritten.\r\nBut, it doesn't make nothing probably so I deleted such cases.\r\nPlease forget that part.\r\n\r\n> 2) Typo \"gramatically\" --> \"grammatically\"\r\n> 3) Your last test described as \"create or replace constraint trigger is not\r\n> gramatically correct.\" is not really doing what it is meant to do. That test was\r\n> supposed to be trying to replace an existing CONSTRAINT trigger.\r\nSigh. Yeah, those were not right. Fixed.\r\n\r\n\r\n> \r\n> IMO if all the combination tests were consistently commented like my 8\r\n> examples below then risk of accidental mistakes is reduced.\r\n> e.g.\r\n> -- 1. Overwrite existing regular trigger with regular trigger (without OR\r\n> REPLACE)\r\n> -- 2. Overwrite existing regular trigger with regular trigger (with OR REPLACE)\r\n> -- 3. Overwrite existing regular trigger with constraint trigger (without OR\r\n> REPLACE)\r\n> -- 4. Overwrite existing regular trigger with constraint trigger (with OR\r\n> REPLACE)\r\n> -- 5. Overwrite existing constraint trigger with regular trigger (without OR\r\n> REPLACE)\r\n> -- 6. Overwrite existing constraint trigger with regular trigger (with OR\r\n> REPLACE)\r\n> -- 7. Overwrite existing constraint trigger with constraint trigger (without OR\r\n> REPLACE)\r\n> -- 8. 
Overwrite existing constraint trigger with constraint trigger (with OR\r\n> REPLACE)\r\n> \r\n> To avoid any confusion I have attached triggers.sql updated how I think it\r\n> should be. Please compare it to see what I mean. PSA.\r\n> \r\n> I hope it helps.\r\nI cannot thank you enough.\r\n\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Fri, 6 Nov 2020 08:06:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hello Osumi-san.\n\nI have checked the latest v17 patch w.r.t. to my previous comments.\n\nThe v17 patch applies cleanly.\n\nmake check is successful.\n\nThe regenerated docs look OK.\n\nI have no further review comments, so have flagged this v17 as \"ready\nfor committer\" - https://commitfest.postgresql.org/30/2307/\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sat, 7 Nov 2020 15:30:21 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "On Sat, Nov 7, 2020 at 10:00 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hello Osumi-san.\n>\n> I have checked the latest v17 patch w.r.t. to my previous comments.\n>\n> The v17 patch applies cleanly.\n>\n> make check is successful.\n>\n> The regenerated docs look OK.\n>\n> I have no further review comments, so have flagged this v17 as \"ready\n> for committer\" - https://commitfest.postgresql.org/30/2307/\n>\n\nThe patch looks fine to me however I feel that in the test case there\nare a lot of duplicate statement which can be reduced\ne.g.\n+-- 1. 
Overwrite existing regular trigger with regular trigger\n(without OR REPLACE)\n+create trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcA();\n+create trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcB(); -- should fail\n+drop trigger my_trig on my_table;\n+\n+-- 2. Overwrite existing regular trigger with regular trigger (with OR REPLACE)\n+create trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcA();\n+insert into my_table values (1);\n+create or replace trigger my_trig\n+ after insert on my_table\n+ for each row execute procedure funcB(); -- OK\n+insert into my_table values (1);\n+drop trigger my_trig on my_table;\n\nIn this test, test 1 failed because it tried to change the trigger\nfunction without OR REPLACE, which is fine but now test 2 can continue\nfrom there, I mean we don't need to drop the trigger at end of the\ntest1 and then test2 can try it with OR REPLACE syntax. This way we\ncan reduce the extra statement execution which is not necessary.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 7 Nov 2020 10:35:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "Hi,\r\n\r\n\r\nOn Saturday, Nov 7, 2020 2:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> The patch looks fine to me however I feel that in the test case there are a lot\r\n> of duplicate statement which can be reduced e.g.\r\n> +-- 1. Overwrite existing regular trigger with regular trigger\r\n> (without OR REPLACE)\r\n> +create trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcA(); create trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcB(); -- should fail drop trigger\r\n> +my_trig on my_table;\r\n> +\r\n> +-- 2. 
Overwrite existing regular trigger with regular trigger (with OR\r\n> +REPLACE) create trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcA(); insert into my_table values\r\n> +(1); create or replace trigger my_trig\r\n> + after insert on my_table\r\n> + for each row execute procedure funcB(); -- OK insert into my_table\r\n> +values (1); drop trigger my_trig on my_table;\r\n> \r\n> In this test, test 1 failed because it tried to change the trigger function without\r\n> OR REPLACE, which is fine but now test 2 can continue from there, I mean we\r\n> don't need to drop the trigger at end of the\r\n> test1 and then test2 can try it with OR REPLACE syntax. This way we can\r\n> reduce the extra statement execution which is not necessary.\r\nOK. That makes sense.\r\n\r\nAttached the revised version.\r\nThe tests in this patch should not include redundancy.\r\nI checked the tests of trigger replacement for partition tables as well.\r\n\r\nHere, I did not and will not delete the comments with numbering from 1 to 8 so that\r\nother developers can check if the all cases are listed up or not easily.\r\n\r\n\r\nBest,\r\n\tTakamichi Osumi", "msg_date": "Mon, 9 Nov 2020 03:27:55 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: extension patch of CREATE OR REPLACE TRIGGER" }, { "msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> [ CREATE_OR_REPLACE_TRIGGER_v18.patch ]\n\nPushed with some mostly-minor cleanup.\n\n(I know this has been a very long slog. Congratulations for\nseeing it through.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Nov 2020 17:07:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extension patch of CREATE OR REPLACE TRIGGER" } ]
[ { "msg_contents": "Hackers,\n\nThe 2019-03 CF is almost upon us. The CF will officially start at 00:00 \nAoE (12:00 UTC) on Friday, March 1st.\n\nAny large, new patches submitted at the last moment will likely be \nlabelled as targeting PG13 and may be pushed off to the 2019-07 CF as \nwell. With the new labeling scheme it is not quite as important to keep \nPG13 patches out of this CF but we'll see how the community feels about \nthat as we go.\n\nIf you have a patch that has been Waiting on Author without any \ndiscussion since before the last CF ended then you should submit a new \npatch for the start of the 2019-03 CF.\n\nHappy Hacking!\n\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Thu, 28 Feb 2019 11:05:33 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "2019-03 Starts Tomorrow" }, { "msg_contents": "Hi David,\n\nOn Thu, Feb 28, 2019 at 11:05:33AM +0200, David Steele wrote:\n> The 2019-03 CF is almost upon us. The CF will officially start at 00:00 AoE\n> (12:00 UTC) on Friday, March 1st.\n\nThanks for the reminder.\n\n> If you have a patch that has been Waiting on Author without any discussion\n> since before the last CF ended then you should submit a new patch for the\n> start of the 2019-03 CF.\n\nSo do we have anybody willing to take the glorious position of CFM for\nthis commit fest?\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 11:48:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So do we have anybody willing to take the glorious position of CFM for\n> this commit fest?\n\nIIRC, Steele already said he'd do it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 22:47:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "On Thu, Feb 28, 2019 at 
10:47:06PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> So do we have anybody willing to take the glorious position of CFM for\n>> this commit fest?\n> \n> IIRC, Steele already said he'd do it.\n\nOkay, fine for me of course if that's the case! For what it's worth,\nI did not understand that he wanted to be CFM, I just understood that\nthis email is a reminder that the CF will begin... These are quite\nseparate things.\n\nSorry for being confused.\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 15:19:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "On 3/1/19 8:19 AM, Michael Paquier wrote:\n> On Thu, Feb 28, 2019 at 10:47:06PM -0500, Tom Lane wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>> So do we have anybody willing to take the glorious position of CFM for\n>>> this commit fest?\n>>\n>> IIRC, Steele already said he'd do it.\n> \n> Okay, fine for me of course if that's the case! For what it's worth,\n> I did not understand that he wanted to be CFM, I just understood that\n> this email is a reminder that the CF will begin... These are quite\n> separate things.\n> \n> Sorry for being confused.\nNot at all. I volunteered on the thread closing out the last CF so it \nwasn't that obvious.\n\nSo, yes, I am the CFM.\n\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Fri, 1 Mar 2019 08:46:56 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "Hi,\n\nOn 2019-02-28 11:05:33 +0200, David Steele wrote:\n> The 2019-03 CF is almost upon us. The CF will officially start at 00:00 AoE\n> (12:00 UTC) on Friday, March 1st.\n\nI'd like to move all the CF entries targeting v13 at this time. 
We might\nget a patch or five more committed in the next two days, but I doubt any\nmeaningful review is going to be done for v13 items - and we're past the\nnormal CF end anyway. Mainly want to make triaging the remaining items\na bit easier.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Apr 2019 19:15:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "Hi Andres,\n\nI’m just about to board a plane for the rest of the day. I can do that when I get home tonight or anyone else is welcome to do so. If I can get internet on the flight I’ll do it myself but I have not had much luck with that recently.\n\nRegards,\n--\n- David\n\n> On Apr 6, 2019, at 03:15, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n>> On 2019-02-28 11:05:33 +0200, David Steele wrote:\n>> The 2019-03 CF is almost upon us. The CF will officially start at 00:00 AoE\n>> (12:00 UTC) on Friday, March 1st.\n> \n> I'd like to move all the CF entries targeting v13 at this time. We might\n> get a patch or five more committed in the next two days, but I doubt any\n> meaningful review is going to be done for v13 items - and we're past the\n> normal CF end anyway. Mainly want to make triaging the remaining items\n> a bit easier.\n> \n> Greetings,\n> \n> Andres Freund\n\n\n\n", "msg_date": "Sat, 6 Apr 2019 11:48:59 +0100", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": ">>> On 2019-02-28 11:05:33 +0200, David Steele wrote:\n>>> The 2019-03 CF is almost upon us. The CF will officially start at 00:00 AoE\n>>> (12:00 UTC) on Friday, March 1st.\n>>\n>> I'd like to move all the CF entries targeting v13 at this time. We might\n>> get a patch or five more committed in the next two days, but I doubt any\n>> meaningful review is going to be done for v13 items - and we're past the\n>> normal CF end anyway. 
Mainly want to make triaging the remaining items\n>> a bit easier.\n\nOK, these have all been pushed now.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sat, 6 Apr 2019 16:52:42 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "On Sat, Apr 06, 2019 at 04:52:42PM -0400, David Steele wrote:\n> OK, these have all been pushed now.\n\nPlease note that the commit fest has been closed as a result of the\nfeature freeze, any remaining items have been moved to the next CF if\nthey still were in \"Needs Review\" state and patches waiting on author\nhave been marked as returned with feedback. Bug fixes have all been\nmoved to the next CF.\n\nHere is the final score:\nCommitted: 100.\nMoved to next CF: 82.\nWithdrawn: 5.\nRejected: 3.\nReturned with Feedback: 17.\nTotal: 207. \n--\nMichael", "msg_date": "Tue, 9 Apr 2019 13:03:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" }, { "msg_contents": "út 9. 4. 2019 v 6:04 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Sat, Apr 06, 2019 at 04:52:42PM -0400, David Steele wrote:\n> > OK, these have all been pushed now.\n>\n> Please note that the commit fest has been closed as a result of the\n> feature freeze, any remaining items have been moved to the next CF if\n> they still were in \"Needs Review\" state and patches waiting on author\n> have been marked as returned with feedback. Bug fixes have all been\n> moved to the next CF.\n>\n> Here is the final score:\n> Committed: 100.\n> Moved to next CF: 82.\n> Withdrawn: 5.\n> Rejected: 3.\n> Returned with Feedback: 17.\n> Total: 207.\n>\n\nlot of work is done,\n\ngood work\n\nRegards\n\nPavel\n\n\n> --\n> Michael\n>\n", "msg_date": "Tue, 9 Apr 2019 06:10:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2019-03 Starts Tomorrow" } ]
[ { "msg_contents": "Hackers,\n\nIt has been noted on multiple threads, such as [1], that it would be \ngood to have additional notes in the documentation to explain why \nexclusive backups have been deprecated and why they should be avoided \nwhen possible.\n\nThis patch attempts to document the limitations of the exclusive mode.\n\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/flat/ac7339ca-3718-3c93-929f-99e725d1172c%40pgmasters.net", "msg_date": "Thu, 28 Feb 2019 17:01:23 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "David Steele wrote:\n> This patch attempts to document the limitations of the exclusive mode.\n\nThanks!\n\n> + <para>\n> + The primary issue with the exclusive method is that the\n> + <filename>backup_label</filename> file is written into the data directory\n> + when <function>pg_start_backup</function> is called and remains until\n> + <function>pg_stop_backup</function> is called. If\n> + <productname>PostgreSQL</productname> or the host terminates abnormally\n\nThere should be a comma at the end of this line.\n\n> + then <filename>backup_label</filename> will be left in the data directory\n> + and <productname>PostgreSQL</productname> will not start. 
A log message\n\nYou should say \"*may* not start\", because it will if the WAL segment is still there.\n\n> + recommends that <filename>backup_label</filename> be removed if not\n> + restoring from a backup.\n> + </para>\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 28 Feb 2019 17:08:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 2/28/19 6:08 PM, Laurenz Albe wrote:\n> David Steele wrote:\n>> This patch attempts to document the limitations of the exclusive mode.\n> \n> Thanks!\n> \n>> + <para>\n>> + The primary issue with the exclusive method is that the\n>> + <filename>backup_label</filename> file is written into the data directory\n>> + when <function>pg_start_backup</function> is called and remains until\n>> + <function>pg_stop_backup</function> is called. If\n>> + <productname>PostgreSQL</productname> or the host terminates abnormally\n> \n> There should be a comma at the end of this line.\n\nFixed.\n\n>> + then <filename>backup_label</filename> will be left in the data directory\n>> + and <productname>PostgreSQL</productname> will not start. A log message\n> \n> You should say \"*may* not start\", because it will if the WAL segment is still there.\n\nYou are correct. It's still pretty likely though so I went with \"probably\".\n\nI added some extra language to the warning that gets emitted in the log. \n Users are more like to see that than the documentation.\n\nI also addressed a comment from another thread by adding pg_basebackup \nas .e.g. rather than or.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Fri, 1 Mar 2019 11:21:32 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "David Steele wrote:\n> I added some extra language to the warning that gets emitted in the log. 
\n> Users are more like to see that than the documentation.\n> \n> I also addressed a comment from another thread by adding pg_basebackup \n> as .e.g. rather than or.\n\nLooks good to me.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 01 Mar 2019 10:28:34 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "El vie., 1 de mar. de 2019 a la(s) 06:21, David Steele\n(david@pgmasters.net) escribió:\n>\n>\n> I also addressed a comment from another thread by adding pg_basebackup\n> as .e.g. rather than or.\n\nThanks David,\n\nThis looks very good!\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 1 Mar 2019 06:45:56 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "El vie., 1 de mar. de 2019 a la(s) 06:21, David Steele\n(david@pgmasters.net) escribió:\n>\n> I added some extra language to the warning that gets emitted in the log.\n> Users are more like to see that than the documentation.\n>\n> I also addressed a comment from another thread by adding pg_basebackup\n> as .e.g. rather than or.\n\nMore awake, I gave this last patch a second read. 
Wording is good now.\nNo objections there at all.\n\nI do think that paragraph 2 and 3 should be merged as it seems the\nidea on the third is a continuation of what's in the second.\n\nBut even without that change, I believe this patch is good for commit.\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 1 Mar 2019 07:41:08 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Please follow the 1-space indentation in the documentation files.\n\nI think the style of mentioning all the problems in a note before the\nactual description is a bit backwards.\n\nThe layout of the section should be:\n\n- This is what it does.\n\n- Here are some comparisons with other methods.\n\n- For this or that reason, it's deprecated.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Fri, 1 Mar 2019 12:13:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/1/19 1:13 PM, Peter Eisentraut wrote:\n> Please follow the 1-space indentation in the documentation files.\n\nWhoops. Will fix.\n\n> I think the style of mentioning all the problems in a note before the\n> actual description is a bit backwards.\n\nIn the case of an important note like this I think it should be right at \nthe top where people will see it. 
Not everyone reads to the end.\n\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Fri, 1 Mar 2019 15:07:13 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Fri, Mar 1, 2019 at 2:07 PM David Steele <david@pgmasters.net> wrote:\n\n> On 3/1/19 1:13 PM, Peter Eisentraut wrote:\n> > Please follow the 1-space indentation in the documentation files.\n>\n> Whoops. Will fix.\n>\n> > I think the style of mentioning all the problems in a note before the\n> > actual description is a bit backwards.\n>\n> In the case of an important note like this I think it should be right at\n> the top where people will see it. Not everyone reads to the end\n>\n\nMaybe have the first note say \"This method is deprecated because it has\nserious risks (see below)\" and then list the actual risks at the end?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Fri, 1 Mar 2019 14:10:57 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Magnus Hagander wrote:\n> Maybe have the first note say \"This method is deprecated because it has serious\n> risks (see below)\" and then list the actual risks at the end? \n\nGood idea. That may attract the attention of the dogs among the readers.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 01 Mar 2019 14:14:34 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/1/19 3:14 PM, Laurenz Albe wrote:\n> Magnus Hagander wrote:\n>> Maybe have the first note say \"This method is deprecated because it has serious\n>> risks (see below)\" and then list the actual risks at the end?\n> \n> Good idea. That may attract the attention of the dogs among the readers.\n\nOK, here's a new version that splits the deprecation notes from the \ndiscussion of risks. I also fixed the indentation.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Thu, 7 Mar 2019 11:33:20 +0200", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Thu, Mar 07, 2019 at 11:33:20AM +0200, David Steele wrote:\n> OK, here's a new version that splits the deprecation notes from the\n> discussion of risks. I also fixed the indentation.\n\nThe documentation part looks fine to me. 
Just one nit regarding the\nerror hint.\n\n> -\terrhint(\"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\", DataDir)));\n> +\terrhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" and add recovery options to \\\"%s/postgresql.auto.conf\\\".\\n\"\n\nHere do we really want to recommend adding options to\npostgresql.auto.conf? This depends a lot on the solution integration\nso I think that this hint could actually confuse some users because it\nimplies that they kind of *have* to do so, which is not correct. I\nwould recommend to be a bit more generic and just use \"and add\nnecessary recovery configuration\".\n\n> +\t\t\"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\\n\"\n> +\t\t\"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt cluster if restoring from a backup.\",\n\nFine for these two ones.\n\n> +\t\tDataDir, DataDir, DataDir, DataDir)));\n\n:)\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 10:35:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Thu, Mar 7, 2019 at 5:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 07, 2019 at 11:33:20AM +0200, David Steele wrote:\n> > OK, here's a new version that splits the deprecation notes from the\n> > discussion of risks. I also fixed the indentation.\n>\n> The documentation part looks fine to me. Just one nit regarding the\n> error hint.\n>\n> > - errhint(\"If you are not restoring from a backup, try removing the\n> file \\\"%s/backup_label\\\".\", DataDir)));\n> > + errhint(\"If you are restoring from a backup, touch\n> \\\"%s/recovery.signal\\\" and add recovery options to\n> \\\"%s/postgresql.auto.conf\\\".\\n\"\n>\n> Here do we really want to recommend adding options to\n> postgresql.auto.conf? 
This depends a lot on the solution integration\nso I think that this hint could actually confuse some users because it\nimplies that they kind of *have* to do so, which is not correct. I\nwould recommend to be a bit more generic and just use \"and add\nnecessary recovery configuration\".\n\n> +\t\t\"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\\n\"\n> +\t\t\"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt cluster if restoring from a backup.\",\n\nFine for these two ones.\n\n> +\t\tDataDir, DataDir, DataDir, DataDir)));\n\n:)\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 10:35:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Thu, Mar 7, 2019 at 5:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 07, 2019 at 11:33:20AM +0200, David Steele wrote:\n> > OK, here's a new version that splits the deprecation notes from the\n> > discussion of risks. I also fixed the indentation.\n>\n> The documentation part looks fine to me. Just one nit regarding the\n> error hint.\n>\n> > - errhint(\"If you are not restoring from a backup, try removing the\n> file \\\"%s/backup_label\\\".\", DataDir)));\n> > + errhint(\"If you are restoring from a backup, touch\n> \\\"%s/recovery.signal\\\" and add recovery options to\n> \\\"%s/postgresql.auto.conf\\\".\\n\"\n>\n> Here do we really want to recommend adding options to\n> postgresql.auto.conf? This depends a lot on the solution integration\n> so I think that this hint could actually confuse some users because it\n> implies that they kind of *have* to do so, which is not correct. I\n> would recommend to be a bit more generic and just use \"and add\n> necessary recovery configuration\".\n>\n\nAgreed, I think we should never tell people to \"add recovery options to\npostgresql.auto.conf\". Because they should never do that manually. If we\nwant to suggest people use postgresql.auto.conf surely they should be using\nALTER SYSTEM SET. Which of course doesn't work in this case, since\npostgresql isn't running yet.\n\nSo yeah either that, or say \"add to postgresql.conf\" without the auto?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Thu, 7 Mar 2019 18:08:10 -0800", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 2019-03-07 10:33, David Steele wrote:\n> On 3/1/19 3:14 PM, Laurenz Albe wrote:\n>> Magnus Hagander wrote:\n>>> Maybe have the first note say \"This method is deprecated because it has serious\n>>> risks (see below)\" and then list the actual risks at the end?\n>>\n>> Good idea. That may attract the attention of the dogs among the readers.\n> \n> OK, here's a new version that splits the deprecation notes from the \n> discussion of risks. I also fixed the indentation.\n\nThe documentation changes appear to continue the theme from the other\nthread that the exclusive backup mode is terrible and everyone should\nfeel bad about it. I don't think there is consensus about that.\n\nI do welcome a more precise description of the handling of backup_label\nand a better hint in the error message. I think we haven't gotten to\nthe final shape there yet, especially for the latter. I suggest\nfocusing on that.\n\nThe other changes repeat points already made in nearby documentation.\n\nI think it would be helpful to frame the documentation in a way to\nsuggest that the nonexclusive mode is more for automation and more\nsophisticated tools and the exclusive mode is more for manual or simple\nscripted use.\n\nIf we do think that the exclusive mode will be removed in PG13, then I\ndon't think we need further documentation changes. It already says it's\ndeprecated, and we don't need to justify that at length. 
But again, I'm\nnot convinced that that will happen.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Mon, 18 Mar 2019 13:33:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Mon, Mar 18, 2019 at 8:33 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The documentation changes appear to continue the theme from the other\n> thread that the exclusive backup mode is terrible and everyone should\n> feel bad about it. I don't think there is consensus about that.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Mon, 18 Mar 2019 11:47:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-03-07 10:33, David Steele wrote:\n> > On 3/1/19 3:14 PM, Laurenz Albe wrote:\n> >> Magnus Hagander wrote:\n> >>> Maybe have the first note say \"This method is deprecated bceause it has serious\n> >>> risks (see bellow)\" and then list the actual risks at the end?\n> >>\n> >> Good idea. That may attract the attention of the dogs among the readers.\n> > \n> > OK, here's a new version that splits the deprecation notes from the \n> > discussion of risks. I also fixed the indentation.\n> \n> The documentation changes appear to continue the theme from the other\n> thread that the exclusive backup mode is terrible and everyone should\n> feel bad about it. I don't think there is consensus about that.\n\nI don't view it as up for much debate. 
The exclusive backup mode is\nquite bad.\n\n> I do welcome a more precise description of the handling of backup_label\n> and a better hint in the error message. I think we haven't gotten to\n> the final shape there yet, especially for the latter. I suggest to\n> focus on that.\n\nThere isn't a way to handle the backup_label in a sane way when it's\ncreated by the server in the data directory, which is why the\nnon-exclusive mode explicitly doesn't do that.\n\n> I think it would be helpful to frame the documentation in a way to\n> suggest that the nonexclusive mode is more for automation and more\n> sophisticated tools and the exclusive mode is more for manual or simple\n> scripted use.\n\nI don't agree with this at all, that's not the reason the two exist nor\nwere they ever developed with the intent that one is for the 'simple'\ncase and one is for the 'automated' case. Trying to wedge them into\nthat framework strikes me as simply trying to sweep the serious issues\nunder the rug and I don't agree with that- if we are going to continue\nto have this, we need to make it clear what the issues are. Sadly, we\nwill still have users who don't actually read the docs that carefully\nand get bit by the exclusive backup mode because they didn't appreciate\nthe issues, but we will continue to have that until we finally remove\nthe exclusive mode.\n\n> If we do think that the exclusive mode will be removed in PG13, then I\n> don't think we need further documentation changes. It already says it's\n> deprecated, and we don't need to justify that at length. But again, I'm\n> not convinced that that will happen.\n\nThere were at least a few comments made on this thread that it wasn't\nmade clear enough in the documentation that it's deprecated. Saying\nthat we already deprecated it doesn't change that and doesn't do\nanything to actually address those concerns. 
Continuing to carry\nforward these two modes makes further progress in this area difficult\nand unlikely to happen, and that's disappointing.\n\nThanks!\n\nStephen", "msg_date": "Mon, 18 Mar 2019 23:13:32 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Mon, Mar 18, 2019 at 11:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't view it as up for much debate.\n\nIn other words, you're not willing to listen to what other people\nthink about this issue.\n\nI can't say I haven't noticed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Mon, 18 Mar 2019 23:31:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Mar 18, 2019 at 11:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't view it as up for much debate.\n> \n> In other words, you're not willing to listen to what other people\n> think about this issue.\n\nI have listened, but unfortunately the discussion just revolves around\n\"oh, it isn't actually all that bad\", which, no, isn't something that's\ngoing to sway my opinion.\n\nI suppose some might view that as being principled.\n\nThanks!\n\nStephen", "msg_date": "Mon, 18 Mar 2019 23:35:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Hi,\n\nOn 2019-03-18 23:35:07 -0400, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Mon, Mar 18, 2019 at 11:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > I don't view it as up for much debate.\n> > \n> > In other words, you're not willing to listen to 
what other people\n> > think about this issue.\n> \n> I have listened, but unfortunately the discussion just revolves around\n> \"oh, it isn't actually all that bad\", which, no, isn't something that's\n> going to sway my opinion.\n\nI think you're right about the original issue. But at some point you\njust gotta settle for not everyone agreeing with you (and me). Just\npushing forward hard won't achieve anything. Nobody says you need to\nagree with people saying \"it's not all that bad\", but that doesn't mean\nyou should just try to push forward at full speed.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Mon, 18 Mar 2019 20:40:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-03-18 23:35:07 -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > On Mon, Mar 18, 2019 at 11:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > I don't view it as up for much debate.\n> > > \n> > > In other words, you're not willing to listen to what other people\n> > > think about this issue.\n> > \n> > I have listened, but unfortunately the discussion just revolves around\n> > \"oh, it isn't actually all that bad\", which, no, isn't something that's\n> > going to sway my opinion.\n> \n> I think you're right about the original issue. But at some point you\n> just gotta settle for not everyone agreeing with you (and me). Just\n> pushing forward hard won't achieve anything. Nobody says you need to\n> agree with people saying \"it's not all that bad\", but that doesn't mean\n> you should just try to push forward at full speed.\n\nThis thread didn't exactly pop out of nowhere.. 
It's a result of the\nprior discussion, with the goal of at least improving the situation by\ndocumenting the issues and trying to make it clearer that the exclusive\nmode is deprecated to hopefully provide a chance that we'll actually\nremove it at some point in the future as being dangerous.\n\nThat's a pretty large step back from \"let's rip it out for v12\", and yet\npeople are now pushing back against even that, which is definitely\nfrustrating, and saying that I'm still pushing forward 'at full speed'\ndefinitely comes across as not really considering that this started out\nwith \"rip it out for v12\" and has now regressed back to a documentation\npatch.\n\nThanks!\n\nStephen", "msg_date": "Mon, 18 Mar 2019 23:49:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/18/19 4:33 PM, Peter Eisentraut wrote:\n> On 2019-03-07 10:33, David Steele wrote:\n>> On 3/1/19 3:14 PM, Laurenz Albe wrote:\n>>> Magnus Hagander wrote:\n>>>> Maybe have the first note say \"This method is deprecated bceause it has serious\n>>>> risks (see bellow)\" and then list the actual risks at the end?\n>>>\n>>> Good idea. That may attract the attention of the dogs among the readers.\n>>\n>> OK, here's a new version that splits the deprecation notes from the\n>> discussion of risks. I also fixed the indentation.\n> \n> The documentation changes appear to continue the theme from the other\n> thread that the exclusive backup mode is terrible and everyone should\n> feel bad about it. I don't think there is consensus about that.\n\nI wouldn't characterize documenting the limitations of the method as \nmaking people feel bad about it. If you feel my language implies that \nthen please let me know where you see it.\n\n> I do welcome a more precise description of the handling of backup_label\n> and a better hint in the error message. 
I think we haven't gotten to\n> the final shape there yet, especially for the latter. I suggest to\n> focus on that.\n\nI was planning to update the error message hint as Magnus and Michael \nhave suggested.\n\nI'm not a fan of normalizing the documentation around backup_label, i.e. \nmaking it seem like a perfectly normal thing that you need to manually \ndelete a file to get the cluster to start. This may still lead users to \nscript their way around the problem, with possible corruption as the result.\n\n> The other changes repeat points already made in nearby documentation.\n\nGranted, but in this sense they are meant to concisely describe why the \nfeature is deprecated, rather than being instructions in a user guide.\n\n> I think it would be helpful to frame the documentation in a way to\n> suggest that the nonexclusive mode is more for automation and more\n> sophisticated tools and the exclusive mode is more for manual or simple\n> scripted use.\n\nI don't think the features were developed with this in mind and I \nwouldn't want to characterize them this way now. Non-exclusive mode was \ndeveloped to address the shortcomings of exclusive mode, not as an \n\"automate-able\" version of it.\n\nDescribing any backup method as primarily \"manual\" in nature seems \ncounter-intuitive to me.\n\n> If we do think that the exclusive mode will be removed in PG13, then I\n> don't think we need further documentation changes. It already says it's\n> deprecated, and we don't need to justify that at length. But again, I'm\n> not convinced that that will happen.\n\nI think we should remove it entirely in PG13, but I'm not sure if there \nis enough support. I'll propose it again in the first CF and see where \nit goes.\n\nThis patch was intended to be a compromise based on discussion in the \nthread about removing the feature.\n\nIf this is now a bridge too far then I'm at a bit of a loss as to how to \nproceed. 
If we water it down and normalize it then we are not achieving \nthe goals of the patch as I see them -- to steer users away from this \nmethod when possible and to make it less of a shock if it goes away.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Tue, 19 Mar 2019 09:22:33 +0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/8/19 6:08 AM, Magnus Hagander wrote:\n> On Thu, Mar 7, 2019 at 5:35 PM Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> wrote:\n> \n> On Thu, Mar 07, 2019 at 11:33:20AM +0200, David Steele wrote:\n> > OK, here's a new version that splits the deprecation notes from the\n> > discussion of risks.  I also fixed the indentation.\n> \n> The documentation part looks fine to me.  Just one nit regarding the\n> error hint.\n> \n> > -     errhint(\"If you are not restoring from a backup, try\n> removing the file \\\"%s/backup_label\\\".\", DataDir)));\n> > +     errhint(\"If you are restoring from a backup, touch\n> \\\"%s/recovery.signal\\\" and add recovery options to\n> \\\"%s/postgresql.auto.conf\\\".\\n\"\n> \n> Here do we really want to recommend adding options to\n> postgresql.auto.conf?  This depends a lot on the solution integration\n> so I think that this hint could actually confuse some users because it\n> implies that they kind of *have* to do so, which is not correct.  I\n> would recommend to be a bit more generic and just use \"and add\n> necessary recovery configuration\".\n> \n> \n> Agreed, I think we should never tell people to \"add recovery options to \n> postgresql.auto.conf\". Becuase they should never do that manually. If we \n> want to suggest people use postgresql.auto.conf surely they should be \n> using ALTER SYSTEM SET. 
Which of course doesn't work in this case, since \n> postgrsql isn't running yet.\n> \n> So yeah either that, or say \"add to postgresql.conf\" without the auto?\n\nI went with Michael's suggestion. Attached is a new patch.\n\nI also think we should set a flag and throw the error below this if/else \nblock. This is a rather large message and maintaining two copies of it \nis not ideal.\n\nPlease note that there have been objections to the patch later in this \nthread by Peter and Robert. I'm not very interested in watering down \nthe documentation changes as Peter suggests, but I think at the very \nleast we should commit the added hints in the error message. For many \nusers this error will be their first point of contact with the \nbackup_label issue/behavior.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Wed, 20 Mar 2019 16:29:35 +0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Wed, Mar 20, 2019 at 04:29:35PM +0400, David Steele wrote:\n> Please note that there have been objections to the patch later in this\n> thread by Peter and Robert. I'm not very interested in watering down the\n> documentation changes as Peter suggests, but I think at the very least we\n> should commit the added hints in the error message. For many users this\n> error will be their first point of contact with the backup_label\n> issue/behavior.\n\nThe updates of the log message do not imply anything negative as I\nread them as they mention to not remove the backup_label file. So\nwhile we don't have an agreement about the docs, the log messages may\nbe able to be committed? Peter? 
Robert?\n\n\"will result in a corruptED cluster\" is more correct?\n--\nMichael", "msg_date": "Wed, 20 Mar 2019 22:00:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Mon, Mar 18, 2019 at 1:33 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-07 10:33, David Steele wrote:\n> > On 3/1/19 3:14 PM, Laurenz Albe wrote:\n> I think it would be helpful to frame the documentation in a way to\n> suggest that the nonexclusive mode is more for automation and more\n> sophisticated tools and the exclusive mode is more for manual or simple\n> scripted use.\n>\n\nBut that would be factually incorrect and backwards, so it seems like a\nterrible idea, at least when it comes to manual. If you are doing it\nmanually, it's a lot *easier* to do it right with the non-exclusive mode,\nbecause you can easily keep one psql and one shell open. And that's safe.\n\nThe only real use case that has been put forward for the exclusive backup\nmode is when the backups are done through a script, and that script is\nlimited to only use something like bash (and can't use a scripting language\nlike perl or python or powershell or other more advanced scripting\nlanguages).\n\nAnd I don't think exclusive mode should be suggested for \"simple scripts\"\neither, since it's anything but -- scripts using the exclusive mode\ncorrectly will be anything but simple. A better term there would be to\nsingle out shellscripts, I'd suggest, if we want to single something out.\nOr more generic, for \"scripting languages incapable of keeping a connection\nopen across multiple lines\" or something?\n\nWe can certainly keep it, but let's not tell people something is simple\nwhen it's not.\n\n\nIf we do think that the exclusive mode will be removed in PG13, then I\n> don't think we need further documentation changes. 
It already says it's\n> deprecated, and we don't need to justify that at length. But again, I'm\n> not convinced that that will happen.\n>\n\nBut the complaints before was that the deprecation currently in the\ndocumentation was not enough to remove it....\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\n", "msg_date": "Wed, 20 Mar 2019 14:42:21 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Wed, Mar 20, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Mar 20, 2019 at 04:29:35PM +0400, David Steele wrote:\n> > Please note that there have been objections to the patch later in this\n> > thread by Peter and Robert. I'm not very interested in watering down the\n> > documentation changes as Peter suggests, but I think at the very least we\n> > should commit the added hints in the error message. For many users this\n> > error will be their first point of contact with the backup_label\n> > issue/behavior.\n>\n> The updates of the log message do not imply anything negative as I\n> read them as they mention to not remove the backup_label file. So\n> while we don't have an agreement about the docs, the log messages may\n> be able to be committed? Peter? Robert?\n>\n> \"will result in a corruptED cluster\" is more correct?\n\nI really like the proposed changes to the ereport() text. I think the\n\"Be careful\" hint is a really helpful way of phrasing it. 
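To make the branch under discussion concrete, here is a toy model in Python of which piece of advice applies to which state of the data directory. This is purely illustrative -- it is not the server's actual C logic, and the advice strings are paraphrased from the proposed patch; only the file names backup_label and recovery.signal are taken from the thread:

```python
def choose_hint(has_backup_label: bool, has_recovery_signal: bool) -> str:
    """Toy model of the hint selection discussed above -- not the real
    server code.  It only maps data-directory states to the advice the
    proposed error message gives."""
    if not has_backup_label:
        # Nothing to warn about; normal startup.
        return "normal startup"
    if has_recovery_signal:
        # Restoring from a backup: the label file must be kept.
        return ("keep backup_label; removing it while restoring from "
                "a backup will result in a corrupt cluster")
    # No recovery requested: most likely a crash during an exclusive backup.
    return ("if you are restoring from a backup, create recovery.signal "
            "and add the required recovery configuration; if you are not "
            "restoring, backup_label may be removed")
```

Seen this way, the "Be careful" wording attaches to the one state where removing the file is catastrophic, which is exactly the distinction the hint is trying to draw.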
I think\n\"corrupt\" as the patch has it is slightly better than \"corrupted\".\nObviously, we have to make the updates for recovery.signal no matter\nwhat, and you could argue that part should be its own commit, but I\nlike all of the changes.\n\nI'm not too impressed with the documentation changes. A lot of the\ninformation being added is already present somewhere in that very same\nsection. It's reasonable to revise the section so that the dangers\nstand out more clearly, but it doesn't seem good to revise it in a way\nthat ends up duplicating the existing information. Here's my\nsuggestion -- ditch the note at the end of the section and make the\none at the beginning read like this:\n\nThe exclusive backup method is deprecated and should be avoided.\nPrior to <productname>PostgreSQL</productname> 9.6, this was the only\nlow-level method available, but it is now recommended that all users\nupgrade their scripts to use non-exclusive backups.\n\nThen, revise the first paragraph like this:\n\nThe process for an exclusive backup is mostly the same as for a\nnon-exclusive one, but it differs in a few key steps. This type of\nbackup can only be taken on a primary and does not allow concurrent\nbackups. Moreover, because it writes a backup_label file on the\nmaster, it can cause the master to fail to restart automatically after\na crash. On the other hand, the erroneous removal of a backup_label\nfile from a backup or standby is a common mistake which can result\nin serious data corruption. 
If it is necessary to use this method,\nthe following steps may be used.\n\nLater, where it says:\n\n Note that if the server crashes during the backup it may not be\n possible to restart until the <literal>backup_label</literal>\nfile has been\n manually deleted from the <envar>PGDATA</envar> directory.\n\nChange it to read:\n\nAs noted above, if the server crashes during the backup it may not be\npossible to restart until the <literal>backup_label</literal> file has\nbeen manually deleted from the <envar>PGDATA</envar> directory. Note\nthat it is very important never to remove the\n<literal>backup_label</literal> file when restoring a backup, because\nthis will result in corruption. Confusion about the circumstances\nunder which it is appropriate to remove this file is a common cause of\ndata corruption when using this method; be very certain that you\nremove the file only on an existing master and never when building a\nstandby or restoring a backup, even if you are building a standby that\nwill subsequently be promoted to a new master.\n\nI also think we should revise this thoroughly terrible advice:\n\n If you wish to place a time limit on the execution of\n <function>pg_stop_backup</function>, set an appropriate\n <varname>statement_timeout</varname> value, but make note that if\n <function>pg_stop_backup</function> terminates because of this your backup\n may not be valid.\n\nThat seems awful, not only because it encourages people to do that\nparticular thing and end up leaving the server in backup mode, but\nalso because it doesn't clearly articulate the extreme importance of\nmaking sure that the server is not left in backup mode. So I would\npropose that we strike that text entirely and replace it with\nsomething like:\n\nWhen using exclusive backup mode, it is absolutely imperative to make\nsure that <function>pg_stop_backup</function> completes successfully\nat the end of the backup. 
Even if the backup itself fails, for\nexample due to lack of disk space, failure to call\n<function>pg_stop_backup</function> will leave the server in backup\nmode indefinitely, causing future backups to fail and increasing the\nrisk of a restart during a time when <literal>backup_label</literal>\nexists.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 20 Mar 2019 10:31:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Hi Robert,\n\nOn 3/20/19 6:31 PM, Robert Haas wrote:\n> On Wed, Mar 20, 2019 at 9:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Wed, Mar 20, 2019 at 04:29:35PM +0400, David Steele wrote:\n>>> Please note that there have been objections to the patch later in this\n>>> thread by Peter and Robert. I'm not very interested in watering down the\n>>> documentation changes as Peter suggests, but I think at the very least we\n>>> should commit the added hints in the error message. For many users this\n>>> error will be their first point of contact with the backup_label\n>>> issue/behavior.\n>>\n>> The updates of the log message do not imply anything negative as I\n>> read them as they mention to not remove the backup_label file. So\n>> while we don't have an agreement about the docs, the log messages may\n>> be able to be committed? Peter? Robert?\n>>\n>> \"will result in a corruptED cluster\" is more correct?\n> \n> I really like the proposed changes to the ereport() text. I think the\n> \"Be careful\" hint is a really helpful way of phrasing it. 
I think\n> \"corrupt\" as the patch has it is slightly better than \"corrupted\".\n> Obviously, we have to make the updates for recovery.signal no matter\n> what, and you could argue that part should be its own commit, but I\n> like all of the changes.\n\nCool.\n\n> I'm not too impressed with the documentation changes. A lot of the\n> information being added is already present somewhere in that very same\n> section. It's reasonable to revise the section so that the dangers\n> stand out more clearly, but it doesn't seem good to revise it in a way\n> that ends up duplicating the existing information. \n\nOK.\n\n> Here's my\n> suggestion -- ditch the note at the end of the section and make the\n> one at the beginning read like this:\n> \n> The exclusive backup method is deprecated and should be avoided.\n> Prior to <productname>PostgreSQL</productname> 9.6, this was the only\n> low-level method available, but it is now recommended that all users\n> upgrade their scripts to use non-exclusive backups.\n> \n> Then, revise the first paragraph like this:\n> \n> The process for an exclusive backup is mostly the same as for a\n> non-exclusive one, but it differs in a few key steps. This type of\n> backup can only be taken on a primary and does not allow concurrent\n> backups. Moreover, because it writes a backup_label file on the\n> master, it can cause the master to fail to restart automatically after\n> a crash. On the other hand, the erroneous removal of a backup_label\n> file from a backup or standby is a common mistake which can can result\n> in serious data corruption. If it is necessary to use this method,\n> the following steps may be used.\n\nThis works for me as I feel like the cautions here (and below) are still \nstrongly worded. 
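Since all of these cautions revolve around what backup_label actually is, a small sketch may help readers following along. The sample below mimics the file's "KEY: value" format -- the field names follow what pg_start_backup writes, but the LSNs, WAL segment name, time, and label are invented for the example:

```python
# Illustrative sample only -- field names mimic a real backup_label;
# the locations, WAL segment name, time, and label are made up.
SAMPLE_BACKUP_LABEL = """\
START WAL LOCATION: 0/2000028 (file 000000010000000000000002)
CHECKPOINT LOCATION: 0/2000060
BACKUP METHOD: pg_start_backup
BACKUP FROM: master
START TIME: 2019-03-20 10:00:00 UTC
LABEL: nightly
"""

def parse_backup_label(text: str) -> dict:
    """Split the file's 'KEY: value' lines into a dict."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(": ")
        if sep:  # skip blank or malformed lines
            fields[key] = value
    return fields
```

The reason deleting this file from a restored backup is so dangerous is that recovery uses the CHECKPOINT LOCATION recorded here, taken at the start of the backup, rather than the later one in pg_control; without it the cluster starts replay from the wrong point and can come up silently inconsistent.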
Peter?\n\n> Later, where it says:\n> \n> Note that if the server crashes during the backup it may not be\n> possible to restart until the <literal>backup_label</literal>\n> file has been\n> manually deleted from the <envar>PGDATA</envar> directory.\n> \n> Change it to read:\n> \n> As noted above, if the server crashes during the backup it may not be\n> possible to restart until the <literal>backup_label</literal> file has\n> been manually deleted from the <envar>PGDATA</envar> directory. Note\n> that it is very important to never to remove the\n> <literal>backup_label</literal> file when restoring a backup, because\n> this will result in corruption. Confusion about the circumstances\n> under which it is appropriate to remove this file is a common cause of\n> data corruption when using this method; be very certain that you\n> remove the file only on an existing master and never when building a\n> standby or restoring a backup, even if you are building a standby that\n> will subsequently be promoted to a new master.\n\nTechnically we are into repetition here, but I'm certainly OK with it as \nthis point bears repeating.\n\n> I also think we should revise this thoroughly terrible advice:\n> \n> If you wish to place a time limit on the execution of\n> <function>pg_stop_backup</function>, set an appropriate\n> <varname>statement_timeout</varname> value, but make note that if\n> <function>pg_stop_backup</function> terminates because of this your backup\n> may not be valid.\n> \n> That seems awful, not only because it encourages people to do that\n> particular thing and end up leaving the server in backup mode, but\n> also because it doesn't clearly articulate the extreme importance of\n> making sure that the server is not left in backup mode. 
So I would\n> propose that we strike that text entirely and replace it with\n> something like:\n> \n> When using exclusive backup mode, it is absolutely imperative to make\n> sure that <function>pg_stop_backup</function> completes successfully\n> at the end of the backup. Even if the backup itself fails, for\n> example due to lack of disk space, failure to call\n> <function>pg_stop_backup</function> will leave the server in backup\n> mode indefinitely, causing future backups to fail and increasing the\n> risk of a restart during a time when <literal>backup_label</literal>\n> exists.\n\nIt's also pretty important that exclusive backups complete successfully \nsince backup_label is returned from pg_stop_backup() -- the backup will \ndefinitely be corrupt without that if there was a checkpoint during the \nbackup. But, yeah, leaving a backup_label around for a long time \nincreases the restart risks a lot.\n\nI'll revise the patch if Peter thinks this approach looks reasonable.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net\n\n", "msg_date": "Wed, 20 Mar 2019 21:08:53 +0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "Hi Robert,\n\nOn 3/20/19 5:08 PM, David Steele wrote:\n> \n> I'll revise the patch if Peter thinks this approach looks reasonable.\n\nHopefully Peter's silence can be interpreted as consent. Probably just \nbusy, though.\n\nI used your suggestions with minor editing. 
After some reflection, I \nagree that the inline warnings are likely to be more effective than \nsomething at the end, at least for those working on a new implementation.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Fri, 29 Mar 2019 10:27:11 +0000", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 2019-03-20 14:42, Magnus Hagander wrote:\n> But that would be factually incorrect and backwards, so it seems like a\n> terrible idea, at least when it comes to manual. If you are doing it\n> manually, it's a lot *easier* to do it right with the non-exclusive\n> mode, because you can easily keep one psql and one shell open. And\n> that's safe.\n\nThe scenario I have in mind is, a poorly maintained server, nothing\ninstalled, can't install anything (no internet connection, license\nexpired), flaky network, you fear it's going to fail soon, you need to\ntake a backup. The simplest procedure would appear to be: start backup\nmode, copy files away, stop backup mode. Anything else that involves\nholding a session open over there for the whole time is way more fragile\nunless proper preparations have been made (and even then). So I don't\nknow what you want to call that scenario, but I would feel more\ncomfortable having these basic tools available in a bind.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 Mar 2019 12:58:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Fri, Mar 29, 2019 at 6:27 AM David Steele <david@pgmasters.net> wrote:\n> I used your suggestions with minor editing. 
After some reflection, I\n> agree that the inline warnings are likely to be more effective than\n> something at the end, at least for those working on a new implementation.\n\nI'm glad we could agree on something. Committed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 29 Mar 2019 08:18:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Fri, Mar 29, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Mar 29, 2019 at 6:27 AM David Steele <david@pgmasters.net> wrote:\n> > I used your suggestions with minor editing. After some reflection, I\n> > agree that the inline warnings are likely to be more effective than\n> > something at the end, at least for those working on a new implementation.\n>\n> I'm glad we could agree on something. Committed.\n>\n\n+1, thanks.\n\nMinor nitpick:\n+ backup can only be taken on a primary and does not allow concurrent\n+ backups. Moreover, because it writes a backup_label file on the\n+ master, it can cause the master to fail to restart automatically after\n\nLet's be consistent in if we call it a primary or a master, at least within\nthe same paragraph :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Mar 29, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Mar 29, 2019 at 6:27 AM David Steele <david@pgmasters.net> wrote:\n> I used your suggestions with minor editing.  After some reflection, I\n> agree that the inline warnings are likely to be more effective than\n> something at the end, at least for those working on a new implementation.\n\nI'm glad we could agree on something.  
Committed.+1, thanks.Minor nitpick:+     backup can only be taken on a primary and does not allow concurrent+     backups.  Moreover, because it writes a backup_label file on the+     master, it can cause the master to fail to restart automatically afterLet's be consistent in if we call it a primary or a master, at least within the same paragraph :) --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 29 Mar 2019 13:25:07 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 2019-03-29 11:27, David Steele wrote:\n>> I'll revise the patch if Peter thinks this approach looks reasonable.\n> \n> Hopefully Peter's silence can be interpreted as consent. Probably just \n> busy, though.\n> \n> I used your suggestions with minor editing. After some reflection, I \n> agree that the inline warnings are likely to be more effective than \n> something at the end, at least for those working on a new implementation.\n\nIt looks very sensible now, I think. Thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 Mar 2019 13:30:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/29/19 11:58 AM, Peter Eisentraut wrote:\n> On 2019-03-20 14:42, Magnus Hagander wrote:\n>> But that would be factually incorrect and backwards, so it seems like a\n>> terrible idea, at least when it comes to manual. If you are doing it\n>> manually, it's a lot *easier* to do it right with the non-exclusive\n>> mode, because you can easily keep one psql and one shell open. 
And\n>> that's safe.\n> \n> The scenario I have in mind is, a poorly maintained server, nothing\n> installed, can't install anything (no internet connection, license\n> expired), flaky network, you fear it's going to fail soon, you need to\n> take a backup. The simplest procedure would appear to be: start backup\n> mode, copy files away, stop backup mode. Anything else that involves\n> holding a session open over there for the whole time is way more fragile\n> unless proper preparations have been made (and even then). So I don't\n> know what you want to call that scenario, but I would feel more\n> comfortable having these basic tools available in a bind.\n\nI would argue the best thing in this scenario is to use pg_basebackup. \nIt's a solid tool and likely far better than any script the user might \ncook up on the spot.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 29 Mar 2019 12:33:29 +0000", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/29/19 12:25 PM, Magnus Hagander wrote:\n> On Fri, Mar 29, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Fri, Mar 29, 2019 at 6:27 AM David Steele <david@pgmasters.net\n> <mailto:david@pgmasters.net>> wrote:\n> > I used your suggestions with minor editing.  After some reflection, I\n> > agree that the inline warnings are likely to be more effective than\n> > something at the end, at least for those working on a new\n> implementation.\n> \n> I'm glad we could agree on something.  Committed.\n\nMe, too. Thanks!\n\n> Minor nitpick:\n> +     backup can only be taken on a primary and does not allow concurrent\n> +     backups.  
Moreover, because it writes a backup_label file on the\n> +     master, it can cause the master to fail to restart automatically after\n> \n> Let's be consistent in if we call it a primary or a master, at least \n> within the same paragraph :)\n\nAgreed, let's stick with \"primary\".\n\nAre we planning to back-patch this? The deprecation was added to the\ndocs in 9.6 -- I think these clarifications would be helpful.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 29 Mar 2019 12:45:24 +0000", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Fri, Mar 29, 2019 at 8:45 AM David Steele <david@pgmasters.net> wrote:\n> Are we planning to back-patch this? The deprecation was added to the\n> docs in 9.6 -- I think these clarifications would be helpful.\n\nI wasn't planning too, but I guess we could consider it. I'd be more\ninclined to back-patch the documentation changes than the message text\nchanges, but what do other people think?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 29 Mar 2019 08:46:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 3/29/19 12:46 PM, Robert Haas wrote:\n> On Fri, Mar 29, 2019 at 8:45 AM David Steele <david@pgmasters.net> wrote:\n>> Are we planning to back-patch this? The deprecation was added to the\n>> docs in 9.6 -- I think these clarifications would be helpful.\n> \n> I wasn't planning too, but I guess we could consider it. I'd be more\n> inclined to back-patch the documentation changes than the message text\n> changes, but what do other people think?\n\nI think we should definitely do the docs, I'm 50% on the message. 
You \ncould argue that is is a behavioral change, but it is pretty important info.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 29 Mar 2019 12:49:07 +0000", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On Fri, Mar 29, 2019 at 1:49 PM David Steele <david@pgmasters.net> wrote:\n\n> On 3/29/19 12:46 PM, Robert Haas wrote:\n> > On Fri, Mar 29, 2019 at 8:45 AM David Steele <david@pgmasters.net>\n> wrote:\n> >> Are we planning to back-patch this? The deprecation was added to the\n> >> docs in 9.6 -- I think these clarifications would be helpful.\n> >\n> > I wasn't planning too, but I guess we could consider it. I'd be more\n> > inclined to back-patch the documentation changes than the message text\n> > changes, but what do other people think?\n>\n> I think we should definitely do the docs, I'm 50% on the message. You\n> could argue that is is a behavioral change, but it is pretty important\n> info.\n>\n\n+1 on the docs.\n\nIs the changes to the messages going to cause issues or weirdness for\ntranslators? That would be a reason not to backpatch it. Without that, I'm\nleaning towards backpatching it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Mar 29, 2019 at 1:49 PM David Steele <david@pgmasters.net> wrote:On 3/29/19 12:46 PM, Robert Haas wrote:\n> On Fri, Mar 29, 2019 at 8:45 AM David Steele <david@pgmasters.net> wrote:\n>> Are we planning to back-patch this?  The deprecation was added to the\n>> docs in 9.6 -- I think these clarifications would be helpful.\n> \n> I wasn't planning too, but I guess we could consider it.  
I'd be more\n> inclined to back-patch the documentation changes than the message text\n> changes, but what do other people think?\n\nI think we should definitely do the docs, I'm 50% on the message.  You \ncould argue that is is a behavioral change, but it is pretty important info.+1 on the docs.Is the changes to the messages going to cause issues or weirdness for translators? That would be a reason not to backpatch it. Without that, I'm leaning towards backpatching it. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 29 Mar 2019 13:54:40 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" }, { "msg_contents": "On 2019-03-29 13:54, Magnus Hagander wrote:\n> Is the changes to the messages going to cause issues or weirdness for\n> translators? That would be a reason not to backpatch it. Without that,\n> I'm leaning towards backpatching it. \n\nNote that the messages refer to recovery.signal, so a backpatch would\nhave to rephrase the message.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 Mar 2019 13:56:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add exclusive backup deprecation notes to documentation" } ]
[ { "msg_contents": "Greetings,\n\nI'm pleased to announce that we have been accepted by Google to\nparticipate in the Summer of Code (GSoC) 2019 program. This will be the\n12th time that the PostgreSQL Project will provide mentorship for\nstudents to help develop new features for PostgreSQL. We have the chance\nto accept student projects that will be developed from May to August.\n\nIf you are a student, and want to participate in this year's GSoC,\nplease watch this Wiki page: https://wiki.postgresql.org/wiki/GSoC\n\nIf you are interested in mentoring a student, you can add your own idea\nto the project list. Please reach out to the PG GSoC admins, listed\nhere: https://wiki.postgresql.org/wiki/GSoC\n\nAnd finally, we ask everyone to reach out to students and universities\nand let them know about GSoC.\n\nThanks!\n\nStephen\nPostgreSQL GSoC 2019 Administrator", "msg_date": "Thu, 28 Feb 2019 10:22:13 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "PostgreSQL Participates in GSoC 2019!" }, { "msg_contents": "Hi,\n\nI have applied and submitted an initial draft of my proposal for GSoC 2019\nthrough the Summer of Code site. The project is titled 'pgAdmin4 Graphing\nQuery Tool'.\n\nI would like to get some feedback for the same so that I can improve on\nmaking the final proposal better. The link to the draft is - GSoC 2019\nProposal\n<https://docs.google.com/document/d/10x_c3HsOEQuDbL6zB8fjk6_9BqP8L1zvhe62h613Ruc/edit?usp=sharing>\n\nThanks,\nRitom\n\nOn Thu, 28 Feb, 2019, 8:52 PM Stephen Frost, <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> I'm pleased to announce that we have been accepted by Google to\n> participate in the Summer of Code (GSoC) 2019 program. This will be the\n> 12th time that the PostgreSQL Project will provide mentorship for\n> students to help develop new features for PostgreSQL. 
We have the chance\n> to accept student projects that will be developed from May to August.\n>\n> If you are a student, and want to participate in this year's GSoC,\n> please watch this Wiki page: https://wiki.postgresql.org/wiki/GSoC\n>\n> If you are interested in mentoring a student, you can add your own idea\n> to the project list. Please reach out to the PG GSoC admins, listed\n> here: https://wiki.postgresql.org/wiki/GSoC\n>\n> And finally, we ask everyone to reach out to students and universities\n> and let them know about GSoC.\n>\n> Thanks!\n>\n> Stephen\n> PostgreSQL GSoC 2019 Administrator\n>\n\nHi,I have applied and submitted an initial draft of my proposal for GSoC 2019 through the Summer of Code site. The project is titled 'pgAdmin4 Graphing Query Tool'.I would like to get some feedback for the same so that I can improve on making the final proposal better. The link to the draft is - GSoC 2019 ProposalThanks,RitomOn Thu, 28 Feb, 2019, 8:52 PM Stephen Frost, <sfrost@snowman.net> wrote:Greetings,\n\nI'm pleased to announce that we have been accepted by Google to\nparticipate in the Summer of Code (GSoC) 2019 program. This will be the\n12th time that the PostgreSQL Project will provide mentorship for\nstudents to help develop new features for PostgreSQL. We have the chance\nto accept student projects that will be developed from May to August.\n\nIf you are a student, and want to participate in this year's GSoC,\nplease watch this Wiki page: https://wiki.postgresql.org/wiki/GSoC\n\nIf you are interested in mentoring a student, you can add your own idea\nto the project list. 
Please reach out to the PG GSoC admins, listed\nhere: https://wiki.postgresql.org/wiki/GSoC\n\nAnd finally, we ask everyone to reach out to students and universities\nand let them know about GSoC.\n\nThanks!\n\nStephen\nPostgreSQL GSoC 2019 Administrator", "msg_date": "Tue, 26 Mar 2019 22:09:33 +0530", "msg_from": "Ritom Sonowal <ritomsonowal99@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Participates in GSoC 2019!" } ]
[ { "msg_contents": "Hi\n\none user of plpgsql_check reported interesting error message\n\ncreate or replace function omega.foo(a int)\nreturns int as $$\ndeclare offset integer := 0;\nbegin\n return offset + 1;\nend;\n$$ language plpgsql;\n\npostgres=# select omega.foo(10);\nERROR: query \"SELECT offset + 1\" returned 0 columns\nCONTEXT: PL/pgSQL function omega.foo(integer) line 4 at RETURN\n\nMaybe we should to disallow variables named as sql reserved keyword.\n\nRegards\n\nPavel\n\nHione user of plpgsql_check reported interesting error messagecreate or replace function omega.foo(a int)\nreturns int as $$\ndeclare offset integer := 0;\nbegin \n  return offset + 1;\nend;\n$$ language plpgsql;postgres=# select omega.foo(10);\nERROR:  query \"SELECT offset + 1\" returned 0 columns\nCONTEXT:  PL/pgSQL function omega.foo(integer) line 4 at RETURN\nMaybe we should to disallow variables named as sql reserved keyword.RegardsPavel", "msg_date": "Thu, 28 Feb 2019 18:45:23 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "plpgsql variable named as SQL keyword" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Maybe we should to disallow variables named as sql reserved keyword.\n\nThat would just break existing code. There are lots of other\nexamples where you can get away with such things.\n\nWe've expended quite a lot of sweat to avoid reserving more names than\nwe had to in plpgsql. I'm disinclined to throw that away just because\nsomebody found an error message confusing. It's not like reserving\n\"offset\" would cause this case to work.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 28 Feb 2019 13:20:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plpgsql variable named as SQL keyword" }, { "msg_contents": "čt 28. 2. 
2019 v 19:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > Maybe we should to disallow variables named as sql reserved keyword.\n>\n> That would just break existing code. There are lots of other\n> examples where you can get away with such things.\n>\n> We've expended quite a lot of sweat to avoid reserving more names than\n> we had to in plpgsql. I'm disinclined to throw that away just because\n> somebody found an error message confusing. It's not like reserving\n> \"offset\" would cause this case to work.\n>\n\npartially I solved it with new warning in plpgsql_check\n\nhttps://github.com/okbob/plpgsql_check/commit/5b9ef57d570c1d11fb92b9cff76655a03767f662\n\npostgres=# select * from plpgsql_check_function('omega.foo(int, int,\nint)');\n+-------------------------------------------------------------------------------+\n\n|\nplpgsql_check_function |\n+-------------------------------------------------------------------------------+\n\n| warning:00000:3:statement block:name of variable \"offset\" is reserved\nkeyword |\n| Detail: The reserved keyword was used as variable\nname. |\n| error:42601:4:RETURN:query \"SELECT offset + 1\" returned 0\ncolumns |\n+-------------------------------------------------------------------------------+\n\n(3 rows)\n\nI understand so it has not simple solution (or had not solution). I\nreported it +/- for record.\n\nThank you for reply\n\nPavel\n\n\n> regards, tom lane\n>\n\nčt 28. 2. 2019 v 19:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Maybe we should to disallow variables named as sql reserved keyword.\n\nThat would just break existing code.  There are lots of other\nexamples where you can get away with such things.\n\nWe've expended quite a lot of sweat to avoid reserving more names than\nwe had to in plpgsql.  I'm disinclined to throw that away just because\nsomebody found an error message confusing.  
It's not like reserving\n\"offset\" would cause this case to work.partially I solved it with new warning in plpgsql_checkhttps://github.com/okbob/plpgsql_check/commit/5b9ef57d570c1d11fb92b9cff76655a03767f662postgres=# select * from plpgsql_check_function('omega.foo(int, int, int)');\n+-------------------------------------------------------------------------------+\n|                            plpgsql_check_function                             |\n+-------------------------------------------------------------------------------+\n| warning:00000:3:statement block:name of variable \"offset\" is reserved keyword |\n| Detail: The reserved keyword was used as variable name.                       |\n| error:42601:4:RETURN:query \"SELECT offset + 1\" returned 0 columns             |\n+-------------------------------------------------------------------------------+\n(3 rows)\nI understand so it has not simple solution (or had not solution). I reported it +/- for record. Thank you for replyPavel\n\n                        regards, tom lane", "msg_date": "Thu, 28 Feb 2019 19:25:20 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plpgsql variable named as SQL keyword" } ]
[ { "msg_contents": "Dear SMEs\n\nI have finally decided to move forward after great hospitality in Version\n9.2.24 :-)\n\nFirst i attempted to upgrade from 9.2.24 to 10.7, but its failed with\nfollowing error during Check Mode.\n\ncould not load library \"$libdir/hstore\": ERROR: could not access file\n\"$libdir/hstore\": No such file or directory\ncould not load library \"$libdir/adminpack\": ERROR: could not access file\n\"$libdir/adminpack\": No such file or directory\ncould not load library \"$libdir/uuid-ossp\": ERROR: could not access file\n\"$libdir/uuid-ossp\": No such file or directory\n\nObservation : the above Libraries are present in 9.2 whereas its mising in\n10.7. So i decided to go with lower version.\n\nSecond i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but its\nfailed with following error during Check Mode.\n\ncould not load library \"$libdir/pg_reorg\":\nERROR: could not access file \"$libdir/pg_reorg\": No such file or directory\n\nObservation : In this case , pg_reorg is not present on both Source and\nTarget . But strange its failing.\n\n\nMethod Used : pg_upgrade\n\nCould you please share some light here to get rid of library issue .\n\nThanks, in advance ,\nRaju\n\nDear SMEsI have finally decided to move forward after great hospitality in Version 9.2.24 :-)First i attempted to upgrade from 9.2.24 to 10.7, but its failed with following error during Check Mode.could not load library \"$libdir/hstore\": ERROR:  could not access file \"$libdir/hstore\": No such file or directorycould not load library \"$libdir/adminpack\": ERROR:  could not access file \"$libdir/adminpack\": No such file or directorycould not load library \"$libdir/uuid-ossp\": ERROR:  could not access file \"$libdir/uuid-ossp\": No such file or directoryObservation : the above Libraries are present in 9.2 whereas its mising in 10.7. 
So i decided to go with lower version.Second  i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but its failed with following error during Check Mode.could not load library \"$libdir/pg_reorg\":ERROR:  could not access file \"$libdir/pg_reorg\": No such file or directoryObservation : In this case , pg_reorg is not present on both Source and Target . But strange its failing.Method Used : pg_upgradeCould you please share some light here to get rid of  library issue .Thanks, in advance ,Raju", "msg_date": "Thu, 28 Feb 2019 10:13:58 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi\nPlease try with below commands.\n\nLet we want to upgrade v6 to v11.\nNote: I installed my binary inside result folder.\n\nexport OLDCLUSTER=./6_EDBAS/EDBAS/result\nexport NEWCLUSTER=./11_EDBAS/EDBAS/result\n./11_EDBAS/EDBAS/result/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin\n--new-bindir=$NEWCLUSTER/bin --old-datadir=$OLDCLUSTER/bin/data\n--new-datadir=$NEWCLUSTER/bin/data\n\nNote: old server should be in running state and new server should not be in\nrunning state.\n\nThanks and Regards\nMahendra\n\nOn Thu, 28 Feb 2019 at 23:44, Perumal Raj <perucinci@gmail.com> wrote:\n\n> Dear SMEs\n>\n> I have finally decided to move forward after great hospitality in Version\n> 9.2.24 :-)\n>\n> First i attempted to upgrade from 9.2.24 to 10.7, but its failed with\n> following error during Check Mode.\n>\n> could not load library \"$libdir/hstore\": ERROR: could not access file\n> \"$libdir/hstore\": No such file or directory\n> could not load library \"$libdir/adminpack\": ERROR: could not access file\n> \"$libdir/adminpack\": No such file or directory\n> could not load library \"$libdir/uuid-ossp\": ERROR: could not access file\n> \"$libdir/uuid-ossp\": No such file or directory\n>\n> Observation : the above Libraries are present in 9.2 whereas its mising in\n> 10.7. 
So i decided to go with lower version.\n>\n> Second i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but\n> its failed with following error during Check Mode.\n>\n> could not load library \"$libdir/pg_reorg\":\n> ERROR: could not access file \"$libdir/pg_reorg\": No such file or directory\n>\n> Observation : In this case , pg_reorg is not present on both Source and\n> Target . But strange its failing.\n>\n>\n> Method Used : pg_upgrade\n>\n> Could you please share some light here to get rid of library issue .\n>\n> Thanks, in advance ,\n> Raju\n>\n>\n\nHiPlease try with below commands.Let we want to upgrade v6 to v11.Note: I installed my binary inside result folder.export OLDCLUSTER=./6_EDBAS/EDBAS/resultexport NEWCLUSTER=./11_EDBAS/EDBAS/result./11_EDBAS/EDBAS/result/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin --new-bindir=$NEWCLUSTER/bin --old-datadir=$OLDCLUSTER/bin/data --new-datadir=$NEWCLUSTER/bin/dataNote: old server should be in running state and new server should not be in running state.Thanks and RegardsMahendraOn Thu, 28 Feb 2019 at 23:44, Perumal Raj <perucinci@gmail.com> wrote:Dear SMEsI have finally decided to move forward after great hospitality in Version 9.2.24 :-)First i attempted to upgrade from 9.2.24 to 10.7, but its failed with following error during Check Mode.could not load library \"$libdir/hstore\": ERROR:  could not access file \"$libdir/hstore\": No such file or directorycould not load library \"$libdir/adminpack\": ERROR:  could not access file \"$libdir/adminpack\": No such file or directorycould not load library \"$libdir/uuid-ossp\": ERROR:  could not access file \"$libdir/uuid-ossp\": No such file or directoryObservation : the above Libraries are present in 9.2 whereas its mising in 10.7. 
So i decided to go with lower version.Second  i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but its failed with following error during Check Mode.could not load library \"$libdir/pg_reorg\":ERROR:  could not access file \"$libdir/pg_reorg\": No such file or directoryObservation : In this case , pg_reorg is not present on both Source and Target . But strange its failing.Method Used : pg_upgradeCould you please share some light here to get rid of  library issue .Thanks, in advance ,Raju", "msg_date": "Thu, 28 Feb 2019 23:52:47 +0530", "msg_from": "Mahendra Singh <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Thanks Mahendra for quick response.\n\nI have followed same way, only difference i didn't bringup Source ( 9.2),\nBut not sure how that will resolve libraries issue.\nAll i tried with --check mode only\n\nThanks,\n\n\nOn Thu, Feb 28, 2019 at 10:23 AM Mahendra Singh <mahi6run@gmail.com> wrote:\n\n> Hi\n> Please try with below commands.\n>\n> Let we want to upgrade v6 to v11.\n> Note: I installed my binary inside result folder.\n>\n> export OLDCLUSTER=./6_EDBAS/EDBAS/result\n> export NEWCLUSTER=./11_EDBAS/EDBAS/result\n> ./11_EDBAS/EDBAS/result/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin\n> --new-bindir=$NEWCLUSTER/bin --old-datadir=$OLDCLUSTER/bin/data\n> --new-datadir=$NEWCLUSTER/bin/data\n>\n> Note: old server should be in running state and new server should not be\n> in running state.\n>\n> Thanks and Regards\n> Mahendra\n>\n> On Thu, 28 Feb 2019 at 23:44, Perumal Raj <perucinci@gmail.com> wrote:\n>\n>> Dear SMEs\n>>\n>> I have finally decided to move forward after great hospitality in Version\n>> 9.2.24 :-)\n>>\n>> First i attempted to upgrade from 9.2.24 to 10.7, but its failed with\n>> following error during Check Mode.\n>>\n>> could not load library \"$libdir/hstore\": ERROR: could not access file\n>> \"$libdir/hstore\": No such file or directory\n>> could not 
load library \"$libdir/adminpack\": ERROR: could not access file\n>> \"$libdir/adminpack\": No such file or directory\n>> could not load library \"$libdir/uuid-ossp\": ERROR: could not access file\n>> \"$libdir/uuid-ossp\": No such file or directory\n>>\n>> Observation : the above Libraries are present in 9.2 whereas its mising\n>> in 10.7. So i decided to go with lower version.\n>>\n>> Second i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but\n>> its failed with following error during Check Mode.\n>>\n>> could not load library \"$libdir/pg_reorg\":\n>> ERROR: could not access file \"$libdir/pg_reorg\": No such file or\n>> directory\n>>\n>> Observation : In this case , pg_reorg is not present on both Source and\n>> Target . But strange its failing.\n>>\n>>\n>> Method Used : pg_upgrade\n>>\n>> Could you please share some light here to get rid of library issue .\n>>\n>> Thanks, in advance ,\n>> Raju\n>>\n>>\n\nThanks Mahendra for quick response.I have followed same way, only difference i didn't bringup Source ( 9.2), But not sure how that will resolve libraries issue.All i tried with --check mode only Thanks,On Thu, Feb 28, 2019 at 10:23 AM Mahendra Singh <mahi6run@gmail.com> wrote:HiPlease try with below commands.Let we want to upgrade v6 to v11.Note: I installed my binary inside result folder.export OLDCLUSTER=./6_EDBAS/EDBAS/resultexport NEWCLUSTER=./11_EDBAS/EDBAS/result./11_EDBAS/EDBAS/result/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin --new-bindir=$NEWCLUSTER/bin --old-datadir=$OLDCLUSTER/bin/data --new-datadir=$NEWCLUSTER/bin/dataNote: old server should be in running state and new server should not be in running state.Thanks and RegardsMahendraOn Thu, 28 Feb 2019 at 23:44, Perumal Raj <perucinci@gmail.com> wrote:Dear SMEsI have finally decided to move forward after great hospitality in Version 9.2.24 :-)First i attempted to upgrade from 9.2.24 to 10.7, but its failed with following error during Check Mode.could not load library \"$libdir/hstore\": 
ERROR:  could not access file \"$libdir/hstore\": No such file or directorycould not load library \"$libdir/adminpack\": ERROR:  could not access file \"$libdir/adminpack\": No such file or directorycould not load library \"$libdir/uuid-ossp\": ERROR:  could not access file \"$libdir/uuid-ossp\": No such file or directoryObservation : the above Libraries are present in 9.2 whereas its mising in 10.7. So i decided to go with lower version.Second  i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but its failed with following error during Check Mode.could not load library \"$libdir/pg_reorg\":ERROR:  could not access file \"$libdir/pg_reorg\": No such file or directoryObservation : In this case , pg_reorg is not present on both Source and Target . But strange its failing.Method Used : pg_upgradeCould you please share some light here to get rid of  library issue .Thanks, in advance ,Raju", "msg_date": "Thu, 28 Feb 2019 10:26:53 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hello\n\npgsql-hackers seems wrong list for such question.\n\n> could not load library \"$libdir/hstore\": ERROR:  could not access file \"$libdir/hstore\": No such file or directory\n> could not load library \"$libdir/adminpack\": ERROR:  could not access file \"$libdir/adminpack\": No such file or directory\n> could not load library \"$libdir/uuid-ossp\": ERROR:  could not access file \"$libdir/uuid-ossp\": No such file or directory\n>\n> Observation : the above Libraries are present in 9.2 whereas its mising in 10.7. So i decided to go with lower version.\n\nThis is contrib modules. 
They can be shipped in separate package, postgresql10-contrib.x86_64 for example (in centos repo)\n\n> Second  i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but its failed with following error during Check Mode.\n>\n> could not load library \"$libdir/pg_reorg\":\n> ERROR:  could not access file \"$libdir/pg_reorg\": No such file or directory\n>\n> Observation : In this case , pg_reorg is not present on both Source and Target . But strange its failing.\n\nThis is 3rd-party extension. Best way would be drop this extension on old cluster and perform upgrade. pg_reorg is abandoned for years, pg_repack is live fork if you need such tool.\n\nregards, Sergei\n\n", "msg_date": "Thu, 28 Feb 2019 21:27:28 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Thank you very much Sergei,\n\nYes, i want to get rid of old extension, Could you please share the query\nto find extension which is using pg_reorg.\n\nRegards,\n\n\n\n\nOn Thu, Feb 28, 2019 at 10:27 AM Sergei Kornilov <sk@zsrv.org> wrote:\n\n> Hello\n>\n> pgsql-hackers seems wrong list for such question.\n>\n> > could not load library \"$libdir/hstore\": ERROR: could not access file\n> \"$libdir/hstore\": No such file or directory\n> > could not load library \"$libdir/adminpack\": ERROR: could not access\n> file \"$libdir/adminpack\": No such file or directory\n> > could not load library \"$libdir/uuid-ossp\": ERROR: could not access\n> file \"$libdir/uuid-ossp\": No such file or directory\n> >\n> > Observation : the above Libraries are present in 9.2 whereas its mising\n> in 10.7. So i decided to go with lower version.\n>\n> This is contrib modules. 
They can be shipped in separate\n> package,\n> postgresql10-contrib.x86_64 for example (in centos repo)\n>\n> > Second i tried to attempt to upgrade from 9.2.24 to 9.6.12,9.4,9.3 but\n> its failed with following error during Check Mode.\n> >\n> > could not load library \"$libdir/pg_reorg\":\n> > ERROR: could not access file \"$libdir/pg_reorg\": No such file or\n> directory\n> >\n> > Observation : In this case , pg_reorg is not present on both Source and\n> Target . But strange its failing.\n>\n> This is 3rd-party extension. Best way would be drop this extension on old\n> cluster and perform upgrade. pg_reorg is abandoned for years, pg_repack is\n> live fork if you need such tool.\n>\n> regards, Sergei\n>\n\n", "msg_date": "Thu, 28 Feb 2019 10:29:25 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi\n\n> Yes, i want to get rid of old extension, Could you please share the query to find extension which is using pg_reorg.\n\npg_reorg is name for both tool and extension.\nCheck every database in cluster with, for example, psql command \"\\dx\" or read pg_dumpall -s output for some CREATE EXTENSION statements to find all installed extensions.\n\nregards, Sergei\n\n", "msg_date": "Thu, 28 Feb 2019 22:04:15 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "here is the data,\n\npostgres=# \\c template1\nYou are now connected to database \"template1\" as user \"postgres\".\ntemplate1=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n---------+---------+------------+------------------------------\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(1 row)\n\ntemplate1=# \\c postgres\nYou are now connected to database \"postgres\" as user \"postgres\".\npostgres=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n---------+---------+------------+------------------------------\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(1 row)\n\npostgres=# \\c nagdb\nYou are now connected to database \"nagdb\" as user \"postgres\".\nnagdb=# \\dx\n List of installed extensions\n Name | Version | Schema | Description\n---------+---------+------------+------------------------------\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(1 row)\n\nnagdb=# \\c 
archive_old\nYou are now connected to database \"books_old\" as user \"postgres\".\nbooks_old=# \\dx\n List of installed extensions\n Name | Version | Schema |\nDescription\n--------------------+---------+------------+-----------------------------------------------------------\n pg_stat_statements | 1.1 | public | track execution\nstatistics of all SQL statements executed\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n(2 rows)\n\narchive_old=# \\c production\nYou are now connected to database \"blurb_production\" as user \"postgres\".\nproduction=# \\dx\n List of installed extensions\n Name | Version | Schema |\nDescription\n--------------------+---------+------------+-----------------------------------------------------------\n hstore | 1.1 | public | data type for storing\nsets of (key, value) pairs\n pg_stat_statements | 1.1 | public | track execution\nstatistics of all SQL statements executed\n plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n uuid-ossp | 1.0 | public | generate universally\nunique identifiers (UUIDs)\n(4 rows)\n\n\nThanks,\n\n\n\nOn Thu, Feb 28, 2019 at 11:04 AM Sergei Kornilov <sk@zsrv.org> wrote:\n\n> Hi\n>\n> > Yes, i want to get rid of old extension, Could you please share the\n> query to find extension which is using pg_reorg.\n>\n> pg_reorg is name for both tool and extension.\n> Check every database in cluster with, for example, psql command \"\\dx\" or\n> read pg_dumpall -s output for some CREATE EXTENSION statements to find all\n> installed extensions.\n>\n> regards, Sergei\n>\n\n", "msg_date": "Thu, 28 Feb 2019 11:21:37 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi Sergei and Team\n\nCould you share your observation further.\n\nPerumal Raju\n\n\nOn Thu, Feb 28, 2019, 11:21 AM Perumal Raj <perucinci@gmail.com> wrote:\n\n> here is the data,\n>\n> postgres=# \\c template1\n> You are now connected to database \"template1\" as user \"postgres\".\n> template1=# \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> ---------+---------+------------+------------------------------\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (1 row)\n>\n> template1=# \\c postgres\n> You are now connected to database \"postgres\" as user \"postgres\".\n> postgres=# \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> ---------+---------+------------+------------------------------\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (1 row)\n>\n> postgres=# \\c nagdb\n> You are now connected to database \"nagdb\" as user \"postgres\".\n> nagdb=# \\dx\n> List of installed extensions\n> Name | Version | Schema | Description\n> ---------+---------+------------+------------------------------\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> (1 row)\n>\n> nagdb=# \\c archive_old\n>\n> List of installed extensions\n> Name | Version | Schema | Description\n> 
--------------------+---------+------------+-----------------------------------------------------------\n> hstore | 1.1 | public | data type for storing sets of (key, value) pairs\n> pg_stat_statements | 1.1 | public | track execution statistics of all SQL statements executed\n> plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n> uuid-ossp | 1.0 | public | generate universally unique identifiers (UUIDs)\n> (4 rows)\n>\n>\n> Thanks,\n>\n>\n>\n> On Thu, Feb 28, 2019 at 11:04 AM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n>> Hi\n>>\n>> > Yes, i want to get rid of old extension, Could you please share the\n>> query to find extension which is using pg_reorg.\n>>\n>> pg_reorg is name for both tool and extension.\n>> Check every database in cluster with, for example, psql command \"\\dx\" or\n>> read pg_dumpall -s output for some CREATE EXTENSION statements to find all\n>> installed extensions.\n>>\n>> regards, Sergei\n>>\n>\n\n", "msg_date": "Sat, 2 Mar 2019 06:23:25 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Moving to -general list (-hackers is for development topics like proposed\npatches and patch reviews and beta testing and crash reports).\n\nOn Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:\n> could not load library \"$libdir/pg_reorg\":\n> ERROR: could not access file \"$libdir/pg_reorg\": No such file or directory\n\nAs Sergei said, you can run pg_dump -s and look for references to 
reorg, and\ndrop them.\n\nOr, you could try this:\nCREATE EXTENSION pg_reorg FROM unpackaged;\n\nOr maybe this:\nCREATE EXTENSION pg_repack FROM unpackaged;\n\nIf that works, you can DROP EXTENSION pg_repack;\n\nOtherwise, I think you can maybe do something like:\nDROP SCHEMA pg_repack CASCADE; -- or,\nDROP SCHEMA pg_reorg CASCADE;\n\nPlease send output of: \\dn\n\n", "msg_date": "Sun, 3 Mar 2019 08:51:05 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Thanks.Will decently try that option and keep you posted.\n\nThanks again for redirecting to right group.\n\n\nPerumal Raju\n\nOn Sun, Mar 3, 2019, 6:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Moving to -general list (-hackers is for development topics like proposed\n> patches and patch reviews and beta testing and crash reports).\n>\n> On Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:\n> > could not load library \"$libdir/pg_reorg\":\n> > ERROR: could not access file \"$libdir/pg_reorg\": No such file or\n> directory\n>\n> As Sergei said, you can run pg_dump -s and look for references to reorg,\n> and\n> drop them.\n>\n> Or, you could try this:\n> CREATE EXTENSION pg_reorg FROM unpackaged;\n>\n> Or maybe this:\n> CREATE EXTENSION pg_repack FROM unpackaged;\n>\n> If that works, you can DROP EXTENSION pg_repack;\n>\n> Otherwise, I think you can maybe do something like:\n> DROP SCHEMA pg_repack CASCADE; -- or,\n> DROP SCHEMA pg_reorg CASCADE;\n>\n> Please send output of: \\dn\n>\n\n", "msg_date": "Sun, 3 Mar 2019 19:38:58 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi Justin\n\nI could see bunch of functions under reorg schema.\n\nAS '$libdir/pg_reorg', 'reorg_disable_autovacuum';\nAS '$libdir/pg_reorg', 'reorg_get_index_keys';\nAS '$libdir/pg_reorg', 'reorg_apply';\nAS '$libdir/pg_reorg', 'reorg_drop';\nAS '$libdir/pg_reorg', 'reorg_indexdef';\nAS '$libdir/pg_reorg', 'reorg_swap';\nAS '$libdir/pg_reorg', 'reorg_trigger';\nAS '$libdir/pg_reorg', 'reorg_version';\n\nI am not sure about the impact of these functions if i drop .\n\nAre these functions seeded ( default) one ?\n\nRegards,\nRaj\n\n\nOn Sun, Mar 3, 2019 at 7:38 PM Perumal Raj <perucinci@gmail.com> wrote:\n\n> Thanks.Will decently try that option and keep you posted.\n>\n> Thanks again for redirecting to right group.\n>\n>\n> Perumal Raju\n>\n> On Sun, Mar 3, 2019, 6:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> Moving to -general list (-hackers is for development topics like proposed\n>> patches and patch reviews and beta testing and crash reports).\n>>\n>> On Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:\n>> > could not load library \"$libdir/pg_reorg\":\n>> > ERROR: could not access file \"$libdir/pg_reorg\": No such file or\n>> directory\n>>\n>> As Sergei said, you can run pg_dump -s and look for references 
to reorg,\n>> and\n>> drop them.\n>>\n>> Or, you could try this:\n>> CREATE EXTENSION pg_reorg FROM unpackaged;\n>>\n>> Or maybe this:\n>> CREATE EXTENSION pg_repack FROM unpackaged;\n>>\n>> If that works, you can DROP EXTENSION pg_repack;\n>>\n>> Otherwise, I think you can maybe do something like:\n>> DROP SCHEMA pg_repack CASCADE; -- or,\n>> DROP SCHEMA pg_reorg CASCADE;\n>>\n>> Please send output of: \\dn\n>>\n>\n\n", "msg_date": "Mon, 4 Mar 2019 13:37:30 -0800", 
"msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "On Mon, Mar 04, 2019 at 01:37:30PM -0800, Perumal Raj wrote:\n> I could see bunch of functions under reorg schema.\n\nThose functions are the ones preventing you from upgrading.\nYou should drop schema pg_reorg cascade.\nYou can run it in a transaction first to see what it will drop.\nBut after the upgrade, you can CREATE EXTENSION pg_repack, which is a fork of\npg_reorg, which is itself no longer maintained.\n\nJustin\n\n", "msg_date": "Mon, 4 Mar 2019 15:45:01 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi Justin\n\nDoes it mean that these functions are default and came with 9.2 ?\nI am wondering how these functions are created in the DB as the\nlibrary($libdir/pg_reorg) is not exists in system\n\nNote:\nMy schema name is reorg not pg_reorg\n\n\n\n\nOn Mon, Mar 4, 2019 at 1:45 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Mar 04, 2019 at 01:37:30PM -0800, Perumal Raj wrote:\n> > I could see bunch of functions under reorg schema.\n>\n> Those functions are the ones preventing you from upgrading.\n> You should drop schema pg_reorg cascade.\n> You can run it in a transaction first to see what it will drop.\n> But after the upgrade, you can CREATE EXTENSION pg_repack, which is a fork\n> of\n> pg_reorg, which is itself no longer maintained.\n>\n> Justin\n>\n\n", "msg_date": "Mon, 4 Mar 2019 14:21:11 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "On Mon, Mar 04, 2019 at 02:21:11PM -0800, Perumal Raj wrote:\n> Does it mean that these functions are default and came with 9.2 ?\n> I am wondering how these functions are created in the DB as the\n> library($libdir/pg_reorg) is not exists in system\n\nI don't think it's default.\nBut was probably installed by running some SQL script.\n\nIt tentatively sounds safe to me to drop, but you should take a backup and\ninspect and double check your pg_dump output and output of \"begin; drop schema\npgreorg cascade\".\n\nJustin\n\n", "msg_date": "Mon, 4 Mar 2019 16:32:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Hi\n\nseems this is unpackaged extension, usually installed prior 9.1 release. Maybe reorg even does not support \"create extension\" syntax. That was long ago and project homepage is unavailable now. 
pg_repack documentation mention \"support for PostgreSQL 9.2 and EXTENSION packaging\" as improvements.\n\n> Are these functions seeded ( default) one ?\n\nNo its not default.\n\nregards, Sergei\n\n", "msg_date": "Tue, 05 Mar 2019 10:42:47 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Thanks Sergei/Justin for the continues update.\n\nSo reorg Schema might be created as part of some scripts prior to 9.2\nVersion ?\nThese are the functions in DB not the Extension. However these functions\nwill not run as the associated libraries are not exists in System now (9.2)\nand I hope no impact to system.\n\nAS '$libdir/pg_reorg', 'reorg_disable_autovacuum';\nAS '$libdir/pg_reorg', 'reorg_get_index_keys';\nAS '$libdir/pg_reorg', 'reorg_apply';\nAS '$libdir/pg_reorg', 'reorg_drop';\nAS '$libdir/pg_reorg', 'reorg_indexdef';\nAS '$libdir/pg_reorg', 'reorg_swap';\nAS '$libdir/pg_reorg', 'reorg_trigger';\nAS '$libdir/pg_reorg', 'reorg_version';\n\nWill continue 9.6 upgrade after dropping reorg schema.\n\nOne Question need your address,\n\nPrior to 9.2 to 9.6 upgrade , I have tried 9.2 10.7 upgrade and failed\nsimilar error(you can refer beginning o the post ).\n\n> could not load library \"$libdir/hstore\": ERROR: could not access file\n\"$libdir/hstore\": No such file or directory\n> could not load library \"$libdir/adminpack\": ERROR: could not access file\n\"$libdir/adminpack\": No such file or directory\n> could not load library \"$libdir/uuid-ossp\": ERROR: could not access file\n\"$libdir/uuid-ossp\": No such file or directory\n\nThese Extension seems to be standard. 
What is the use of these function and\ndo we have any alternative in Higher version or Enhanced object if i drop\nit in 9.2 and continue upgrade to 10.7 Version.\n\nThanks and Regards,\n\nOn Mon, Mar 4, 2019 at 11:42 PM Sergei Kornilov <sk@zsrv.org> wrote:\n\n> Hi\n>\n> seems this is unpackaged extension, usually installed prior 9.1 release.\n> Maybe reorg even does not support \"create extension\" syntax. That was long\n> ago and project homepage is unavailable now. pg_repack documentation\n> mention \"support for PostgreSQL 9.2 and EXTENSION packaging\" as\n> improvements.\n>\n> > Are these functions seeded ( default) one ?\n>\n> No its not default.\n>\n> regards, Sergei\n>\n\n", "msg_date": "Tue, 5 Mar 2019 08:09:12 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "On Tue, Mar 05, 2019 at 08:09:12AM -0800, Perumal Raj wrote:\n> Thanks Sergei/Justin for the continues update.\n> \n> So reorg Schema might be created as part of some scripts prior to 9.2\n> Version ?\n\nI'm guessing they were probably created in 9.2.\n\n> These are the functions in DB not the Extension. 
However these functions\n> will not run as the associated libraries are not exists in System now (9.2)\n> and I hope no impact to system.\n\nI guess someone installed pgreorg, ran its scripts to install its functions\ninto the DB, and then removed pgreorg without removing its scripts.\n\n> One Question need your address,\n> \n> Prior to 9.2 to 9.6 upgrade , I have tried 9.2 10.7 upgrade and failed\n> similar error(you can refer beginning o the post ).\n> \n> > could not load library \"$libdir/hstore\": ERROR: could not access file \"$libdir/hstore\": No such file or directory\n> > could not load library \"$libdir/adminpack\": ERROR: could not access file \"$libdir/adminpack\": No such file or directory\n> > could not load library \"$libdir/uuid-ossp\": ERROR: could not access file \"$libdir/uuid-ossp\": No such file or directory\n> \n> These Extension seems to be standard. What is the use of these function and\n> do we have any alternative in Higher version or Enhanced object if i drop\n> it in 9.2 and continue upgrade to 10.7 Version.\n\nSee Sergei's response:\nhttps://www.postgresql.org/message-id/7164691551378448%40myt3-1179f584969c.qloud-c.yandex.net\n\nYou probably want to install this package for the new version (9.6 or 10 or\n11).\n\n[pryzbyj@TS-DB ~]$ rpm -ql postgresql11-contrib |grep -E '(uuid-ossp|adminpack|hstore)\\.control'\n/usr/pgsql-11/share/extension/adminpack.control\n/usr/pgsql-11/share/extension/hstore.control\n/usr/pgsql-11/share/extension/uuid-ossp.control\n\nJustin\n\n", "msg_date": "Tue, 5 Mar 2019 10:21:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Awesome, thanks Sergei and Justin,\n\nFinally, I am able to upgrade the DB from 9.2 to 9.6 successfully after\ndropping Schema (reorg) without library issue.\nAlso , I have installed -Contrib. 
package for Version:10 and upgraded to\nversion 10.7 too.\n\nOn both the cases , I have used --link option and it took just fraction of\nseconds ( I feel 'Zero' Downtime effect )\n\nAny pointers for pg_repack schema creation ?\nWill there be any impact in the future , Since i used --link option ?\n\nRegards,\nRaju\n\n\n\n\n\n\n\n\nOn Tue, Mar 5, 2019 at 8:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Mar 05, 2019 at 08:09:12AM -0800, Perumal Raj wrote:\n> > Thanks Sergei/Justin for the continues update.\n> >\n> > So reorg Schema might be created as part of some scripts prior to 9.2\n> > Version ?\n>\n> I'm guessing they were probably created in 9.2.\n>\n> > These are the functions in DB not the Extension. However these functions\n> > will not run as the associated libraries are not exists in System now\n> (9.2)\n> > and I hope no impact to system.\n>\n> I guess someone installed pgreorg, ran its scripts to install its functions\n> into the DB, and then removed pgreorg without removing its scripts.\n>\n> > One Question need your address,\n> >\n> > Prior to 9.2 to 9.6 upgrade , I have tried 9.2 10.7 upgrade and failed\n> > similar error(you can refer beginning o the post ).\n> >\n> > > could not load library \"$libdir/hstore\": ERROR: could not access file\n> \"$libdir/hstore\": No such file or directory\n> > > could not load library \"$libdir/adminpack\": ERROR: could not access\n> file \"$libdir/adminpack\": No such file or directory\n> > > could not load library \"$libdir/uuid-ossp\": ERROR: could not access\n> file \"$libdir/uuid-ossp\": No such file or directory\n> >\n> > These Extension seems to be standard. 
What is the use of these function\n> and\n> > do we have any alternative in Higher version or Enhanced object if i drop\n> > it in 9.2 and continue upgrade to 10.7 Version.\n>\n> See Sergei's response:\n>\n> https://www.postgresql.org/message-id/7164691551378448%40myt3-1179f584969c.qloud-c.yandex.net\n>\n> You probably want to install this package for the new version (9.6 or 10 or\n> 11).\n>\n> [pryzbyj@TS-DB ~]$ rpm -ql postgresql11-contrib |grep -E\n> '(uuid-ossp|adminpack|hstore)\\.control'\n> /usr/pgsql-11/share/extension/adminpack.control\n> /usr/pgsql-11/share/extension/hstore.control\n> /usr/pgsql-11/share/extension/uuid-ossp.control\n>\n> Justin\n>\n\n", "msg_date": "Wed, 6 Mar 2019 21:44:16 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "On Wed, Mar 06, 2019 at 09:44:16PM -0800, Perumal Raj wrote:\n> Any pointers for pg_repack schema creation ?\n\nWith recent postgres, you should use just: \"CREATE EXTENSION pg_repack\", which\ndoes all that for you.\n\n> Will there be any impact in the future , Since i used 
--link option ?\n\nYou probably have an old DB directory laying around which is (at least\npartially) hardlinks. You should remove it .. but be careful to remove the\ncorrect dir. My scripts always rename the old dir before running pg_upgrade,\nso it's less scary to rm -fr it later.\n\nJustin\n\n", "msg_date": "Thu, 7 Mar 2019 04:32:39 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Question about pg_upgrade from 9.2 to X.X" }, { "msg_contents": "Thanks again.\n\nPerumal Raju\n\nOn Thu, Mar 7, 2019, 2:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Mar 06, 2019 at 09:44:16PM -0800, Perumal Raj wrote:\n> > Any pointers for pg_repack schema creation ?\n>\n> With recent postgres, you should use just: \"CREATE EXTENSION pg_repack\",\n> which\n> does all that for you.\n>\n> > Will there be any impact in the future , Since i used --link option ?\n>\n> You probably have an old DB directory laying around which is (at least\n> partially) hardlinks. You should remove it .. but be careful to remove the\n> correct dir. My scripts always rename the old dir before running\n> pg_upgrade,\n> so it's less scary to rm -fr it later.\n>\n> Justin\n>\n", "msg_date": "Thu, 7 Mar 2019 05:39:49 -0800", "msg_from": "Perumal Raj <perucinci@gmail.com>", "msg_from_op": true, "msg_subject": "Resolved: Question about pg_upgrade from 9.2 to X.X" } ]
[ { "msg_contents": "*We use logical replication from a PG version 10.6 to a 11.2. Both are Ubuntu\n16.04.We have a hundred schemas with more or less a hundred tables, so\nnumber of tables is about 10.000. All replication is ok but when we try to\ndo a REFRESH SUBSCRIPTION because we added a new schema, it takes hours and\ndoesn´t finish. Then, if I go to our master server and do a select * from\npg_publication_tables it doesn´t respond too. Then, analysing the source of\nview pg_publication_tables ...*\ncreate view pg_publication_tables as SELECT p.pubname, n.nspname AS\nschemaname, c.relname AS tablename FROM pg_publication p, (pg_class c JOIN\npg_namespace n ON ((n.oid = c.relnamespace))) WHERE (c.oid IN (SELECT\npg_get_publication_tables.relid FROM pg_get_publication_tables((p.pubname)\n:: text) pg_get_publication_tables (relid)));\nIf we run both statements of that view separately \nSELECT string_agg(pg_get_publication_tables.relid::text,',') FROM\npg_get_publication_tables(('MyPublication')::text) pg_get_publication_tables\n(relid);\n*put all those oids retrieved on that IN of the view*\nselect * from pg_Class c JOIN pg_namespace n ON n.oid = c.relnamespace WHERE\nc.oid IN (\n*OIDs List*\n);\n*Then it responds immediatelly*\nSo, the question is .. can we change this view to select faster ? Just\nrewriting that view to a better select will solve ?Is this view used by\nREFRESH SUBSCRIPTION ? We think yes because if we run refresh subscription\nor select from view it doesn´t respond, so ...\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html\n", "msg_date": "Thu, 28 Feb 2019 13:23:46 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "I tried sometime ago ... but with no responses, I ask you again.\npg_publication_tables is a view that is used to refresh publication, but as\nwe have 15.000 tables, it takes hours and doesn´t complete. If I change that\nview I can have an immediate result. The question is: Can I change that view\n? 
There is some trouble changing those system views ?\n\nOriginal View is ...\ncreate view pg_catalog.pg_publication_tables as\nSELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename FROM\npg_publication p,\n(pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace))) \nWHERE (c.oid IN (SELECT pg_get_publication_tables.relid FROM\npg_get_publication_tables((p.pubname)::text)\npg_get_publication_tables(relid)));\nThis way it takes 45 minutes to respond.\n\nI changed it to ... \ncreate or replace pg_catalog.view pg_publication_tables as SELECT p.pubname,\nn.nspname AS schemaname, c.relname AS tablename from pg_publication p inner\njoin pg_get_publication_tables(p.pubname) pt on true inner join pg_class c\non pt.relid = c.oid inner join pg_namespace n ON (n.oid = c.relnamespace);\nThis one takes just one or two seconds.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html\n\n\n", "msg_date": "Mon, 20 May 2019 13:18:00 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "Em seg, 20 de mai de 2019 às 17:18, PegoraroF10 <marcos@f10.com.br>\nescreveu:\n>\n> I tried sometime ago ... but with no responses, I ask you again.\n> pg_publication_tables is a view that is used to refresh publication, but\nas\n> we have 15.000 tables, it takes hours and doesn´t complete. If I change\nthat\n> view I can have an immediate result. The question is: Can I change that\nview\n> ? There is some trouble changing those system views ?\n>\n\nYou really need a publication with a lot of relations??? 
If you can split\nit in several publications your life should be easy.\n\n>\n> Original View is ...\n> create view pg_catalog.pg_publication_tables as\n> SELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename FROM\n> pg_publication p,\n> (pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace)))\n> WHERE (c.oid IN (SELECT pg_get_publication_tables.relid FROM\n> pg_get_publication_tables((p.pubname)::text)\n> pg_get_publication_tables(relid)));\n> This way it takes 45 minutes to respond.\n>\n\nI really don't know why we did it... because pg_get_publication_tables\ndoesn't have any special behavior different than get relations assigned to\npublications.\n\n\n>\n> I changed it to ...\n> create or replace pg_catalog.view pg_publication_tables as SELECT\np.pubname,\n> n.nspname AS schemaname, c.relname AS tablename from pg_publication p\ninner\n> join pg_get_publication_tables(p.pubname) pt on true inner join pg_class c\n> on pt.relid = c.oid inner join pg_namespace n ON (n.oid = c.relnamespace);\n> This one takes just one or two seconds.\n>\n\nEven better, you can go direct by system catalogs:\n\n SELECT p.pubname,\n n.nspname AS schemaname,\n c.relname AS tablename\n FROM pg_publication p\n JOIN pg_publication_rel pr ON pr.prpubid = p.oid\n JOIN pg_class c ON c.oid = pr.prrelid\n JOIN pg_namespace n ON n.oid = c.relnamespace;\n\nTo change it, before you'll need to set \"allow_system_table_mods=on\" and\nrestart PostgreSQL.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Mon, 20 May 2019 18:25:49 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t_fin?=\n\t=?UTF-8?Q?ish?=" }, { "msg_contents": "PegoraroF10 <marcos@f10.com.br> writes:\n> I tried 
sometime ago ... but with no responses, I ask you again.\n> pg_publication_tables is a view that is used to refresh publication, but as\n> we have 15.000 tables, it takes hours and doesn´t complete. If I change that\n> view I can have an immediate result. The question is: Can I change that view\n> ? There is some trouble changing those system views ?\n\n> Original View is ...\n> create view pg_catalog.pg_publication_tables as\n> SELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename FROM\n> pg_publication p,\n> (pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace))) \n> WHERE (c.oid IN (SELECT pg_get_publication_tables.relid FROM\n> pg_get_publication_tables((p.pubname)::text)\n> pg_get_publication_tables(relid)));\n> This way it takes 45 minutes to respond.\n\n> I changed it to ... \n> create or replace pg_catalog.view pg_publication_tables as SELECT p.pubname,\n> n.nspname AS schemaname, c.relname AS tablename from pg_publication p inner\n> join pg_get_publication_tables(p.pubname) pt on true inner join pg_class c\n> on pt.relid = c.oid inner join pg_namespace n ON (n.oid = c.relnamespace);\n> This one takes just one or two seconds.\n\nHmm ... given that pg_get_publication_tables() shouldn't return any\nduplicate OIDs, it does seem unnecessarily inefficient to put it in\nan IN-subselect condition. Peter, is there a reason why this isn't\na straight lateral join? 
I get a much saner-looking plan from\n\n FROM pg_publication P, pg_class C\n- JOIN pg_namespace N ON (N.oid = C.relnamespace)\n- WHERE C.oid IN (SELECT relid FROM pg_get_publication_tables(P.pubname));\n+ JOIN pg_namespace N ON (N.oid = C.relnamespace),\n+ LATERAL pg_get_publication_tables(P.pubname)\n+ WHERE C.oid = pg_get_publication_tables.relid;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 May 2019 17:30:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re:_Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "Em seg, 20 de mai de 2019 às 18:30, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> Hmm ... given that pg_get_publication_tables() shouldn't return any\n> duplicate OIDs, it does seem unnecessarily inefficient to put it in\n> an IN-subselect condition. Peter, is there a reason why this isn't\n> a straight lateral join? I get a much saner-looking plan from\n>\n> FROM pg_publication P, pg_class C\n> - JOIN pg_namespace N ON (N.oid = C.relnamespace)\n> - WHERE C.oid IN (SELECT relid FROM\npg_get_publication_tables(P.pubname));\n> + JOIN pg_namespace N ON (N.oid = C.relnamespace),\n> + LATERAL pg_get_publication_tables(P.pubname)\n> + WHERE C.oid = pg_get_publication_tables.relid;\n>\n\nAnd why not just JOIN direct with pg_publication_rel ?\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Mon, 20 May 2019 18:37:16 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t?=\n\t=?UTF-8?Q?_finish?=" }, { "msg_contents": "I cannot because we created a replication for ALL TABLES\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html\n\n\n", "msg_date": "Tue, 21 May 2019 10:05:11 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_Re:_Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "Restart Postgres means exactly what ? 
We tried just restart the service but\nwe tried to refresh publication the old view was used because it took 2hours\nand gave us a timeout.\n\nI found some people talking that I need to initdb, but initdb means recreate\nentirely my database or just reinstall my postgres server ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html\n\n\n", "msg_date": "Tue, 21 May 2019 10:16:47 -0700 (MST)", "msg_from": "PegoraroF10 <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "Em ter, 21 de mai de 2019 às 14:17, PegoraroF10 <marcos@f10.com.br>\nescreveu:\n>\n> Restart Postgres means exactly what ? We tried just restart the service\nbut\n> we tried to refresh publication the old view was used because it took\n2hours\n> and gave us a timeout.\n>\n\nAs I said before to change system catalog you should set\n\"allow_system_table_mods=on\" and restart PostgreSQL service.\n\nAfter that you'll able to recreate the \"pg_catalog.pg_publication_tables\"\nsystem view. (You can use the Tom's suggestion using LATERAL)\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Tue, 21 May 2019 14:27:01 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t_fin?=\n\t=?UTF-8?Q?ish?=" }, { "msg_contents": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br> writes:\n> As I said before to change system catalog you should set\n> \"allow_system_table_mods=on\" and restart PostgreSQL service.\n> After that you'll able to recreate the \"pg_catalog.pg_publication_tables\"\n> system view. (You can use the Tom's suggestion using LATERAL)\n\nIt's a view, not a table, so I don't think you need\nallow_system_table_mods. A quick test here says that being\nsuperuser is enough to do a CREATE OR REPLACE VIEW on it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 May 2019 13:41:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t_fin?=\n =?UTF-8?Q?ish?=" }, { "msg_contents": "Em ter, 21 de mai de 2019 às 14:41, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> =?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br> writes:\n> > As I said before to change system catalog you should set\n> > \"allow_system_table_mods=on\" and restart PostgreSQL service.\n> > After that you'll able to recreate the\n\"pg_catalog.pg_publication_tables\"\n> > system view. (You can use the Tom's suggestion using LATERAL)\n>\n> It's a view, not a table, so I don't think you need\n> allow_system_table_mods. 
A quick test here says that being\n> superuser is enough to do a CREATE OR REPLACE VIEW on it.\n>\n\nInteresting, I tried the following commands and got error:\n\npostgres=# SELECT version();\n version\n\n----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 11.3 (Debian 11.3-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n(1 row)\n\npostgres=# SELECT session_user;\n session_user\n--------------\n postgres\n(1 row)\n\npostgres=# SHOW allow_system_table_mods ;\n allow_system_table_mods\n-------------------------\n off\n(1 row)\n\npostgres=# CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\npostgres-# SELECT\npostgres-# P.pubname AS pubname,\npostgres-# N.nspname AS schemaname,\npostgres-# C.relname AS tablename\npostgres-# FROM pg_publication P, pg_class C\npostgres-# JOIN pg_namespace N ON (N.oid = C.relnamespace),\npostgres-# LATERAL pg_get_publication_tables(P.pubname)\npostgres-# WHERE C.oid = pg_get_publication_tables.relid;\nERROR: permission denied: \"pg_publication_tables\" is a system catalog\n\nBut changing \"allow_system_table_mods=on\" works as expected:\n\npostgres=# SHOW allow_system_table_mods ;\n allow_system_table_mods\n-------------------------\n on\n(1 row)\n\npostgres=# CREATE OR REPLACE VIEW pg_catalog.pg_publication_tables AS\nSELECT\n P.pubname AS pubname,\n N.nspname AS schemaname,\n C.relname AS tablename\nFROM pg_publication P, pg_class C\n JOIN pg_namespace N ON (N.oid = C.relnamespace),\n LATERAL pg_get_publication_tables(P.pubname)\nWHERE C.oid = pg_get_publication_tables.relid;\nCREATE VIEW\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Tue, 21 May 2019 14:57:25 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t?=\n\t=?UTF-8?Q?_finish?=" }, { "msg_contents": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabrizio@timbira.com.br> writes:\n> Em ter, 21 de mai de 2019 às 14:41, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> It's a view, not a table, so I don't think you need\n>> allow_system_table_mods. A quick test here says that being\n>> superuser is enough to do a CREATE OR REPLACE VIEW on it.\n\n> Interesting, I tried the following commands and got error:\n\nOh, huh, this is something that changed recently in HEAD ---\nsince commit 2d7d946cd, stuff created by system_views.sql\nis not protected as though it were a system catalog.\n\nSo in released versions, yes you need allow_system_table_mods=on.\nSorry for the misinformation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 May 2019 14:27:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re=3A_Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t?=\n =?UTF-8?Q?_finish?=" }, { "msg_contents": "[ redirecting to pgsql-hackers as the more relevant list ]\n\nI wrote:\n> PegoraroF10 <marcos@f10.com.br> writes:\n>> I tried sometime ago ... but with no responses, I ask you again.\n>> pg_publication_tables is a view that is used to refresh publication, but as\n>> we have 15.000 tables, it takes hours and doesn't complete. If I change that\n>> view I can have an immediate result. The question is: Can I change that view\n>> ? 
There is some trouble changing those system views ?\n\n> Hmm ... given that pg_get_publication_tables() shouldn't return any\n> duplicate OIDs, it does seem unnecessarily inefficient to put it in\n> an IN-subselect condition. Peter, is there a reason why this isn't\n> a straight lateral join? I get a much saner-looking plan from\n\n> FROM pg_publication P, pg_class C\n> - JOIN pg_namespace N ON (N.oid = C.relnamespace)\n> - WHERE C.oid IN (SELECT relid FROM pg_get_publication_tables(P.pubname));\n> + JOIN pg_namespace N ON (N.oid = C.relnamespace),\n> + LATERAL pg_get_publication_tables(P.pubname)\n> + WHERE C.oid = pg_get_publication_tables.relid;\n\nFor the record, the attached seems like what to do here. It's easy\nto show that there's a big performance gain even for normal numbers\nof tables, eg if you do\n\n\tCREATE PUBLICATION mypub FOR ALL TABLES;\n\tSELECT * FROM pg_publication_tables;\n\nin the regression database, the time for the select drops from ~360ms\nto ~6ms on my machine. The existing view's performance will drop as\nO(N^2) the more publishable tables you have ...\n\nGiven that this change impacts the regression test results, project\nrules say that it should come with a catversion bump. Since we are\ncertainly going to have a catversion bump before beta2 because of\nthe pg_statistic_ext permissions business, that doesn't seem like\na reason not to push it into v12 --- any objections?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 21 May 2019 15:42:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?Re:_Refresh_Publication_takes_hours_and_doesn=C2=B4t_finish?=" }, { "msg_contents": "On Tue, May 21, 2019 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> [ redirecting to pgsql-hackers as the more relevant list ]\n>\n> I wrote:\n> > PegoraroF10 <marcos@f10.com.br> writes:\n> >> I tried sometime ago ... 
but with no responses, I ask you again.\n> >> pg_publication_tables is a view that is used to refresh publication,\nbut as\n> >> we have 15.000 tables, it takes hours and doesn't complete. If I\nchange that\n> >> view I can have an immediate result. The question is: Can I change\nthat view\n> >> ? There is some trouble changing those system views ?\n>\n> > Hmm ... given that pg_get_publication_tables() shouldn't return any\n> > duplicate OIDs, it does seem unnecessarily inefficient to put it in\n> > an IN-subselect condition. Peter, is there a reason why this isn't\n> > a straight lateral join? I get a much saner-looking plan from\n>\n> > FROM pg_publication P, pg_class C\n> > - JOIN pg_namespace N ON (N.oid = C.relnamespace)\n> > - WHERE C.oid IN (SELECT relid FROM\npg_get_publication_tables(P.pubname));\n> > + JOIN pg_namespace N ON (N.oid = C.relnamespace),\n> > + LATERAL pg_get_publication_tables(P.pubname)\n> > + WHERE C.oid = pg_get_publication_tables.relid;\n>\n> For the record, the attached seems like what to do here. It's easy\n> to show that there's a big performance gain even for normal numbers\n> of tables, eg if you do\n>\n> CREATE PUBLICATION mypub FOR ALL TABLES;\n> SELECT * FROM pg_publication_tables;\n>\n> in the regression database, the time for the select drops from ~360ms\n> to ~6ms on my machine. The existing view's performance will drop as\n> O(N^2) the more publishable tables you have ...\n>\n> Given that this change impacts the regression test results, project\n> rules say that it should come with a catversion bump. 
Since we are\n> certainly going to have a catversion bump before beta2 because of\n> the pg_statistic_ext permissions business, that doesn't seem like\n> a reason not to push it into v12 --- any objections?\n>\n\nI completely agree to push it into v12.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n", "msg_date": "Tue, 21 May 2019 16:47:28 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Re=3A_Refresh_Publication_takes_hours_and_doesn=C2=B4t?=\n\t=?UTF-8?Q?_finish?=" }, { "msg_contents": "On 2019-05-20 23:30, Tom Lane wrote:\n> Hmm ... given that pg_get_publication_tables() shouldn't return any\n> duplicate OIDs, it does seem unnecessarily inefficient to put it in\n> an IN-subselect condition. Peter, is there a reason why this isn't\n> a straight lateral join? 
I get a much saner-looking plan from\n> \n> FROM pg_publication P, pg_class C\n> - JOIN pg_namespace N ON (N.oid = C.relnamespace)\n> - WHERE C.oid IN (SELECT relid FROM pg_get_publication_tables(P.pubname));\n> + JOIN pg_namespace N ON (N.oid = C.relnamespace),\n> + LATERAL pg_get_publication_tables(P.pubname)\n> + WHERE C.oid = pg_get_publication_tables.relid;\n\nNo reason I think, just didn't quite manage to recognize the possibility\nof using LATERAL at the time.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 May 2019 15:08:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re:_Refresh_Publication_takes_hours_and_doesn=c2=b4t_fini?=\n =?UTF-8?Q?sh?=" } ]
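The quadratic behavior Tom describes above — re-deriving the published-relation set for every candidate row instead of joining once against a single evaluation of the set-returning function — can be sketched outside SQL. A rough Python analogy (the table count, `get_publication_relids()`, and the call counter are invented for illustration; this is not the planner's actual execution model):

```python
# Rough analogy for the pg_publication_tables rewrite: an IN-subselect
# that is re-evaluated for every outer row vs. a join against a single
# evaluation of the set-returning function.  The call counter shows
# where the O(N^2)-ish cost comes from.

calls = 0

def get_publication_relids(pubname):
    """Invented stand-in for pg_get_publication_tables()."""
    global calls
    calls += 1
    return list(range(0, 1000, 2))   # pretend the even oids are published

tables = list(range(1000))

# IN-subselect style: the published set is re-derived per candidate row.
calls = 0
slow = [t for t in tables if t in get_publication_relids("mypub")]
slow_calls = calls

# LATERAL-join style: evaluate once, then probe a hash set per row.
calls = 0
published = set(get_publication_relids("mypub"))
fast = [t for t in tables if t in published]
fast_calls = calls

assert slow == fast            # same answer either way
print(slow_calls, fast_calls)  # 1000 evaluations vs. 1
```

The LATERAL rewrite wins for the same reason the second variant does: the function's result set is produced once and probed per row rather than recomputed for each candidate row.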
[ { "msg_contents": "Unfortunately, contrib/jsonb_plpython still contains a lot of problems in error\nhandling that can lead to memory leaks:\n - not all Python function calls are checked for success\n - not in all places PG exceptions are caught to release Python references\nBut it seems that these errors can happen only in the OOM case.\n\nAttached patch with the fix. Back-patch for PG11 is needed.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 1 Mar 2019 05:24:39 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fix memleaks and error handling in jsonb_plpython" },
{ "msg_contents": "On Fri, Mar 01, 2019 at 05:24:39AM +0300, Nikita Glukhov wrote:\n> Unfortunately, contrib/jsonb_plpython still contains a lot of problems in error\n> handling that can lead to memory leaks:\n> - not all Python function calls are checked for success\n> - not in all places PG exceptions are caught to release Python references\n> But it seems that these errors can happen only in the OOM case.\n> \n> Attached patch with the fix. Back-patch for PG11 is needed.\n\nThat looks right to me. Here are some comments.\n\nOne thing to be really careful of when using PG_TRY/PG_CATCH blocks is\nthat variables modified in the try block and then referenced in the\ncatch block need to be marked as volatile. If you don't do that, the\nvalue when reaching the catch part is indeterminate.\n\nWith your patch the result variable used in two places of\nPLyObject_FromJsonbContainer() is not marked as volatile. 
Similarly,\nit seems that \"items\" in PLyMapping_ToJsonbValue() and \"seq\" in\n\"PLySequence_ToJsonbValue\" should be volatile because they get changed\nin the try loop, and referenced afterwards.\n\nAnother issue: in ltree_plpython we don't check the return state of\nPyList_SetItem(), which we should complain about I think.\n--\nMichael", "msg_date": "Tue, 5 Mar 2019 12:45:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" },
{ "msg_contents": "On 05.03.2019 6:45, Michael Paquier wrote:\n\n> On Fri, Mar 01, 2019 at 05:24:39AM +0300, Nikita Glukhov wrote:\n>> Unfortunately, contrib/jsonb_plpython still contains a lot of problems in error\n>> handling that can lead to memory leaks:\n>> - not all Python function calls are checked for success\n>> - not in all places PG exceptions are caught to release Python references\n>> But it seems that these errors can happen only in the OOM case.\n>>\n>> Attached patch with the fix. Back-patch for PG11 is needed.\n> That looks right to me. Here are some comments.\n>\n> One thing to be really careful of when using PG_TRY/PG_CATCH blocks is\n> that variables modified in the try block and then referenced in the\n> catch block need to be marked as volatile. If you don't do that, the\n> value when reaching the catch part is indeterminate.\n>\n> With your patch the result variable used in two places of\n> PLyObject_FromJsonbContainer() is not marked as volatile. Similarly,\n> it seems that \"items\" in PLyMapping_ToJsonbValue() and \"seq\" in\n> \"PLySequence_ToJsonbValue\" should be volatile because they get changed\n> in the try loop, and referenced afterwards.\n\nI know about these volatility issues, but maybe I incorrectly understand what\nshould be marked as volatile for pointer variables: the pointer itself and/or\nthe memory referenced by it. 
I thought that only the pointer needs to be marked,\nand also there is message [1] clearly describing what needs to be marked.\n\n\nPreviously in PLyMapping_ToJsonbValue() the whole contents of PyObject was\nmarked as volatile, not the pointer itself which is not modified in PG_TRY:\n\n-\t/* We need it volatile, since we use it after longjmp */\n-\tvolatile PyObject *items_v = NULL;\n\nSo, I removed volatile qualifier here.\n\nVariable 'result' is also not modified in PG_TRY, so it is also non-volatile.\n\n\nI marked only 'key' variable in PLyObject_FromJsonbContainer() as volatile,\nbecause it is really modified in the loop inside PG_TRY(), and\nPLyObject_FromJsonbValue(&v2) call after its assignment can throw PG\nexception:\n+ PyObject *volatile key = NULL;\n\n\nAlso I have an idea to introduce a global list of Python objects that need to be\ndereferenced in PG_CATCH inside PLy_exec_function() in the case of exception.\nThen typical code will look like that:\n\n PyObject *list = PLy_RegisterObject(PyList_New());\n\n if (!list)\n return NULL;\n\n ... 
code that can throw PG exception, PG_TRY/PG_CATCH is not needed ...\n\n return PLy_UnregisterObject(list); /* returns list */\n\n> Another issue: in ltree_plpython we don't check the return state of\n> PyList_SetItem(), which we should complain about I think.\n\nYes, PyList_SetItem() and PyString_FromStringAndSize() should be checked,\nbut CPython's PyList_SetItem() really should not fail because list storage\nis preallocated:\n\nint\nPyList_SetItem(PyObject *op, Py_ssize_t i, PyObject *newitem)\n{\n PyObject **p;\n if (!PyList_Check(op)) {\n Py_XDECREF(newitem);\n PyErr_BadInternalCall();\n return -1;\n }\n if (!valid_index(i, Py_SIZE(op))) {\n Py_XDECREF(newitem);\n PyErr_SetString(PyExc_IndexError,\n \"list assignment index out of range\");\n return -1;\n }\n p = ((PyListObject *)op) -> ob_item + i;\n Py_XSETREF(*p, newitem);\n return 0;\n}\n\n\n[1] https://www.postgresql.org/message-id/31436.1483415248%40sss.pgh.pa.us\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 5 Mar 2019 14:10:01 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" },
{ "msg_contents": "On Tue, Mar 05, 2019 at 02:10:01PM +0300, Nikita Glukhov wrote:\n> I known about this volatility issues, but maybe I incorrectly understand what\n> should be marked as volatile for pointer variables: the pointer itself and/or\n> the memory referenced by it. 
I thought that only pointer needs to be marked,\n> and also there is message [1] clearly describing what needs to be marked.\n\nYeah, sorry for bringing some confusion.\n\n> Previously in PLyMapping_ToJsonbValue() the whole contents of PyObject was\n> marked as volatile, not the pointer itself which is not modified in PG_TRY:\n> \n> -\t/* We need it volatile, since we use it after longjmp */\n> -\tvolatile PyObject *items_v = NULL;\n> \n> So, I removed volatile qualifier here.\n\nOkay, this one looks correct to me. Well the whole variable has been\nremoved.\n\n> Variable 'result' is also not modified in PG_TRY, it is also non-volatile.\n\nFine here as well.\n\n> I marked only 'key' variable in PLyObject_FromJsonbContainer() as volatile,\n> because it is really modified in the loop inside PG_TRY(), and\n> PLyObject_FromJsonbValue(&v2) call after its assignment can throw PG\n> exception:\n> + PyObject *volatile key = NULL;\n\nOne thing that you are missing here is that key can become NULL when\nreaching the catch block, so Py_XDECREF() should be called on it only\nwhen the value is not NULL. And actually, looking closer, you don't\nneed to have that volatile variable at all, no? Why not just\ndeclare it as a PyObject in the while loop?\n\nAlso here, key and val can be NULL, so we had better only call\nPy_XDECREF() when they are not. 
On top of that, potential errors on\nPyDict_SetItem() should not simply be ignored, so the loop should only break\nwhen the key or the value is NULL, but not when PyDict_SetItem() has a\nproblem.\n\n> Also I have idea to introduce a global list of Python objects that need to be\n> dereferenced in PG_CATCH inside PLy_exec_function() in the case of exception.\n> Then typical code will be look like that:\n\nPerhaps we could do that, but let's not juggle with the code more than\nnecessary for a bug fix.\n\n> Yes, PyList_SetItem() and PyString_FromStringAndSize() should be checked,\n> but CPython's PyList_SetItem() really should not fail because list storage\n> is preallocated:\n\nHm. We could add an elog() here for safety I think. That's not a big\ndeal either.\n\nAnother thing is that you cannot just return within a try block with\nwhat is added in PLyObject_FromJsonbContainer, or the error stack is\nnot reset properly. So they should be replaced by breaks.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 11:04:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" },
{ "msg_contents": "On Wed, Mar 06, 2019 at 11:04:23AM +0900, Michael Paquier wrote:\n> Another thing is that you cannot just return within a try block with\n> what is added in PLyObject_FromJsonbContainer, or the error stack is\n> not reset properly. So they should be replaced by breaks.\n\nSo, I have been poking at this stuff, and I am finishing with the\nattached. The origin of the issue comes from PLyObject_ToJsonbValue()\nand PLyObject_FromJsonbValue() which could result in problems when\nworking on PyObject which it may allocate. So this has resulted in\nmore refactoring of the code than I expected first. I also decided to\nnot keep the additional errors which have been added in the previous\nversion of the patch. 
From my understanding of the code, these cannot\nactually happen, so replacing them by assertions is enough in my\nopinion.\n\nWhile on it, I also noticed that hstore_plpython does not actually\nneed a volatile pointer for plpython_to_hstore(). Also, as all those\nproblems are really unlikely going to happen in real-life cases,\nimproving this code only on HEAD looks enough to me.\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 14:59:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Mar 06, 2019 at 11:04:23AM +0900, Michael Paquier wrote:\n>> Another thing is that you cannot just return within a try block with\n>> what is added in PLyObject_FromJsonbContainer, or the error stack is\n>> not reset properly. So they should be replaced by breaks.\n\n> So, I have been poking at this stuff, and I am finishing with the\n> attached.\n\nThis patch had bit-rotted due to somebody else fooling with the\nvolatile-qualifiers situation. I fixed it up, tweaked a couple of\nthings, and pushed it.\n\n> Also, as all those\n> problems are really unlikely going to happen in real-life cases,\n> improving this code only on HEAD looks enough to me.\n\nYeah, I concur.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Apr 2019 17:56:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" }, { "msg_contents": "On Sat, Apr 06, 2019 at 05:56:24PM -0400, Tom Lane wrote:\n> This patch had bit-rotted due to somebody else fooling with the\n> volatile-qualifiers situation. 
I fixed it up, tweaked a couple of\n> things, and pushed it.\n\nThanks, Tom!\n--\nMichael", "msg_date": "Sun, 7 Apr 2019 10:58:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix memleaks and error handling in jsonb_plpython" } ]
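The class of bug fixed in this thread — Python references acquired inside a PG_TRY block leaking when an error escapes before the matching Py_XDECREF — can be sketched without the Python C API. A minimal Python sketch with a toy reference counter (`Ref`, `make_item()`, and `build_list()` are invented stand-ins, not jsonb_plpython code):

```python
class Ref:
    """Toy refcounted handle standing in for a PyObject reference."""
    live = 0

    def __init__(self):
        Ref.live += 1

    def decref(self):
        Ref.live -= 1

def make_item(fail=False):
    """Stand-in for a Python C API call that can fail by returning None."""
    return None if fail else Ref()

def build_list(n, fail_at=None):
    """Mirror of the fixed pattern: on any error, release every reference
    acquired so far (the Py_XDECREF calls in the PG_CATCH block)."""
    out = []
    try:
        for i in range(n):
            item = make_item(fail=(i == fail_at))
            if item is None:
                raise MemoryError("allocation failed")
            out.append(item)
    except Exception:
        for ref in out:          # cleanup on the error path
            ref.decref()
        raise
    return out

refs = build_list(3)             # normal path: caller owns the references
assert Ref.live == 3
for ref in refs:
    ref.decref()

try:
    build_list(5, fail_at=2)     # error path: partial result is released
except MemoryError:
    pass
assert Ref.live == 0             # nothing leaked
```

The except clause plays the role of the PG_CATCH block here: every reference taken so far is released before the error is re-thrown, so a failure mid-construction leaks nothing.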
[ { "msg_contents": "Hi all,\n\nJoe's message here has reminded me that we have lacked a lot of error\nhandling around CloseTransientFile():\nhttps://www.postgresql.org/message-id/c49b69ec-e2f7-ff33-4f17-0eaa4f2cef27@joeconway.com\n\nThis has been mentioned by Alvaro a couple of months ago (cannot find\nthe thread about that at quick glance), and I just forgot about it at\nthat time. Anyway, attached is a patch to do some cleanup for all\nthat:\n- Switch OpenTransientFile to read-only where sufficient.\n- Add more error handling for CloseTransientFile\nA major take of this patch is to make sure that the new error messages\ngenerated have an elevel consistent with their neighbors.\n\nJust on time for this last CF. Thoughts?\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 11:33:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On 2/28/19 9:33 PM, Michael Paquier wrote:\n> Hi all,\n> \n> Joe's message here has reminded me that we have lacked a lot of error\n> handling around CloseTransientFile():\n> https://www.postgresql.org/message-id/c49b69ec-e2f7-ff33-4f17-0eaa4f2cef27@joeconway.com\n> \n> This has been mentioned by Alvaro a couple of months ago (cannot find\n> the thread about that at quick glance), and I just forgot about it at\n> that time. Anyway, attached is a patch to do some cleanup for all\n> that:\n> - Switch OpenTransientFile to read-only where sufficient.\n> - Add more error handling for CloseTransientFile\n> A major take of this patch is to make sure that the new error messages\n> generated have an elevel consistent with their neighbors.\n> \n> Just on time for this last CF. Thoughts?\n\nSeems like it would be better to modify the arguments to\nCloseTransientFile() to include the filename being closed, errorlevel,\nand fail_on_error or something similar. 
Then all the repeated ereport\nstanzas could be eliminated.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Fri, 1 Mar 2019 17:05:54 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On Fri, Mar 01, 2019 at 05:05:54PM -0500, Joe Conway wrote:\n> Seems like it would be better to modify the arguments to\n> CloseTransientFile() to include the filename being closed, errorlevel,\n> and fail_on_error or something similar. Then all the repeated ereport\n> stanzas could be eliminated.\n\nSure. Now some code paths close file descriptors without having at\nhand the file name, which would mean that we'd need to pass NULL as\nargument in this case. That's not really elegant in my opinion. And\nhaving a consistent mapping with the system's close() is not really\nbad to me either..\n--\nMichael", "msg_date": "Sat, 2 Mar 2019 09:40:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "Overall the patch looks good and according to the previous discussion fulfils its purpose.\r\n\r\nIt might be worthwhile to also check for errors on close in SaveSlotToPath().\r\n\r\n pgstat_report_wait_end();\r\n\r\n CloseTransientFile(fd);\r\n\r\n /* rename to permanent file, fsync file and directory */\r\n if (rename(tmppath, path) != 0)", "msg_date": "Wed, 06 Mar 2019 14:54:52 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On Fri, Mar 1, 2019 at 5:06 PM Joe Conway <mail@joeconway.com> wrote:\n> Seems like it would be better to modify the arguments to\n> CloseTransientFile() to 
include the filename being closed, errorlevel,\n> and fail_on_error or something similar. Then all the repeated ereport\n> stanzas could be eliminated.\n\nHmm. I'm not sure that really saves much in terms of notation, and\nit's less flexible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 6 Mar 2019 16:09:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On Wed, Mar 06, 2019 at 02:54:52PM +0000, Georgios Kokolatos wrote:\n> Overall the patch looks good and according to the previous\n> discussion fulfils its purpose. \n> \n> It might be worthwhile to also check for errors on close in\n> SaveSlotToPath().\n\nThanks for the feedback, added. I have spent some time\ndouble-checking this stuff, and noticed that the new errors in\nStartupReplicationOrigin() and CheckPointReplicationOrigin() should be\nswitched from ERROR to PANIC to be consistent. 
One message in\ndsm_impl_mmap() was not consistent either.\n\nAre there any objections if I commit this patch?\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 10:56:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" },
{ "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThe second version of this patch seems to be in order and ready for committer.\r\n\r\nThank you for taking the time to code!\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 07 Mar 2019 08:14:53 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" },
{ "msg_contents": "On 2019-Mar-07, Michael Paquier wrote:\n\n> #else\n> -\tclose(fd);\n> +\tif (close(fd))\n> +\t{\n> +\t\tfprintf(stderr, _(\"%s: could not close file \\\"%s\\\": %s\"),\n> +\t\t\t\tprogname, ControlFilePath, strerror(errno));\n> +\t\texit(EXIT_FAILURE);\n> +\t}\n> #endif\n\nI think this one needs a terminating \\n.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 22:00:05 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" },
{ "msg_contents": "On Thu, Mar 07, 2019 at 10:00:05PM -0300, Alvaro Herrera wrote:\n> I think this one needs a terminating \\n.\n\nArgh... 
Thanks for the lookup, Alvaro.\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 10:23:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On Fri, Mar 08, 2019 at 10:23:24AM +0900, Michael Paquier wrote:\n> Argh... Thanks for the lookup, Alvaro.\n\nAnd committed, after an extra pass to beautify the whole experience.\n--\nMichael", "msg_date": "Sat, 9 Mar 2019 08:53:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "Hi,\n\nOn 2019-03-07 10:56:25 +0900, Michael Paquier wrote:\n> diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c\n> index f5cf9ffc9c..bce4274362 100644\n> --- a/src/backend/access/heap/rewriteheap.c\n> +++ b/src/backend/access/heap/rewriteheap.c\n> @@ -1202,7 +1202,10 @@ heap_xlog_logical_rewrite(XLogReaderState *r)\n> \t\t\t\t errmsg(\"could not fsync file \\\"%s\\\": %m\", path)));\n> \tpgstat_report_wait_end();\n>\n> -\tCloseTransientFile(fd);\n> +\tif (CloseTransientFile(fd))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t errmsg(\"could not close file \\\"%s\\\": %m\", path)));\n> }\n...\n> diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\n> index 64679dd2de..21986e48fe 100644\n> --- a/src/backend/access/transam/twophase.c\n> +++ b/src/backend/access/transam/twophase.c\n> @@ -1297,7 +1297,11 @@ ReadTwoPhaseFile(TransactionId xid, bool missing_ok)\n> \t}\n>\n> \tpgstat_report_wait_end();\n> -\tCloseTransientFile(fd);\n> +\n> +\tif (CloseTransientFile(fd))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t errmsg(\"could not close file \\\"%s\\\": %m\", path)));\n>\n> \thdr = (TwoPhaseFileHeader *) buf;\n> \tif (hdr->magic != TWOPHASE_MAGIC)\n> diff 
--git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 0fdd82a287..c7047738b6 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -3469,7 +3469,10 @@ XLogFileCopy(XLogSegNo destsegno, TimeLineID srcTLI, XLogSegNo srcsegno,\n> \t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t errmsg(\"could not close file \\\"%s\\\": %m\", tmppath)));\n>\n> -\tCloseTransientFile(srcfd);\n> +\tif (CloseTransientFile(srcfd))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t errmsg(\"could not close file \\\"%s\\\": %m\", path)));\n>\n> \t/*\n> \t * Now move the segment into place with its final name.\n...\n\nThis seems like an odd set of changes to me. What is this supposed to\nbuy us? The commit message says:\n 2) When opening transient files, it is up to the caller to close the\n file descriptors opened. In error code paths, CloseTransientFile() gets\n called to clean up things before issuing an error. However in normal\n exit paths, a lot of callers of CloseTransientFile() never actually\n reported errors, which could leave a file descriptor open without\n knowing about it. This is an issue I complained about a couple of\n times, but never had the courage to write and submit a patch, so here we\n go.\n\nbut that reasoning seems bogus to me. For one, on just about any\nplatform close always closes the fd, even when returning an error\n(unless you pass in a bad fd, in which case it obviously doesn't). So\nthe reasoning that this fixes unnoticed fd leaks doesn't really seem to\nmake sense. 
For another, even if it did, throwing an ERROR seems to\nachieve very little: We continue with a leaked fd *AND* we cause the\noperation to error out.\n\nI can see reasoning for:\n- LOG, so it can be noticed, but operations continue to work\n- FATAL, to fix the leak\n- PANIC, so we recover from the problem, in case of the close indicating\n a durability issue\n\n commit 9ccdd7f66e3324d2b6d3dec282cfa9ff084083f1\n Author: Thomas Munro <tmunro@postgresql.org>\n Date: 2018-11-19 13:31:10 +1300\n\n PANIC on fsync() failure.\n\nbut ERROR seems to have very little going for it.\n\nThe durability argument doesn't seem to apply for the cases where we\npreviously fsynced the file, a significant fraction of the locations you\ntouched.\n\nAnd if your goal was just to achieve consistency, I also don't\nunderstand, because you left plenty close()'s unchecked? E.g. you added\nan error check in get_controlfile(), but not one in\nReadControlFile(). alterSystemSetConfigFile() writes, but you didn't add\none.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 4 Oct 2019 13:39:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" }, { "msg_contents": "On Fri, Oct 04, 2019 at 01:39:38PM -0700, Andres Freund wrote:\n> but that reasoning seems bogus to me. For one, on just about any\n> platform close always closes the fd, even when returning an error\n> (unless you pass in a bad fd, in which case it obviously doesn't). So\n> the reasoning that this fixes unnoticed fd leaks doesn't really seem to\n> make sense. 
For another, even if it did, throwing an ERROR seems to\n> achieve very little: We continue with a leaked fd *AND* we cause the\n> operation to error out.\n\nI have read again a couple of times the commit log, and this mentions\nto let users know that a fd is leaking, not that it fixes things.\nStill we get to know about it, while previously it was not possible.\nIn some cases we may see errors in close() after a previous write(2).\nOf course this does not apply to all the code paths patched here, but\nit seems to me that's a good habit to spread, no?\n\n> I can see reasoning for:\n> - LOG, so it can be noticed, but operations continue to work\n> - FATAL, to fix the leak\n> - PANIC, so we recover from the problem, in case of the close indicating\n> a durability issue\n\nLOG or WARNING would not be visible enough and would likely be skipped\nby users. Not sure that this justifies a FATAL either, and PANIC\nwould cause more harm than necessary, so for most of them ERROR\nsounded like a good compromise, still the elevel choice is not\ninnocent depending on the code paths patched, because the elevel used\nis consistent with the error handling of the surroundings.\n\n> And if your goal was just to achieve consistency, I also don't\n> understand, because you left plenty close()'s unchecked? E.g. you added\n> an error check in get_controlfile(), but not one in\n> ReadControlFile(). alterSystemSetConfigFile() writes, but you didn't add\n> one.\n\nBecause I have not considered these when looking at transient files.\nThat may be worth an extra lookup.\n--\nMichael", "msg_date": "Mon, 14 Oct 2019 15:02:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Tighten error control for OpenTransientFile/CloseTransientFile" } ]
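The pattern the committed patch spreads through the tree — check the result of close() and surface the file name and errno instead of silently discarding them — can be sketched as follows. This is an illustrative stand-in (`close_transient()` and its message text only mirror the ereport() style; it is not PostgreSQL code):

```python
import os
import tempfile

def close_transient(fd, path):
    """Close fd and surface failures instead of discarding them, in the
    spirit of the patched CloseTransientFile() call sites (the function
    name and message text here are illustrative, not PostgreSQL's)."""
    try:
        os.close(fd)
    except OSError as exc:
        # analogue of: ereport(ERROR, ... "could not close file \"%s\": %m")
        raise RuntimeError('could not close file "%s": %s' % (path, exc.strerror))

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"some transient state")
    close_transient(fd, path)        # normal path: closes silently

    reported = False
    try:
        close_transient(fd, path)    # fd already closed -> EBADF
    except RuntimeError as exc:
        reported = "could not close file" in str(exc)
    assert reported                  # the failure is no longer silent
finally:
    os.unlink(path)
```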
[ { "msg_contents": "Hi,\n\nRobert, I CCed you because you are the author of that commit. Before\nthat commit (\"Rewrite the code that applies scan/join targets to\npaths.\"), apply_scanjoin_target_to_paths() had a boolean parameter named\nmodify_in_place, and used apply_projection_to_path(), not\ncreate_projection_path(), to adjust scan/join paths when modify_in_place\nwas true, which allowed us to save cycles at plan creation time by\navoiding creating projection paths, which I think would be a good thing,\nbut that commit removed that. Why?\n\nThe real reason for this question is: I noticed that projection paths\nput on foreign paths will make it hard for FDWs to detect whether there\nis an already-well-enough-sorted remote path in the pathlist for the\nfinal scan/join relation as an input relation to GetForeignUpperPaths()\nfor the UPPERREL_ORDERED step (the IsA(path, ForeignPath) test would not\nwork well enough to detect remote paths!), so I'm wondering whether we\ncould revive that parameter like the attached, to avoid the overhead at\nplan creation time and to make the FDW work easy. Maybe I'm missing\nsomething, though.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 01 Mar 2019 19:45:45 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Question about commit 11cf92f6e2e13c0a6e3f98be3e629e6bd90b74d5" }, { "msg_contents": "On Fri, Mar 1, 2019 at 5:47 AM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> Robert, I CCed you because you are the author of that commit. 
Before\n> that commit (\"Rewrite the code that applies scan/join targets to\n> paths.\"), apply_scanjoin_target_to_paths() had a boolean parameter named\n> modify_in_place, and used apply_projection_to_path(), not\n> create_projection_path(), to adjust scan/join paths when modify_in_place\n> was true, which allowed us to save cycles at plan creation time by\n> avoiding creating projection paths, which I think would be a good thing,\n> but that commit removed that. Why?\n\nOne of the goals of the commit was to properly account for the cost of\ncomputing the target list. Before parallel query and partition-wise\njoin, it didn't really matter what the cost of computing the target\nlist was, because every path was going to have to do the same work, so\nit was just a constant factor getting added to every path. However,\nparallel query and partition-wise join mean that some paths can\ncompute the final target list more cheaply than others, and that turns\nout to be important for things like PostGIS. One of the complaints\nthat provoked that change was that PostGIS was picking non-parallel\nplans even when a parallel plan was substantially superior, because it\nwasn't accounting for the fact that in the parallel plan, the cost of\ncomputing the target-list could be shared across all the workers\nrather than paid entirely by the leader.\n\nIn order to accomplish this goal of properly accounting for the cost\nof computing the target list, we need to create a new path, not just\njam the target list into an already-costed path. 
Note that we did\nsome performance optimization around the same time to minimize the\nperformance hit here (see d7c19e62a8e0a634eb6b29f8f1111d944e57081f,\nand I think there may have been something else as well although I\ncan't find it right now).\n\n> The real reason for this question is: I noticed that projection paths\n> put on foreign paths will make it hard for FDWs to detect whether there\n> is an already-well-enough-sorted remote path in the pathlist for the\n> final scan/join relation as an input relation to GetForeignUpperPaths()\n> for the UPPERREL_ORDERED step (the IsA(path, ForeignPath) test would not\n> work well enough to detect remote paths!), so I'm wondering whether we\n> could revive that parameter like the attached, to avoid the overhead at\n> plan creation time and to make the FDW work easy. Maybe I'm missing\n> something, though.\n\nI think this would be a bad idea, for the reasons explained above. I\nalso think that it's probably the wrong direction on principle. I\nthink the way we account for target lists is still pretty crude and\nneeds to be thought of as more of a real planning step and less as\nsomething that we can just ignore when it's inconvenient for some\nreason. I think the FDW just needs to look through the projection\npath and see what's underneath, but also take the projection path's\ntarget list into account when decide whether more can be pushed down.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 1 Mar 2019 11:10:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about commit 11cf92f6e2e13c0a6e3f98be3e629e6bd90b74d5" }, { "msg_contents": "(2019/03/02 1:10), Robert Haas wrote:\n> On Fri, Mar 1, 2019 at 5:47 AM Etsuro Fujita\n> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> Robert, I CCed you because you are the author of that commit. 
Before\n>> that commit (\"Rewrite the code that applies scan/join targets to\n>> paths.\"), apply_scanjoin_target_to_paths() had a boolean parameter named\n>> modify_in_place, and used apply_projection_to_path(), not\n>> create_projection_path(), to adjust scan/join paths when modify_in_place\n>> was true, which allowed us to save cycles at plan creation time by\n>> avoiding creating projection paths, which I think would be a good thing,\n>> but that commit removed that. Why?\n>\n> One of the goals of the commit was to properly account for the cost of\n> computing the target list. Before parallel query and partition-wise\n> join, it didn't really matter what the cost of computing the target\n> list was, because every path was going to have to do the same work, so\n> it was just a constant factor getting added to every path. However,\n> parallel query and partition-wise join mean that some paths can\n> compute the final target list more cheaply than others, and that turns\n> out to be important for things like PostGIS. One of the complaints\n> that provoked that change was that PostGIS was picking non-parallel\n> plans even when a parallel plan was substantially superior, because it\n> wasn't accounting for the fact that in the parallel plan, the cost of\n> computing the target-list could be shared across all the workers\n> rather than paid entirely by the leader.\n>\n> In order to accomplish this goal of properly accounting for the cost\n> of computing the target list, we need to create a new path, not just\n> jam the target list into an already-costed path. 
Note that we did\n> some performance optimization around the same time to minimize the\n> performance hit here (see d7c19e62a8e0a634eb6b29f8f1111d944e57081f,\n> and I think there may have been something else as well although I\n> can't find it right now).\n\napply_projection_to_path() not only jams the given tlist into the \nexisting path but updates its tlist eval costs appropriately except for \nthe cases of Gather and GatherMerge:\n\n /*\n * If the path happens to be a Gather or GatherMerge path, we'd like to\n * arrange for the subpath to return the required target list so that\n * workers can help project. But if there is something that is not\n * parallel-safe in the target expressions, then we can't.\n */\n if ((IsA(path, GatherPath) ||IsA(path, GatherMergePath)) &&\n is_parallel_safe(root, (Node *) target->exprs))\n {\n /*\n * We always use create_projection_path here, even if the \nsubpath is\n * projection-capable, so as to avoid modifying the subpath in \nplace.\n * It seems unlikely at present that there could be any other\n * references to the subpath, but better safe than sorry.\n *\n--> * Note that we don't change the parallel path's cost estimates; it\n--> * might be appropriate to do so, to reflect the fact that the \nbulk of\n--> * the target evaluation will happen in workers.\n */\n if (IsA(path, GatherPath))\n {\n GatherPath *gpath = (GatherPath *) path;\n\n gpath->subpath = (Path *)\n create_projection_path(root,\n gpath->subpath->parent,\n gpath->subpath,\n target);\n }\n else\n {\n GatherMergePath *gmpath = (GatherMergePath *) path;\n\n gmpath->subpath = (Path *)\n create_projection_path(root,\n gmpath->subpath->parent,\n gmpath->subpath,\n target);\n }\n }\n\n>> The real reason for this question is: I noticed that projection paths\n>> put on foreign paths will make it hard for FDWs to detect whether there\n>> is an already-well-enough-sorted remote path in the pathlist for the\n>> final scan/join relation as an input relation to 
GetForeignUpperPaths()\n>> for the UPPERREL_ORDERED step (the IsA(path, ForeignPath) test would not\n>> work well enough to detect remote paths!), so I'm wondering whether we\n>> could revive that parameter like the attached, to avoid the overhead at\n>> plan creation time and to make the FDW work easy. Maybe I'm missing\n>> something, though.\n>\n> I think this would be a bad idea, for the reasons explained above. I\n> also think that it's probably the wrong direction on principle. I\n> think the way we account for target lists is still pretty crude and\n> needs to be thought of as more of a real planning step and less as\n> something that we can just ignore when it's inconvenient for some\n> reason.\n\nI'm not sure I 100% agree with you, but I also think we need to give \nmore thought to the tlist-eval-cost adjustment.\n\n> I think the FDW just needs to look through the projection\n> path and see what's underneath, but also take the projection path's\n> target list into account when decide whether more can be pushed down.\n\nOK, I'll go with that.\n\nThanks for the explanation!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 05 Mar 2019 17:00:43 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: Question about commit 11cf92f6e2e13c0a6e3f98be3e629e6bd90b74d5" }, { "msg_contents": "On Tue, Mar 5, 2019 at 3:00 AM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> apply_projection_to_path() not only jams the given tlist into the\n> existing path but updates its tlist eval costs appropriately except for\n> the cases of Gather and GatherMerge:\n\nI had forgotten that detail, but I don't think it changes the basic\npicture. Once you've added a bunch of Paths to a RelOptInfo, it's too\nlate to change their *relative* cost, because add_path() puts the list\nin a certain order, and adjusting the path costs won't change that\nordering. 
You've got to have the costs already correct at the time\nadd_path() is first called for any given Path.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 5 Mar 2019 11:03:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about commit 11cf92f6e2e13c0a6e3f98be3e629e6bd90b74d5" } ]
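The exchange above turns on add_path() fixing each path's position in the pathlist at insertion time, so cost adjustments made afterwards cannot change the relative ordering already decided. A minimal illustrative sketch of that behaviour (toy Python model, not PostgreSQL source — the real add_path() also prunes dominated paths and compares startup cost, rows, and pathkeys):

```python
# Toy model of the point made above: a path's cost must be correct
# *before* it is handed to add_path(), because the pathlist ordering is
# decided at insertion time and is never re-sorted afterwards.
import bisect

class Path:
    def __init__(self, name, total_cost):
        self.name = name
        self.total_cost = total_cost

class RelOptInfo:
    def __init__(self):
        self.pathlist = []  # kept sorted by total_cost, cheapest first

    def add_path(self, path):
        # Insertion-time decision: the position is fixed by the cost *now*.
        costs = [p.total_cost for p in self.pathlist]
        self.pathlist.insert(bisect.bisect_left(costs, path.total_cost), path)

rel = RelOptInfo()
rel.add_path(Path("seqscan", 100.0))
rel.add_path(Path("indexscan", 80.0))

# Later "jamming" a target list into the cheapest path and bumping its
# cost does not re-sort the list: the ordering decision is already made,
# which is why a fresh (projection) path with correct costs is needed.
rel.pathlist[0].total_cost += 50.0          # indexscan now costs 130
print([p.name for p in rel.pathlist])       # still ['indexscan', 'seqscan']
```

This is the same reason apply_projection_to_path()'s after-the-fact cost update is insufficient once the path has been added: the pathlist's competitive decisions have already been taken against the old cost.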
[ { "msg_contents": "for a createStmt, it will call transformCreateStmt, and then\nheap_create_with_catalog.\nbut looks it just check the if_not_exists in transformCreateStmt.\n\nso there is a chance that when the transformCreateStmt is called, the table\nis not created, but before the heap_create_with_catalog is called, the\ntable was created. if so, the \"if not exits\" will raise error \"ERROR:\nrelation \"xxxx\" already exists\"\n\nI can reproduce this with gdb,\n\ndemo=# create table if not exists dddd2 (a int);\nERROR: relation \"dddd2\" already exists\n\nis it designed as this on purpose or is it a bug?\n\nI am using the lates commit on github now.\n\nfor a createStmt,  it will call transformCreateStmt,  and then heap_create_with_catalog. but looks it just check the if_not_exists in transformCreateStmt.  so there is a chance that when the transformCreateStmt is called, the table is not created, but before the heap_create_with_catalog is called,  the table was created.  if so, the \"if not exits\" will raise error  \"ERROR:  relation \"xxxx\" already exists\"I can reproduce this with gdb, demo=# create table if not exists dddd2 (a int);ERROR:  relation \"dddd2\" already existsis it designed as this on purpose or is it a bug?I am using the lates commit on github now.", "msg_date": "Fri, 1 Mar 2019 19:17:04 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Looks heap_create_with_catalog ignored the if_not_exists options" }, { "msg_contents": "On Fri, Mar 01, 2019 at 07:17:04PM +0800, Andy Fan wrote:\n> for a createStmt, it will call transformCreateStmt, and then\n> heap_create_with_catalog.\n> but looks it just check the if_not_exists in transformCreateStmt.\n> \n> is it designed as this on purpose or is it a bug?\n\nThat's a bug. 
Andreas Karlsson and I have been discussing it a couple\nof days ago actually:\nhttps://www.postgresql.org/message-id/20190215081451.GD2240@paquier.xyz\n\nFixing this is not as straight-forward as it seems, as it requires\nshuffling a bit the code related to a CTAS creation so as all code\npaths check at the same time for an existing relation. Based on my\nfirst impressions, I got the feeling that it would be rather invasive\nand not worth a back-patch.\n--\nMichael", "msg_date": "Fri, 1 Mar 2019 21:35:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Looks heap_create_with_catalog ignored the if_not_exists options" }, { "msg_contents": "Thank you Michael!\n\n What can I do if I'm sure I will not use the CTAS creation ? Take a look\nat the \"heap_create_with_catalog\" function, it check it and raise error.\n Even I change it to \"check it && if_not_existing\", raise error, it is\nstill be problematic since we may some other session create between the\ncheck and the real creation end.\n\nLooks we need some locking there, but since PG is processes model, I even\ndon't know how to sync some code among processes in PG (any hint on this\nwill be pretty good as well).\n\nOn Fri, Mar 1, 2019 at 8:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Mar 01, 2019 at 07:17:04PM +0800, Andy Fan wrote:\n> > for a createStmt, it will call transformCreateStmt, and then\n> > heap_create_with_catalog.\n> > but looks it just check the if_not_exists in transformCreateStmt.\n> >\n> > is it designed as this on purpose or is it a bug?\n>\n> That's a bug. Andreas Karlsson and I have been discussing it a couple\n> of days ago actually:\n> https://www.postgresql.org/message-id/20190215081451.GD2240@paquier.xyz\n>\n> Fixing this is not as straight-forward as it seems, as it requires\n> shuffling a bit the code related to a CTAS creation so as all code\n> paths check at the same time for an existing relation. 
Based on my\n> first impressions, I got the feeling that it would be rather invasive\n> and not worth a back-patch.\n> --\n> Michael\n>\n\nThank you Michael! What can I do if I'm sure I will not use the CTAS creation ?   Take a look at the \"heap_create_with_catalog\" function, it check it and raise error.   Even I  change it to  \"check it && if_not_existing\",  raise error, it is still be problematic since we may some other session create between the check and the real creation end.   Looks we need some locking there, but since PG is processes model,  I even don't know how to sync some code among processes in PG (any hint on this will be pretty good as well).On Fri, Mar 1, 2019 at 8:35 PM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Mar 01, 2019 at 07:17:04PM +0800, Andy Fan wrote:\n> for a createStmt,  it will call transformCreateStmt,  and then\n> heap_create_with_catalog.\n> but looks it just check the if_not_exists in transformCreateStmt.\n> \n> is it designed as this on purpose or is it a bug?\n\nThat's a bug.  Andreas Karlsson and I have been discussing it a couple\nof days ago actually:\nhttps://www.postgresql.org/message-id/20190215081451.GD2240@paquier.xyz\n\nFixing this is not as straight-forward as it seems, as it requires\nshuffling a bit the code related to a CTAS creation so as all code\npaths check at the same time for an existing relation.  
Based on my\nfirst impressions, I got the feeling that it would be rather invasive\nand not worth a back-patch.\n--\nMichael", "msg_date": "Sat, 2 Mar 2019 00:15:19 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Looks heap_create_with_catalog ignored the if_not_exists options" }, { "msg_contents": "On Sat, Mar 02, 2019 at 12:15:19AM +0800, Andy Fan wrote:\n> Looks we need some locking there, but since PG is processes model, I even\n> don't know how to sync some code among processes in PG (any hint on this\n> will be pretty good as well).\n\nNo, you shouldn't need any kind of extra locking here.\n--\nMichael", "msg_date": "Sat, 2 Mar 2019 19:44:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Looks heap_create_with_catalog ignored the if_not_exists options" } ]
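The bug in the thread above is a classic check-then-act window: the IF NOT EXISTS test runs early (in transformCreateStmt), while the catalog insertion happens later (in heap_create_with_catalog), so a table created by a concurrent session in between still errors out. A minimal sketch of that window (illustrative Python only, not backend code — in the real fix the existence check simply needs to happen together with creation, not via extra locking):

```python
# Sketch of the race described above: the IF NOT EXISTS check and the
# actual creation are separated in time, leaving a window in which a
# concurrent session can create the same table.
existing_tables = set()

def create_table(name, if_not_exists, concurrent_create=None):
    # Step 1: early existence check (analysis/transform phase).
    if name in existing_tables:
        if if_not_exists:
            return "skipping: relation already exists"
        raise RuntimeError(f'relation "{name}" already exists')

    # ...window in which another session can create the same table...
    if concurrent_create:
        concurrent_create()

    # Step 2: actual creation; errors without honouring IF NOT EXISTS.
    if name in existing_tables:
        raise RuntimeError(f'relation "{name}" already exists')
    existing_tables.add(name)
    return "created"

create_table("t1", if_not_exists=True)
try:
    # A second session sneaks in between the check and the creation.
    create_table("t2", if_not_exists=True,
                 concurrent_create=lambda: existing_tables.add("t2"))
except RuntimeError as e:
    print(e)   # relation "t2" already exists, despite IF NOT EXISTS
```

Moving the IF NOT EXISTS decision to step 2 (the point where creation actually happens) closes the window, which is the shape of fix discussed in the linked thread.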
[ { "msg_contents": "Hello,\r\n\r\nWe probably identified a bug in the pg_background implementation: https://github.com/vibhorkum/pg_background \r\nIt is a race condition when starting the process and BGWH_STOPPED is returned - see the pull request for more info: https://github.com/RekGRpth/pg_background/pull/1 \r\n\r\nI think I have created a fix (in one of the forks of the original repo, https://github.com/RekGRpth/pg_background, which already addresses some compilation issues), but then again I am not very familiar with PG API and would very much appreciate if anyone could review the bug and approve the solution.\r\n\r\nRegards\r\n\r\nMartin\r\n\r\n-----Original Message-----\r\nFrom: pgsql-hackers-owner@postgresql.org [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of amul sul\r\nSent: Thursday, November 24, 2016 4:47 AM\r\nTo: PostgreSQL-development\r\nSubject: pg_background contrib module proposal\r\n\r\nHi All,\r\n\r\nI would like to take over pg_background patch and repost for discussion and review.\r\n\r\nInitially Robert Haas has share this for parallelism demonstration[1] and abandoned later with summary of open issue[2] with this pg_background patch need to be fixed, most of them seems to be addressed in core except handling of type exists without binary send/recv functions and documentation.\r\nI have added handling for types that don't have binary send/recv functions in the attach patch and will work on documentation at the end.\r\n\r\nOne concern with this patch is code duplication with exec_simple_query(), we could consider Jim Nasby’s patch[3] to overcome this, but certainly we will end up by complicating\r\nexec_simple_query() to make pg_background happy.\r\n\r\nAs discussed previously[1] pg_background is a contrib module that lets you launch arbitrary command in a background worker.\r\n\r\n• VACUUM in background\r\n• Autonomous transaction implementation better than dblink way (i.e.\r\nno separate authentication required).\r\n• Allows to 
perform task like CREATE INDEX CONCURRENTLY from a procedural language.\r\n\r\nThis module comes with following SQL APIs:\r\n\r\n• pg_background_launch : This API takes SQL command, which user wants to execute, and size of queue buffer.\r\n This function returns the process id of background worker.\r\n• pg_background_result : This API takes the process id as input\r\nparameter and returns the result of command\r\n executed thought the background worker.\r\n• pg_background_detach : This API takes the process id and detach the background process which is waiting for user to read its results.\r\n\r\n\r\nHere's an example of running vacuum and then fetching the results.\r\nNotice that the\r\nnotices from the original session are propagated to our session; if an error had occurred, it would be re-thrown locally when we try to read the results.\r\n\r\npostgres=# create table foo (a int);\r\nCREATE TABLE\r\npostgres=# insert into foo values(generate_series(1,5)); INSERT 0 5\r\n\r\npostgres=# select pg_background_launch('vacuum verbose foo'); pg_background_launch\r\n----------------------\r\n 65427\r\n(1 row)\r\n\r\npostgres=# select * from pg_background_result(65427) as (x text);\r\nINFO: vacuuming \"public.foo\"\r\nINFO: \"foo\": found 0 removable, 5 nonremovable row versions in 1 out of 1 pages\r\nDETAIL: 0 dead row versions cannot be removed yet.\r\nThere were 0 unused item pointers.\r\nSkipped 0 pages due to buffer pins.\r\n0 pages are entirely empty.\r\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\r\n x\r\n--------\r\nVACUUM\r\n(1 row)\r\n\r\n\r\nThanks to Vibhor kumar, Rushabh Lathia and Robert Haas for feedback.\r\n\r\nPlease let me know your thoughts, and thanks for reading.\r\n\r\n[1]. https://www.postgresql.org/message-id/CA%2BTgmoam66dTzCP8N2cRcS6S6dBMFX%2BJMba%2BmDf68H%3DKAkNjPQ%40mail.gmail.com\r\n[2]. https://www.postgresql.org/message-id/CA%2BTgmobPiT_3Qgjeh3_v%2B8Cq2nMczkPyAYernF_7_W9a-6T1PA%40mail.gmail.com\r\n[3]. 
https://www.postgresql.org/message-id/54541779.1010906%40BlueTreble.com\r\n\r\nRegards,\r\nAmul\r\n", "msg_date": "Fri, 1 Mar 2019 11:25:03 +0000", "msg_from": "=?utf-8?B?xaB2b3JjIE1hcnRpbg==?= <svorc@sefira.cz>", "msg_from_op": true, "msg_subject": "pg_background and BGWH_STOPPED" } ]
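The startup race reported above hinges on how a BGWH_STOPPED status is interpreted: a very short-lived worker can legitimately finish and reach "stopped" before the launching backend ever observes "started". A conceptual sketch of the two interpretations (toy Python model only — the status names mirror the backend's BgwHandleStatus, but the `worker_ever_ran` signal stands in for however the real code detects that the worker actually did its work, e.g. via its message queue, and is an assumption of this sketch):

```python
# Conceptual model of the BGWH_STOPPED-at-startup race: "stopped" is not
# necessarily "failed to start" -- the worker may already have run to
# completion before the parent looked.
BGWH_STARTED, BGWH_STOPPED = "started", "stopped"

def classify_startup(status, worker_ever_ran):
    # Buggy interpretation: any BGWH_STOPPED is treated as a launch failure.
    buggy = "error" if status == BGWH_STOPPED else "ok"
    # Race-aware interpretation: stopped-after-running is fine; only a
    # worker that never ran indicates a genuine startup failure.
    fixed = "ok" if status == BGWH_STARTED or worker_ever_ran else "error"
    return buggy, fixed

# A fast worker that finished before the parent checked its status:
print(classify_startup(BGWH_STOPPED, worker_ever_ran=True))   # ('error', 'ok')
```

The referenced pull request addresses exactly this distinction: treating "stopped" as an error only when the worker genuinely never got to run.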
[ { "msg_contents": "Hello all!\n\nI am Sumukha PK a student of NITK. I am interested in the WAL-G backup tool. I haven’t been able to catch hold of anyone through the IRC channels so I need someone to point me to appropriate resources so that I can be introduced to it. I am proficient in Golang an would be interested to work on this project.\n\nThanks\n\n\nHello all! I am Sumukha PK a student of NITK. I am interested in the WAL-G backup tool. I haven’t been able to catch hold of anyone through the IRC channels so I need someone to point me to appropriate resources so that I can be introduced to it. I am proficient in Golang an would be interested to work on this project. Thanks", "msg_date": "Fri, 1 Mar 2019 20:47:03 +0530", "msg_from": "Sumukha Pk <sumukhapk46@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC 2019" }, { "msg_contents": "Greetings,\n\n* Sumukha Pk (sumukhapk46@gmail.com) wrote:\n> I am Sumukha PK a student of NITK. I am interested in the WAL-G backup tool. I haven’t been able to catch hold of anyone through the IRC channels so I need someone to point me to appropriate resources so that I can be introduced to it. I am proficient in Golang an would be interested to work on this project.\n\nPlease work with Andrey on coming up with a project plan to be submitted\nformally through the GSoC website.\n\nThanks!\n\nStephen", "msg_date": "Sat, 2 Mar 2019 11:40:09 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC 2019" }, { "msg_contents": "Hello!\n\n> 2 марта 2019 г., в 21:40, Stephen Frost <sfrost@snowman.net> написал(а):\n> \n> Greetings,\n> \n> * Sumukha Pk (sumukhapk46@gmail.com) wrote:\n>> I am Sumukha PK a student of NITK. I am interested in the WAL-G backup tool. I haven’t been able to catch hold of anyone through the IRC channels so I need someone to point me to appropriate resources so that I can be introduced to it. 
I am proficient in Golang an would be interested to work on this project.\n> \n> Please work with Andrey on coming up with a project plan to be submitted\n> formally through the GSoC website.\n\nThanks, Stephen!\n\nSumukha Pk, that's great that you have subscribed to pgsql-hackers and joined PostgreSQL development list, here's a lot of useful information.\nIn GSOC, WAL-G is under PostgreSQL umbrella project. But usually we are not using this list for WAL-G related discussions.\nWe have a WAL-G Slack channel https://postgresteam.slack.com/messages/CA25P48P2 . To get there you can use this invite app https://postgres-slack.herokuapp.com/\nThere you can ask whatever you want to know about WAL-G or your GSOC proposal.\n\nThanks for your interest in our project!\n\nBest regards, Andrey Borodin.\n", "msg_date": "Sat, 2 Mar 2019 23:27:00 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GSoC 2019" } ]
[ { "msg_contents": "Dear PostgreSQL community:\n\nI am a MSc student in computer science working on data management research,\nand I will likely graduate this summer. I also was a participant of GSoC in\n2017 working with the NRNB organization on data standards conversion.\n\nI have been a user of pgAdmin 4 briefly, and I am interested in learning\nmore about the project and contribute to it. The projects web page lists 3\npotential projects, but I don't know which one I'm suitable for. Are there\nany suggestions on how to get started on exploring more about the Query\nTool in pgAdmin? For example, use cases with some sample data would be\nnice! I also checked out the pgadmin4 repository from git, and I'll start\nexploring the code shortly.\n\nThank you for your responses!\nHoward\n\nDear PostgreSQL community:I am a MSc student in computer science working on data management research, and I will likely graduate this summer. I also was a participant of GSoC in 2017 working with the NRNB organization on data standards conversion. I have been a user of pgAdmin 4 briefly, and I am interested in learning more about the project and contribute to it. The projects web page lists 3 potential projects, but I don't know which one I'm suitable for. Are there any suggestions on how to get started on exploring more about the Query Tool in pgAdmin? For example, use cases with some sample data would be nice! I also checked out the pgadmin4 repository from git, and I'll start exploring the code shortly. 
Thank you for your responses!Howard", "msg_date": "Fri, 1 Mar 2019 08:07:44 -0800", "msg_from": "Haoran Yu <haleyyew@gmail.com>", "msg_from_op": true, "msg_subject": "Interested in GSoC projects on pgAdmin 4" }, { "msg_contents": "Hi\n\n[Moving pgsql-hackers to BCC to prevent further cross-posting]\n\nOn Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:\n\n> Dear PostgreSQL community:\n>\n> I am a MSc student in computer science working on data management\n> research, and I will likely graduate this summer. I also was a participant\n> of GSoC in 2017 working with the NRNB organization on data standards\n> conversion.\n>\n\nCool.\n\n\n>\n> I have been a user of pgAdmin 4 briefly, and I am interested in learning\n> more about the project and contribute to it. The projects web page lists 3\n> potential projects, but I don't know which one I'm suitable for. Are there\n> any suggestions on how to get started on exploring more about the Query\n> Tool in pgAdmin? For example, use cases with some sample data would be\n> nice! I also checked out the pgadmin4 repository from git, and I'll start\n> exploring the code shortly.\n>\n\nThe choice of which project to work on is entirely down to your own\npersonal interests. You can even propose something else if you like, though\nthe listed projects are ones that are likely to be accepted as they're\nknown to be valuable.\n\nThe first project (Query Tool Graphing) has a simple use case of allowing\nany user to quickly render a graph of their data. More specific use cases\ncan be discussed as part of the project, but quite simply the idea is to\nallow users a quick and easy way to visualise their data. It would probably\nhelp to install PostGIS in a database, and then load the test data we used\nto play with it and see how that works (see\nhttps://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com\n- specifically, the Google Drive link). 
The GIS viewer is similar to what I\nhave in mind for this feature, except that instead of drawing maps, we'd\ndraw different types of graphs.\n\nThe second project is about supporting bytea in the Query Tool. Right now\nbytea data isn't rendered when you run a query - we show a placeholder\ninstead. The use case here is for users that store media files in bytea\ncolumns; we want to be able to automatically detect different file types\nand allow them to be viewed (or listened to) in the tool. When running in\nEdit more, the user should be able to add or replace data in a row by\nuploading from the browser. I don't have any sample data for this.\n\nThe final project listed is a long-term design goal of pgAdmin 4 (and\nprobably the hardest project). In pgAdmin 3 we had separate Query Tool and\nView/Edit data tools. In pgAdmin 4, we made them into the same tool, but\nrunning in two separate modes. The use case here is to prevent the need for\nthe user to choose what mode to open the tool in (Query Tool vs. View/Edit\nData), and to automatically detect whether any query would produce an\nupdateable resultset. This would allow the tool to offer all features at\nall times, and simple enable/disable in-place editing of the query results\nif there's no way to automatically generate an update/insert/delete\nstatement. This one is potentially hard as it will likely require some\namount of parsing of the query string to make that determination. You can\nsimply play with any test data to get a feel for this one.\n\nHope this helps.\n\nRegards, Dave.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHi[Moving pgsql-hackers to BCC to prevent further cross-posting]On Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:Dear PostgreSQL community:I am a MSc student in computer science working on data management research, and I will likely graduate this summer. 
I also was a participant of GSoC in 2017 working with the NRNB organization on data standards conversion. Cool. I have been a user of pgAdmin 4 briefly, and I am interested in learning more about the project and contribute to it. The projects web page lists 3 potential projects, but I don't know which one I'm suitable for. Are there any suggestions on how to get started on exploring more about the Query Tool in pgAdmin? For example, use cases with some sample data would be nice! I also checked out the pgadmin4 repository from git, and I'll start exploring the code shortly. The choice of which project to work on is entirely down to your own personal interests. You can even propose something else if you like, though the listed projects are ones that are likely to be accepted as they're known to be valuable.The first project (Query Tool Graphing) has a simple use case of allowing any user to quickly render a graph of their data. More specific use cases can be discussed as part of the project, but quite simply the idea is to allow users a quick and easy way to visualise their data. It would probably help to install PostGIS in a database, and then load the test data we used to play with it and see how that works (see https://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com - specifically, the Google Drive link). The GIS viewer is similar to what I have in mind for this feature, except that instead of drawing maps, we'd draw different types of graphs.The second project is about supporting bytea in the Query Tool. Right now bytea data isn't rendered when you run a query - we show a placeholder instead. The use case here is for users that store media files in bytea columns; we want to be able to automatically detect different file types and allow them to be viewed (or listened to) in the tool. When running in Edit more, the user should be able to add or replace data in a row by uploading from the browser. 
I don't have any sample data for this.The final project listed is a long-term design goal of pgAdmin 4 (and probably the hardest project). In pgAdmin 3 we had separate Query Tool and View/Edit data tools. In pgAdmin 4, we made them into the same tool, but running in two separate modes. The use case here is to prevent the need for the user to choose what mode to open the tool in (Query Tool vs. View/Edit Data), and to automatically detect whether any query would produce an updateable resultset. This would allow the tool to offer all features at all times, and simple enable/disable in-place editing of the query results if there's no way to automatically generate an update/insert/delete statement. This one is potentially hard as it will likely require some amount of parsing of the query string to make that determination. You can simply play with any test data to get a feel for this one.Hope this helps.Regards, Dave. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Mon, 4 Mar 2019 11:15:19 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: Interested in GSoC projects on pgAdmin 4" }, { "msg_contents": "Thanks Dave! I looked at your project descriptions and decided to create an\napplication for Query Tool Graphing. Here's my partially complete proposal:\nhttps://docs.google.com/document/d/1zZhpmZQZBuZNsJA1UeKHrXKFJQCfmKKvTsnNbdmtT_0/edit?usp=sharing\n\nEveryone's welcome to give me feedback/suggestions! Thank you!\n\nHoward\n\nOn Mon, Mar 4, 2019 at 3:15 AM Dave Page <dpage@pgadmin.org> wrote:\n\n> Hi\n>\n> [Moving pgsql-hackers to BCC to prevent further cross-posting]\n>\n> On Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:\n>\n>> Dear PostgreSQL community:\n>>\n>> I am a MSc student in computer science working on data management\n>> research, and I will likely graduate this summer. 
I also was a participant\n>> of GSoC in 2017 working with the NRNB organization on data standards\n>> conversion.\n>>\n>\n> Cool.\n>\n>\n>>\n>> I have been a user of pgAdmin 4 briefly, and I am interested in learning\n>> more about the project and contribute to it. The projects web page lists 3\n>> potential projects, but I don't know which one I'm suitable for. Are there\n>> any suggestions on how to get started on exploring more about the Query\n>> Tool in pgAdmin? For example, use cases with some sample data would be\n>> nice! I also checked out the pgadmin4 repository from git, and I'll start\n>> exploring the code shortly.\n>>\n>\n> The choice of which project to work on is entirely down to your own\n> personal interests. You can even propose something else if you like, though\n> the listed projects are ones that are likely to be accepted as they're\n> known to be valuable.\n>\n> The first project (Query Tool Graphing) has a simple use case of allowing\n> any user to quickly render a graph of their data. More specific use cases\n> can be discussed as part of the project, but quite simply the idea is to\n> allow users a quick and easy way to visualise their data. It would probably\n> help to install PostGIS in a database, and then load the test data we used\n> to play with it and see how that works (see\n> https://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com\n> - specifically, the Google Drive link). The GIS viewer is similar to what I\n> have in mind for this feature, except that instead of drawing maps, we'd\n> draw different types of graphs.\n>\n> The second project is about supporting bytea in the Query Tool. Right now\n> bytea data isn't rendered when you run a query - we show a placeholder\n> instead. The use case here is for users that store media files in bytea\n> columns; we want to be able to automatically detect different file types\n> and allow them to be viewed (or listened to) in the tool. 
When running in\n> Edit more, the user should be able to add or replace data in a row by\n> uploading from the browser. I don't have any sample data for this.\n>\n> The final project listed is a long-term design goal of pgAdmin 4 (and\n> probably the hardest project). In pgAdmin 3 we had separate Query Tool and\n> View/Edit data tools. In pgAdmin 4, we made them into the same tool, but\n> running in two separate modes. The use case here is to prevent the need for\n> the user to choose what mode to open the tool in (Query Tool vs. View/Edit\n> Data), and to automatically detect whether any query would produce an\n> updateable resultset. This would allow the tool to offer all features at\n> all times, and simple enable/disable in-place editing of the query results\n> if there's no way to automatically generate an update/insert/delete\n> statement. This one is potentially hard as it will likely require some\n> amount of parsing of the query string to make that determination. You can\n> simply play with any test data to get a feel for this one.\n>\n> Hope this helps.\n>\n> Regards, Dave.\n>\n> --\n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EnterpriseDB UK: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks Dave! I looked at your project descriptions and decided to create an application for Query Tool Graphing. Here's my partially complete proposal:https://docs.google.com/document/d/1zZhpmZQZBuZNsJA1UeKHrXKFJQCfmKKvTsnNbdmtT_0/edit?usp=sharingEveryone's welcome to give me feedback/suggestions! Thank you!HowardOn Mon, Mar 4, 2019 at 3:15 AM Dave Page <dpage@pgadmin.org> wrote:Hi[Moving pgsql-hackers to BCC to prevent further cross-posting]On Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:Dear PostgreSQL community:I am a MSc student in computer science working on data management research, and I will likely graduate this summer. 
I also was a participant of GSoC in 2017 working with the NRNB organization on data standards conversion. Cool. I have been a user of pgAdmin 4 briefly, and I am interested in learning more about the project and contribute to it. The projects web page lists 3 potential projects, but I don't know which one I'm suitable for. Are there any suggestions on how to get started on exploring more about the Query Tool in pgAdmin? For example, use cases with some sample data would be nice! I also checked out the pgadmin4 repository from git, and I'll start exploring the code shortly. The choice of which project to work on is entirely down to your own personal interests. You can even propose something else if you like, though the listed projects are ones that are likely to be accepted as they're known to be valuable.The first project (Query Tool Graphing) has a simple use case of allowing any user to quickly render a graph of their data. More specific use cases can be discussed as part of the project, but quite simply the idea is to allow users a quick and easy way to visualise their data. It would probably help to install PostGIS in a database, and then load the test data we used to play with it and see how that works (see https://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com - specifically, the Google Drive link). The GIS viewer is similar to what I have in mind for this feature, except that instead of drawing maps, we'd draw different types of graphs.The second project is about supporting bytea in the Query Tool. Right now bytea data isn't rendered when you run a query - we show a placeholder instead. The use case here is for users that store media files in bytea columns; we want to be able to automatically detect different file types and allow them to be viewed (or listened to) in the tool. When running in Edit more, the user should be able to add or replace data in a row by uploading from the browser. 
I don't have any sample data for this.The final project listed is a long-term design goal of pgAdmin 4 (and probably the hardest project). In pgAdmin 3 we had separate Query Tool and View/Edit data tools. In pgAdmin 4, we made them into the same tool, but running in two separate modes. The use case here is to prevent the need for the user to choose what mode to open the tool in (Query Tool vs. View/Edit Data), and to automatically detect whether any query would produce an updateable resultset. This would allow the tool to offer all features at all times, and simple enable/disable in-place editing of the query results if there's no way to automatically generate an update/insert/delete statement. This one is potentially hard as it will likely require some amount of parsing of the query string to make that determination. You can simply play with any test data to get a feel for this one.Hope this helps.Regards, Dave. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Wed, 20 Mar 2019 08:26:28 -0700", "msg_from": "Haoran Yu <haleyyew@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Interested in GSoC projects on pgAdmin 4" }, { "msg_contents": "Dear PostgreSQL community,\n\nI have submitted a proposal for the project pgAdmin 4 bytea support. I\nappreciate your comments and feedback. Thank you!\n\nhttps://docs.google.com/document/d/1ADkdj1Nnhzpy1HTqgs6c6nVPXvPBmwLaysmaNX9eFc0/edit?usp=sharing\n\nHoward\n\nOn Wed, Mar 20, 2019 at 8:26 AM Haoran Yu <haleyyew@gmail.com> wrote:\n\n> Thanks Dave! I looked at your project descriptions and decided to create\n> an application for Query Tool Graphing. Here's my partially complete\n> proposal:\n>\n> https://docs.google.com/document/d/1zZhpmZQZBuZNsJA1UeKHrXKFJQCfmKKvTsnNbdmtT_0/edit?usp=sharing\n>\n> Everyone's welcome to give me feedback/suggestions! 
Thank you!\n>\n> Howard\n>\n> On Mon, Mar 4, 2019 at 3:15 AM Dave Page <dpage@pgadmin.org> wrote:\n>\n>> Hi\n>>\n>> [Moving pgsql-hackers to BCC to prevent further cross-posting]\n>>\n>> On Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:\n>>\n>>> Dear PostgreSQL community:\n>>>\n>>> I am a MSc student in computer science working on data management\n>>> research, and I will likely graduate this summer. I also was a participant\n>>> of GSoC in 2017 working with the NRNB organization on data standards\n>>> conversion.\n>>>\n>>\n>> Cool.\n>>\n>>\n>>>\n>>> I have been a user of pgAdmin 4 briefly, and I am interested in learning\n>>> more about the project and contribute to it. The projects web page lists 3\n>>> potential projects, but I don't know which one I'm suitable for. Are there\n>>> any suggestions on how to get started on exploring more about the Query\n>>> Tool in pgAdmin? For example, use cases with some sample data would be\n>>> nice! I also checked out the pgadmin4 repository from git, and I'll start\n>>> exploring the code shortly.\n>>>\n>>\n>> The choice of which project to work on is entirely down to your own\n>> personal interests. You can even propose something else if you like, though\n>> the listed projects are ones that are likely to be accepted as they're\n>> known to be valuable.\n>>\n>> The first project (Query Tool Graphing) has a simple use case of allowing\n>> any user to quickly render a graph of their data. More specific use cases\n>> can be discussed as part of the project, but quite simply the idea is to\n>> allow users a quick and easy way to visualise their data. It would probably\n>> help to install PostGIS in a database, and then load the test data we used\n>> to play with it and see how that works (see\n>> https://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com\n>> - specifically, the Google Drive link). 
The GIS viewer is similar to what I\n>> have in mind for this feature, except that instead of drawing maps, we'd\n>> draw different types of graphs.\n>>\n>> The second project is about supporting bytea in the Query Tool. Right now\n>> bytea data isn't rendered when you run a query - we show a placeholder\n>> instead. The use case here is for users that store media files in bytea\n>> columns; we want to be able to automatically detect different file types\n>> and allow them to be viewed (or listened to) in the tool. When running in\n>> Edit more, the user should be able to add or replace data in a row by\n>> uploading from the browser. I don't have any sample data for this.\n>>\n>> The final project listed is a long-term design goal of pgAdmin 4 (and\n>> probably the hardest project). In pgAdmin 3 we had separate Query Tool and\n>> View/Edit data tools. In pgAdmin 4, we made them into the same tool, but\n>> running in two separate modes. The use case here is to prevent the need for\n>> the user to choose what mode to open the tool in (Query Tool vs. View/Edit\n>> Data), and to automatically detect whether any query would produce an\n>> updateable resultset. This would allow the tool to offer all features at\n>> all times, and simple enable/disable in-place editing of the query results\n>> if there's no way to automatically generate an update/insert/delete\n>> statement. This one is potentially hard as it will likely require some\n>> amount of parsing of the query string to make that determination. You can\n>> simply play with any test data to get a feel for this one.\n>>\n>> Hope this helps.\n>>\n>> Regards, Dave.\n>>\n>> --\n>> Dave Page\n>> Blog: http://pgsnake.blogspot.com\n>> Twitter: @pgsnake\n>>\n>> EnterpriseDB UK: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n\nDear PostgreSQL community,I have submitted a proposal for the project pgAdmin 4 bytea support. I appreciate your comments and feedback. 
Thank you!https://docs.google.com/document/d/1ADkdj1Nnhzpy1HTqgs6c6nVPXvPBmwLaysmaNX9eFc0/edit?usp=sharingHowardOn Wed, Mar 20, 2019 at 8:26 AM Haoran Yu <haleyyew@gmail.com> wrote:Thanks Dave! I looked at your project descriptions and decided to create an application for Query Tool Graphing. Here's my partially complete proposal:https://docs.google.com/document/d/1zZhpmZQZBuZNsJA1UeKHrXKFJQCfmKKvTsnNbdmtT_0/edit?usp=sharingEveryone's welcome to give me feedback/suggestions! Thank you!HowardOn Mon, Mar 4, 2019 at 3:15 AM Dave Page <dpage@pgadmin.org> wrote:Hi[Moving pgsql-hackers to BCC to prevent further cross-posting]On Sat, Mar 2, 2019 at 12:36 AM Haoran Yu <haleyyew@gmail.com> wrote:Dear PostgreSQL community:I am a MSc student in computer science working on data management research, and I will likely graduate this summer. I also was a participant of GSoC in 2017 working with the NRNB organization on data standards conversion. Cool. I have been a user of pgAdmin 4 briefly, and I am interested in learning more about the project and contribute to it. The projects web page lists 3 potential projects, but I don't know which one I'm suitable for. Are there any suggestions on how to get started on exploring more about the Query Tool in pgAdmin? For example, use cases with some sample data would be nice! I also checked out the pgadmin4 repository from git, and I'll start exploring the code shortly. The choice of which project to work on is entirely down to your own personal interests. You can even propose something else if you like, though the listed projects are ones that are likely to be accepted as they're known to be valuable.The first project (Query Tool Graphing) has a simple use case of allowing any user to quickly render a graph of their data. More specific use cases can be discussed as part of the project, but quite simply the idea is to allow users a quick and easy way to visualise their data. 
It would probably help to install PostGIS in a database, and then load the test data we used to play with it and see how that works (see https://www.postgresql.org/message-id/CAA7HE_cU7bmQv1kdPB3hiKYGJLaOVVft_XxqcD6ueJpAGfqykQ%40mail.gmail.com - specifically, the Google Drive link). The GIS viewer is similar to what I have in mind for this feature, except that instead of drawing maps, we'd draw different types of graphs.The second project is about supporting bytea in the Query Tool. Right now bytea data isn't rendered when you run a query - we show a placeholder instead. The use case here is for users that store media files in bytea columns; we want to be able to automatically detect different file types and allow them to be viewed (or listened to) in the tool. When running in Edit more, the user should be able to add or replace data in a row by uploading from the browser. I don't have any sample data for this.The final project listed is a long-term design goal of pgAdmin 4 (and probably the hardest project). In pgAdmin 3 we had separate Query Tool and View/Edit data tools. In pgAdmin 4, we made them into the same tool, but running in two separate modes. The use case here is to prevent the need for the user to choose what mode to open the tool in (Query Tool vs. View/Edit Data), and to automatically detect whether any query would produce an updateable resultset. This would allow the tool to offer all features at all times, and simple enable/disable in-place editing of the query results if there's no way to automatically generate an update/insert/delete statement. This one is potentially hard as it will likely require some amount of parsing of the query string to make that determination. You can simply play with any test data to get a feel for this one.Hope this helps.Regards, Dave. 
-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Sat, 30 Mar 2019 16:12:40 -0700", "msg_from": "Haoran Yu <haleyyew@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSoC proposal for pgAdmin 4 bytea support" }, { "msg_contents": "Dear PostgreSQL community,\n\nI have submitted a proposal for the project pgAdmin 4 bytea support. The\nproject discusses storing media content (images, audio, video) as bytea.\nHowever, I have a quick question. What does bytea data look like typically\nwhen storing media content? What I had in mind is, media contents that uses\nMIME type, which are rendered as part of HTML. For example, the following\nis rendered as a red dot:\n\n'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA\nAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO\n9TXL0Y4OHwAAAABJRU5ErkJggg==’\n\nThis string is decoded to bytea, and I stored it in a bytea column.\n\nWhat are some other examples of using bytea to store media content, not\nnecessarily using the MIME type? Is there a way to detect the type of these\nmedia (audio, image) stored in bytea?\n\nAnother question I had is, I read that there are performance-related issues\nfor storing media in bytea. Are there practical ways to store bytea data\nthat does not face performance-related issues? For example, storing large\nmedia content using multiple bytea parts, and reassembling them together\nonce retrieved from the database?\n\nThank you,\nHoward\n\nhttps://docs.google.com/document/d/1ADkdj1Nnhzpy1HTqgs6c6nVPXvPBmwLaysmaNX9eFc0/edit?usp=sharing\n\nDear PostgreSQL community,I have submitted a proposal for the project pgAdmin 4 bytea support. The project discusses storing media content (images, audio, video) as bytea. However, I have a quick question. What does bytea data look like typically when storing media content? 
What I had in mind is, media contents that uses MIME type, which are rendered as part of HTML. For example, the following is rendered as a red dot:'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==’This string is decoded to bytea, and I stored it in a bytea column.What are some other examples of using bytea to store media content, not necessarily using the MIME type? Is there a way to detect the type of these media (audio, image) stored in bytea?Another question I had is, I read that there are performance-related issues for storing media in bytea. Are there practical ways to store bytea data that does not face performance-related issues? For example, storing large media content using multiple bytea parts, and reassembling them together once retrieved from the database?Thank you,Howardhttps://docs.google.com/document/d/1ADkdj1Nnhzpy1HTqgs6c6nVPXvPBmwLaysmaNX9eFc0/edit?usp=sharing", "msg_date": "Sun, 31 Mar 2019 19:12:32 -0700", "msg_from": "Haoran Yu <haleyyew@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC proposal for pgAdmin 4 bytea support" }, { "msg_contents": "Hi\n\nOn Mon, Apr 1, 2019 at 3:12 AM Haoran Yu <haleyyew@gmail.com> wrote:\n\n> Dear PostgreSQL community,\n>\n> I have submitted a proposal for the project pgAdmin 4 bytea support. The\n> project discusses storing media content (images, audio, video) as bytea.\n> However, I have a quick question. What does bytea data look like typically\n> when storing media content? What I had in mind is, media contents that uses\n> MIME type, which are rendered as part of HTML. 
For example, the following\n> is rendered as a red dot:\n>\n> 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA\n> AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO\n> 9TXL0Y4OHwAAAABJRU5ErkJggg==’\n>\n> This string is decoded to bytea, and I stored it in a bytea column.\n>\n> What are some other examples of using bytea to store media content, not\n> necessarily using the MIME type? Is there a way to detect the type of these\n> media (audio, image) stored in bytea?\n>\n\nWhen I have stored small media items in bytea columns in the past, I just\nstored the data. I vaguely recall I did store the mime type in another\ncolumn, but that may not be the case in all scenarios (e.g. when a system\nis designed to store only PNGs). I think you should assume it's raw data\nonly, and try to determine the file type by examining the data;\n\ne.g\n\nPNG files have an 8 byte signature:\nhttp://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html\nMPEG files have identifying information in the frame header that you may be\nable to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html\nJPEG images have identifying markers:\nhttps://en.wikipedia.org/wiki/JPEG_File_Interchange_Format\n\netc.\n\n\n> Another question I had is, I read that there are performance-related\n> issues for storing media in bytea. Are there practical ways to store bytea\n> data that does not face performance-related issues? For example, storing\n> large media content using multiple bytea parts, and reassembling them\n> together once retrieved from the database?\n>\n\nNot that I'm aware of. For larger objects, most people store them\nexternally (which of course loses ACID properties). 
There are certainly\napplications for storing smaller objects directly in the database though -\nand some folks have done work in the past with index types and\noperators/functions for finding and comparing images for example, so there\nare also benefits other than ACID to storing data in this way.\n\nBTW; for pgAdmin related GSoC questions, you'd do better to ask on\npgadmin-hackers@postgresql.org.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHiOn Mon, Apr 1, 2019 at 3:12 AM Haoran Yu <haleyyew@gmail.com> wrote:Dear PostgreSQL community,I have submitted a proposal for the project pgAdmin 4 bytea support. The project discusses storing media content (images, audio, video) as bytea. However, I have a quick question. What does bytea data look like typically when storing media content? What I had in mind is, media contents that uses MIME type, which are rendered as part of HTML. For example, the following is rendered as a red dot:'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==’This string is decoded to bytea, and I stored it in a bytea column.What are some other examples of using bytea to store media content, not necessarily using the MIME type? Is there a way to detect the type of these media (audio, image) stored in bytea?When I have stored small media items in bytea columns in the past, I just stored the data. I vaguely recall I did store the mime type in another column, but that may not be the case in all scenarios (e.g. when a system is designed to store only PNGs). 
I think you should assume it's raw data only, and try to determine the file type by examining the data;e.g PNG files have an 8 byte signature: http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.htmlMPEG files have identifying information in the frame header that you may be able to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.htmlJPEG images have identifying markers: https://en.wikipedia.org/wiki/JPEG_File_Interchange_Formatetc. Another question I had is, I read that there are performance-related issues for storing media in bytea. Are there practical ways to store bytea data that does not face performance-related issues? For example, storing large media content using multiple bytea parts, and reassembling them together once retrieved from the database?Not that I'm aware of. For larger objects, most people store them externally (which of course loses ACID properties). There are certainly applications for storing smaller objects directly in the database though - and some folks have done work in the past with index types and operators/functions for finding and comparing images for example, so there are also benefits other than ACID to storing data in this way.BTW; for pgAdmin related GSoC questions, you'd do better to ask on pgadmin-hackers@postgresql.org. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Apr 2019 10:09:05 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: GSoC proposal for pgAdmin 4 bytea support" }, { "msg_contents": "Thank you Dave, I have modified my proposal according to your feedback on\ndetecting media types.\n\nHaoran\n\nOn Mon, Apr 1, 2019 at 2:09 AM Dave Page <dpage@pgadmin.org> wrote:\n\n> Hi\n>\n> On Mon, Apr 1, 2019 at 3:12 AM Haoran Yu <haleyyew@gmail.com> wrote:\n>\n>> Dear PostgreSQL community,\n>>\n>> I have submitted a proposal for the project pgAdmin 4 bytea support. 
The\n>> project discusses storing media content (images, audio, video) as bytea.\n>> However, I have a quick question. What does bytea data look like typically\n>> when storing media content? What I had in mind is, media contents that uses\n>> MIME type, which are rendered as part of HTML. For example, the following\n>> is rendered as a red dot:\n>>\n>> 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA\n>> AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO\n>> 9TXL0Y4OHwAAAABJRU5ErkJggg==’\n>>\n>> This string is decoded to bytea, and I stored it in a bytea column.\n>>\n>> What are some other examples of using bytea to store media content, not\n>> necessarily using the MIME type? Is there a way to detect the type of these\n>> media (audio, image) stored in bytea?\n>>\n>\n> When I have stored small media items in bytea columns in the past, I just\n> stored the data. I vaguely recall I did store the mime type in another\n> column, but that may not be the case in all scenarios (e.g. when a system\n> is designed to store only PNGs). I think you should assume it's raw data\n> only, and try to determine the file type by examining the data;\n>\n> e.g\n>\n> PNG files have an 8 byte signature:\n> http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.html\n> MPEG files have identifying information in the frame header that you may\n> be able to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html\n> JPEG images have identifying markers:\n> https://en.wikipedia.org/wiki/JPEG_File_Interchange_Format\n>\n> etc.\n>\n>\n>> Another question I had is, I read that there are performance-related\n>> issues for storing media in bytea. Are there practical ways to store bytea\n>> data that does not face performance-related issues? For example, storing\n>> large media content using multiple bytea parts, and reassembling them\n>> together once retrieved from the database?\n>>\n>\n> Not that I'm aware of. 
For larger objects, most people store them\n> externally (which of course loses ACID properties). There are certainly\n> applications for storing smaller objects directly in the database though -\n> and some folks have done work in the past with index types and\n> operators/functions for finding and comparing images for example, so there\n> are also benefits other than ACID to storing data in this way.\n>\n> BTW; for pgAdmin related GSoC questions, you'd do better to ask on\n> pgadmin-hackers@postgresql.org.\n>\n> --\n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EnterpriseDB UK: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThank you Dave, I have modified my proposal according to your feedback on detecting media types.HaoranOn Mon, Apr 1, 2019 at 2:09 AM Dave Page <dpage@pgadmin.org> wrote:HiOn Mon, Apr 1, 2019 at 3:12 AM Haoran Yu <haleyyew@gmail.com> wrote:Dear PostgreSQL community,I have submitted a proposal for the project pgAdmin 4 bytea support. The project discusses storing media content (images, audio, video) as bytea. However, I have a quick question. What does bytea data look like typically when storing media content? What I had in mind is, media contents that uses MIME type, which are rendered as part of HTML. For example, the following is rendered as a red dot:'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg==’This string is decoded to bytea, and I stored it in a bytea column.What are some other examples of using bytea to store media content, not necessarily using the MIME type? Is there a way to detect the type of these media (audio, image) stored in bytea?When I have stored small media items in bytea columns in the past, I just stored the data. I vaguely recall I did store the mime type in another column, but that may not be the case in all scenarios (e.g. when a system is designed to store only PNGs). 
I think you should assume it's raw data only, and try to determine the file type by examining the data;e.g PNG files have an 8 byte signature: http://www.libpng.org/pub/png/spec/1.2/PNG-Structure.htmlMPEG files have identifying information in the frame header that you may be able to use: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.htmlJPEG images have identifying markers: https://en.wikipedia.org/wiki/JPEG_File_Interchange_Formatetc. Another question I had is, I read that there are performance-related issues for storing media in bytea. Are there practical ways to store bytea data that does not face performance-related issues? For example, storing large media content using multiple bytea parts, and reassembling them together once retrieved from the database?Not that I'm aware of. For larger objects, most people store them externally (which of course loses ACID properties). There are certainly applications for storing smaller objects directly in the database though - and some folks have done work in the past with index types and operators/functions for finding and comparing images for example, so there are also benefits other than ACID to storing data in this way.BTW; for pgAdmin related GSoC questions, you'd do better to ask on pgadmin-hackers@postgresql.org. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Apr 2019 11:31:24 -0700", "msg_from": "Haoran Yu <haleyyew@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSoC proposal for pgAdmin 4 bytea support" } ]
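[Editor's note on the thread above] The signature-based detection Dave describes (PNG's 8-byte signature, JPEG's markers, and so on) can be sketched in a few lines of Python, pgAdmin 4's implementation language. This is a hypothetical helper, not pgAdmin code: the `SIGNATURES` table and the `sniff_mime` name are illustrative, and a real tool would need many more entries.

```python
import base64

# Illustrative magic-byte prefixes for a few common media types.
# A production sniffer would need a far larger table (and offset-based
# checks for container formats such as RIFF or MP4).
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),  # 8-byte PNG file signature
    (b"\xff\xd8\xff", "image/jpeg"),      # JPEG SOI marker
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"ID3", "audio/mpeg"),               # MP3 with an ID3v2 tag
]

def sniff_mime(data: bytes) -> str:
    """Guess a MIME type from the leading bytes of a bytea value."""
    for magic, mime in SIGNATURES:
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

# The red-dot data URI quoted in the thread begins with "iVBORw0KGgo",
# which is simply the base64 encoding of the PNG signature:
assert sniff_mime(base64.b64decode("iVBORw0KGgo=")) == "image/png"
```

The same approach extends to audio and video by adding the frame-header patterns Dave links to; anything unrecognised falls back to a generic octet-stream placeholder, which matches pgAdmin's current behaviour of showing a placeholder for bytea.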
[ { "msg_contents": "Hello,\n\nPostgreSQL FLOAT appears to support +/-Infinity and NaN per the IEEE 754\nstandard, with expressions such as CAST('NaN' AS FLOAT) and CAST('Infinity'\nAS FLOAT) and even supports ordering columns of floats that contain NaN.\n\nHowever the query \"SELECT 1.0/0.0;\" produces an exception:\n\nERROR: division by zero\n\n\nQuestion: If Infinity and NaN are supported, then why throw an exception\nhere, instead of returning Infinity? Is it purely for historical reasons,\nor if it could all be done again, would an exception still be preferred?\n\nFor purely integer arithmetic, I can see how an exception would make sense.\nHowever for FLOAT, I would expect/prefer Infinity to be returned.\n\nBest regards,\nMatt\n\nHello,PostgreSQL FLOAT appears to support +/-Infinity and NaN per the IEEE 754 standard, with expressions such as CAST('NaN' AS FLOAT) and CAST('Infinity' AS FLOAT) and even supports ordering columns of floats that contain NaN.However the query \"SELECT 1.0/0.0;\" produces an exception:ERROR:  division by zeroQuestion: If Infinity and NaN are supported, then why throw an exception here, instead of returning Infinity? Is it purely for historical reasons, or if it could all be done again, would an exception still be preferred?For purely integer arithmetic, I can see how an exception would make sense. 
However for FLOAT, I would expect/prefer Infinity to be returned.Best regards,Matt", "msg_date": "Fri, 1 Mar 2019 12:46:55 -0500", "msg_from": "Matt Pulver <mpulver@unitytechgroup.com>", "msg_from_op": true, "msg_subject": "Infinity vs Error for division by zero" }, { "msg_contents": ">>>>> \"Matt\" == Matt Pulver <mpulver@unitytechgroup.com> writes:\n\n Matt> ERROR: division by zero\n\n Matt> Question: If Infinity and NaN are supported, then why throw an\n Matt> exception here, instead of returning Infinity?\n\nSpec says so:\n\n 4) The dyadic arithmetic operators <plus sign>, <minus sign>,\n <asterisk>, and <solidus> (+, -, *, and /, respectively) specify\n addition, subtraction, multiplication, and division, respectively.\n If the value of a divisor is zero, then an exception condition is\n raised: data exception -- division by zero.\n\n-- \nAndrew (irc:RhodiumToad)\n\n", "msg_date": "Fri, 01 Mar 2019 17:59:42 +0000", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On Friday, March 1, 2019, Matt Pulver <mpulver@unitytechgroup.com> wrote:\n\n> However the query \"SELECT 1.0/0.0;\" produces an exception:\n>\n> ERROR: division by zero\n>\n>\n> Question: If Infinity and NaN are supported, then why throw an exception\n> here, instead of returning Infinity? Is it purely for historical reasons,\n> or if it could all be done again, would an exception still be preferred?\n>\n> For purely integer arithmetic, I can see how an exception would make\n> sense. However for FLOAT, I would expect/prefer Infinity to be returned.\n>\n\n1/0 is an illegal operation. We could return NaN for it but the choice of\nthrowing an error is just as correct. 
Returning infinity is strictly\nincorrect.\n\nChanging the behavior is not going to happen for any existing data types.\n\nDavid J.\n\nOn Friday, March 1, 2019, Matt Pulver <mpulver@unitytechgroup.com> wrote:However the query \"SELECT 1.0/0.0;\" produces an exception:ERROR:  division by zeroQuestion: If Infinity and NaN are supported, then why throw an exception here, instead of returning Infinity? Is it purely for historical reasons, or if it could all be done again, would an exception still be preferred?For purely integer arithmetic, I can see how an exception would make sense. However for FLOAT, I would expect/prefer Infinity to be returned.1/0 is an illegal operation.  We could return NaN for it but the choice of throwing an error is just as correct.  Returning infinity is strictly incorrect.Changing the behavior is not going to happen for any existing data types.David J.", "msg_date": "Fri, 1 Mar 2019 11:04:04 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "Hi,\n\nOn 2019-03-01 12:46:55 -0500, Matt Pulver wrote:\n> PostgreSQL FLOAT appears to support +/-Infinity and NaN per the IEEE 754\n> standard, with expressions such as CAST('NaN' AS FLOAT) and CAST('Infinity'\n> AS FLOAT) and even supports ordering columns of floats that contain NaN.\n> \n> However the query \"SELECT 1.0/0.0;\" produces an exception:\n> \n> ERROR: division by zero\n> \n> \n> Question: If Infinity and NaN are supported, then why throw an exception\n> here, instead of returning Infinity? Is it purely for historical reasons,\n> or if it could all be done again, would an exception still be preferred?\n> \n> For purely integer arithmetic, I can see how an exception would make sense.\n> However for FLOAT, I would expect/prefer Infinity to be returned.\n\nIt'd be good for performance reasons to not have to check for that, and\nfor under/overflow. 
But the historical behaviour has quite some weight,\nand there's some language in the standard that can legitimate be\ninterpreted that both conditions need to be signalled, if I recall\ncorrectly.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 1 Mar 2019 10:04:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On 3/1/19 1:04 PM, David G. Johnston wrote:\n\n> 1/0 is an illegal operation. We could return NaN for it but the choice of\n> throwing an error is just as correct. Returning infinity is strictly\n> incorrect.\n\nThat differs from my understanding of how the operations are specified\nin IEEE 754 (as summarized in, e.g., [1]).\n\nAndrew posted the relevant part of the SQL spec that requires the\noperation to raise 22012.\n\nThat's a requirement specific to SQL (which is, of course, what matters\nhere.)\n\nBut if someone wanted to write a user-defined division function or\noperator that would return Inf for (anything > 0) / 0 and for\n(anything < 0) / -0, and -Inf for (anything < 0) / 0 and for\n(anything > 0) / -0, and NaN for (either zero) / (either zero), I think\nthat function or operator would be fully in keeping with IEEE 754.\n\n-Chap\n\n\n[1] https://steve.hollasch.net/cgindex/coding/ieeefloat.html#operations\n\n", "msg_date": "Fri, 1 Mar 2019 13:23:54 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On 2019-03-01 11:04:04 -0700, David G. Johnston wrote:\n> Changing the behavior is not going to happen for any existing data types.\n\nFor the overflow case that really sucks, because we're leaving a very\nsignificant amount of performance on the table because we recheck for\noverflow in every op. The actual float operation is basically free, but\nthe overflow check and the calling convention is not.
JIT can get rid of the\nlatter, but not the former. Which is why we spend like 30% in one of the\nTPCH queries doing overflow checks...\n\nI still kinda wonder whether we can make trapping operations work, but\nit's not trivial.\n\n", "msg_date": "Fri, 1 Mar 2019 10:26:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On Friday, March 1, 2019, Chapman Flack <chap@anastigmatix.net> wrote:\n\n>\n> But if someone wanted to write a user-defined division function or\n> operator that would return Inf for (anything > 0) / 0 and for\n> (anything < 0) / -0, and -Inf for (anything < 0) / 0 and for\n> (anything > 0) / -0, and NaN for (either zero) / (either zero), I think\n> that function or operator would be fully in keeping with IEEE 754.\n>\n\nUpon further reading you are correct - IEEE 754 has chosen to treat n/0\ndifferently for n=0 and n<>0 cases. I'm sure they have their reasons but\nwithin the scope of this database, and the core arithmetic functions it\nprovides, those distinctions don't seeming meaningful and having to add\nquery logic to deal with both cases would just be annoying. I don't use,\nor have time for the distraction, to understand why such a decision was\nmade and how it could be useful.
Going from an exception to NaN makes\nsense to me, going instead to infinity - outside of limit expressions which\naren't applicable here - does not.\n\nFor my part in the queries I have that encounter divide-by-zero I end up\ntransforming the result to zero which is considerably easier to\npresent/absorb along side other valid fractions in a table or chart.\n\nDavid J.\n\n", "msg_date": "Fri, 1 Mar 2019 12:26:13 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On 3/1/19 2:26 PM, David G. Johnston wrote:\n\n> Upon further reading you are correct - IEEE 754 has chosen to treat n/0\n> differently for n=0 and n<>0 cases.
I'm sure they have their reasons but\n> ... I don't use,\n> or have time for the distraction, to understand why such a decision was\n> made and how it could be useful.\n\nThe answer may be as simple as the inherent difference between\nthe cases.\n\n0/0 is funny because of a uniqueness problem. Try to name q such that\n0/0 = q, rewritten as q × 0 = 0, and the problem you run into is that\nthat's true for any value of q. So you would have to make some\ncompletely arbitrary decision to name any value at all as \"the\" result.\n\n(anything nonzero)/0 is funny because of a representability problem.\nn/0 = q, rewritten as q × 0 = n, only has the problem that it's\nuntrue for every finite value q; they're never big enough. Calling the\nresult infinity is again a definitional decision, but this time it\nis not an arbitrary one among multiple equally good choices; all\nfinite choices are ruled out, and the definitional choice is fully\nconsistent with what you see happening as a divisor *approaches* zero.\n\n> For my part in the queries I have that encounter divide-by-zero I end up\n> transforming the result to zero which is considerably easier to\n> present\n\nEasy to present it may be, but it lacks the mathematical\nmotivation behind the choice IEEE made ... as a value for q, zero\nfails the q × 0 = n test fairly convincingly for nonzero n.
:)\n\nRegards,\n-Chap\n\n", "msg_date": "Fri, 1 Mar 2019 15:01:36 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On Fri, Mar 1, 2019 at 12:59 PM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n\n> >>>>> \"Matt\" == Matt Pulver <mpulver@unitytechgroup.com> writes:\n>\n> Matt> ERROR: division by zero\n>\n> Matt> Question: If Infinity and NaN are supported, then why throw an\n> Matt> exception here, instead of returning Infinity?\n>\n> Spec says so:\n>\n> 4) The dyadic arithmetic operators <plus sign>, <minus sign>,\n> <asterisk>, and <solidus> (+, -, *, and /, respectively) specify\n> addition, subtraction, multiplication, and division, respectively.\n> If the value of a divisor is zero, then an exception condition is\n> raised: data exception -- division by zero.\n\n\nThank you, that is what I was looking for. In case anyone else is looking\nfor source documentation on the standard, there is a link from\nhttps://en.wikipedia.org/wiki/SQL:2003#Documentation_availability to a zip\nfile of the SQL 2003 draft http://www.wiscorp.com/sql_2003_standard.zip\nwhere one can confirm this (page 242 of 5WD-02-Foundation-2003-09.pdf).\n\n\nOn Fri, Mar 1, 2019 at 2:26 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Friday, March 1, 2019, Chapman Flack <chap@anastigmatix.net> wrote:\n>\n>>\n>> But if someone wanted to write a user-defined division function or\n>> operator that would return Inf for (anything > 0) / 0 and for\n>> (anything < 0) / -0, and -Inf for (anything < 0) / 0 and for\n>> (anything > 0) / -0, and NaN for (either zero) / (either zero), I think\n>> that function or operator would be fully in keeping with IEEE 754.\n>>\n>\n> Upon further reading you are correct - IEEE 754 has chosen to treat n/0\n> differently for n=0 and n<>0 cases.
I'm sure they have their reasons but\n> within the scope of this database, and the core arithmetic functions it\n> provides, those distinctions don't seeming meaningful and having to add\n> query logic to deal with both cases would just be annoying. I don't use,\n> or have time for the distraction, to understand why such a decision was\n> made and how it could be useful. Going from an exception to NaN makes\n> sense to me, going instead to infinity - outside of limit expressions which\n> aren't applicable here - does not.\n>\n> For my part in the queries I have that encounter divide-by-zero I end up\n> transforming the result to zero which is considerably easier to\n> present/absorb along side other valid fractions in a table or chart.\n>\n\nIn heavy financial/scientific calculations with tables of data, using inf\nand nan are very useful, much more so than alternatives such as throwing an\nexception (which row(s) included the error?), or replacing them with NULL\nor 0. There are many intermediate values where using inf makes sense and\nresults in finite outcomes at the appropriate limit: atan(1.0/0)=pi/2,\nerf(1.0/0)=1, exp(-1.0/0)=0, etc.\n\nIn contrast, nan represents a mathematically indeterminate form, in which\nthe appropriate limit could not be ascertained. E.g. 0.0/0, inf-inf,\n0.0*inf, etc. In many applications, I would much rather see calculations\ncarried out via IEEE 754 all the way to the end, with nans and infs, which\nprovides much more useful diagnostic information than an exception that\ndoesn't return any rows at all. As Andres Freund pointed out, it is also\nmore expensive to do the intermediate checks. Just let IEEE 754 do its\nthing!
(More directed at the SQL standard than to PostgreSQL.)\n\nBest regards,\nMatt\n\n", "msg_date": "Fri, 1 Mar 2019 15:49:00 -0500", "msg_from": "Matt Pulver <mpulver@unitytechgroup.com>", "msg_from_op": true, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On 3/1/19 3:49 PM, Matt Pulver wrote:\n\n> In many applications, I would much rather see calculations carried out\n> via IEEE 754 all the way to the end, with nans and infs, which\n> provides much more useful diagnostic information than an exception that\n> doesn't return any rows at all.
As Andres Freund pointed out, it is also\n> more expensive to do the intermediate checks. Just let IEEE 754 do its\n> thing! (More directed at the SQL standard than to PostgreSQL.)\n\nI wanted to try this out a little before assuming it would work,\nand there seems to be no trouble creating a trivial domain over\nfloat8 (say, CREATE DOMAIN ieeedouble AS float8), and then creating\noperators whose operand types are the domain type.\n\nSo it seems an extension could easily do that, and supply happily\ninf-returning and NaN-returning versions of the operators and\nfunctions, and those will be used whenever operands have the domain\ntype.\n\nIt might even be useful and relatively elegant, while leaving the\nSQL-specified base types to have the SQL-specified behavior.\n\n-Chap\n\n", "msg_date": "Fri, 1 Mar 2019 16:51:06 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "On Fri, Mar 1, 2019 at 4:51 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 3/1/19 3:49 PM, Matt Pulver wrote:\n>\n> > In many applications, I would much rather see calculations carried out\n> > via IEEE 754 all the way to the end, with nans and infs, which\n> > provides much more useful diagnostic information than an exception that\n> > doesn't return any rows at all. As Andres Freund pointed out, it is also\n> > more expensive to do the intermediate checks. Just let IEEE 754 do its\n> > thing!
(More directed at the SQL standard than to PostgreSQL.)\n>\n> I wanted to try this out a little before assuming it would work,\n> and there seems to be no trouble creating a trivial domain over\n> float8 (say, CREATE DOMAIN ieeedouble AS float8), and then creating\n> operators whose operand types are the domain type.\n>\n> So it seems an extension could easily do that, and supply happily\n> inf-returning and NaN-returning versions of the operators and\n> functions, and those will be used whenever operands have the domain\n> type.\n>\n> It might even be useful and relatively elegant, while leaving the\n> SQL-specified base types to have the SQL-specified behavior.\n>\n\nThat would be very useful. I've been wanting this for years, and I'm sure\nthe data users I work with will appreciate it (but don't directly\nunderstand this to be the solution).\n\nThere are issues relating to ordering and aggregation that perhaps are\nalready transparent to you, but I'll mention anyway for the record.\nConceptually, there would be different contexts of ordering:\n\n   1. When writing mathematical functions, <, =, and > are all false when\n   comparing to NaN (NaN != NaN is true.)\n   2. In SQL when sorting or aggregating, NaN=NaN. Consider that there are\n   2^53-2 different double precision representations of NaN at the bit level.\n   Under the same floating point ordering logic used for finite numbers, when\n   applied to inf and nan, we get the following ordering: -nan < -inf < (all\n   finite numbers) < inf < nan. When the bit patterns are taken into\n   consideration, an efficient sort algorithm can be implemented.
(Forgive me\n   for stating the obvious, but just mentioning this for whoever is going to\n   take this on.)\n\nI would be most interested to hear of and discuss any other unforeseen\ncomplications or side-effects.\n\nBest regards,\nMatt\n\n", "msg_date": "Fri, 1 Mar 2019 17:19:45 -0500", "msg_from": "Matt Pulver <mpulver@unitytechgroup.com>", "msg_from_op": true, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I wanted to try this out a little before assuming it would work,\n> and there seems to be no trouble creating a trivial domain over\n> float8 (say, CREATE DOMAIN ieeedouble AS float8), and then creating\n> operators whose operand types are the domain type.\n\nWhile you can do that to some extent, you don't have a lot of control\nover when the parser will use your operator --- basically, it'll only\ndo so if both inputs are exact type matches. Maybe that'd be enough\nbut I think it'd be fragile to use.
(See the \"Type Conversion\"\n> chapter in the manual for the gory details, and note that domains\n> get smashed to their base types mighty readily.)\n> \n> Using custom operator names would work better/more reliably.\n\nOr a new base type (LIKE float8) rather than a domain?\n\nRegards,\n-Chap\n\n", "msg_date": "Fri, 1 Mar 2019 18:38:39 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/01/19 17:34, Tom Lane wrote:\n>> Using custom operator names would work better/more reliably.\n\n> Or a new base type (LIKE float8) rather than a domain?\n\nYeah, it'd be more work but you would have control over the\ncoercion rules.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Mar 2019 18:46:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Infinity vs Error for division by zero" } ]
[ { "msg_contents": "\n\nOn 2/28/19 10:13 AM, Christoph Berg wrote:\n> Re: Magnus Hagander 2016-04-13 <CABUevEzq8_nSq7fwe0-fbOAK8S2YNN-PkfsamfEvy2-d3dRUoA@mail.gmail.com>\n>>>>>> It's fairly common to see a lot of \"Incomplete startup packet\" in the\n>>>>>> logfiles caused by monitoring or healthcheck connections.\n>>>>> I've also seen it caused by port scanning.\n>>>> Yes, definitely. Question there might be if that's actually a case when\n>>> we\n>>>> *want* that logging?\n>>> I should think someone might. But I doubt we want to introduce another\n>>> GUC for this. Would it be okay to downgrade the message to DEBUG1 if\n>>> zero bytes were received?\n>>>\n>>>\n>> Yeah, that was my suggestion - I think that's a reasonable compromise. And\n>> yes, I agree that a separate GUC for it would be a huge overkill.\n> There have been numerous complaints about that log message, and the\n> usual reply is always something like what Pavel said recently:\n>\n> \"It is garbage. Usually it means nothing, but better to work live\n> without this garbage.\" [1]\n>\n> [1] https://www.postgresql.org/message-id/CAFj8pRDtwsxj63%3DLaWSwA8u7NrU9k9%2BdJtz2gB_0f4SxCM1sQA%40mail.gmail.com\n>\n> Let's get rid of it.\n\n\n\nRight. This has annoyed me and a great many other people for years. I\nthink Robert Haas' argument 3 years ago (!) was on point, and disposes\nof suggestions to keep it:\n\n\n3. 
The right way to detect attacks is through OS-level monitoring or\nfirewall-level monitoring, and nothing we do in PG is going to come\nclose to the same value.\n\n\nSo I propose shortly to commit this patch unconditionally demoting the\nmessage to DEBUG1.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Mar 2019 18:19:33 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> So I propose shortly to commit this patch unconditionally demoting the\n> message to DEBUG1.\n\nNo patch referenced, but I assume you mean only for the\nzero-bytes-received case, right? No objection if so.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Mar 2019 18:49:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "\nOn 3/1/19 6:49 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> So I propose shortly to commit this patch unconditionally demoting the\n>> message to DEBUG1.\n> No patch referenced, but I assume you mean only for the\n> zero-bytes-received case, right? 
No objection if so.\n>\n> \t\t\t\n\n\nPatch proposed by Christoph Berg is here:\n\n\nhttps://www.postgresql.org/message-id/20190228151336.GB7550%40msg.df7cb.de\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Mar 2019 20:55:11 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/1/19 6:49 PM, Tom Lane wrote:\n>> No patch referenced, but I assume you mean only for the\n>> zero-bytes-received case, right? No objection if so.\n\n> Patch proposed by Christoph Berg is here:\n> https://www.postgresql.org/message-id/20190228151336.GB7550%40msg.df7cb.de\n\nMeh. That doesn't silence only the zero-bytes case, and I'm also\nrather afraid of the fact that it's changing COMMERROR to something\nelse. I wonder whether (if client_min_messages <= DEBUG1) it could\nresult in trying to send the error message to the already-lost\nconnection. It might be that that can't happen, but I think a fair\namount of rather subtle (and breakable) analysis may be needed.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Mar 2019 22:25:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "I wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Patch proposed by Christoph Berg is here:\n>> https://www.postgresql.org/message-id/20190228151336.GB7550%40msg.df7cb.de\n\n> Meh. That doesn't silence only the zero-bytes case, and I'm also\n> rather afraid of the fact that it's changing COMMERROR to something\n> else. I wonder whether (if client_min_messages <= DEBUG1) it could\n> result in trying to send the error message to the already-lost\n> connection. 
It might be that that can't happen, but I think a fair\n> amount of rather subtle (and breakable) analysis may be needed.\n\nConcretely, what about doing the following instead? This doesn't provide\nany mechanism for the DBA to adjust the logging behavior; but reducing\nlog_min_messages to DEBUG1 would not be a very pleasant way to monitor for\nzero-data connections either, so I'm not that fussed about just dropping\nthe message period for that case. I kind of like that we no longer need\nthe weird special case for SSLdone.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 03 Mar 2019 15:52:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "\nOn 3/3/19 3:52 PM, Tom Lane wrote:\n> I wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> Patch proposed by Christoph Berg is here:\n>>> https://www.postgresql.org/message-id/20190228151336.GB7550%40msg.df7cb.de\n>> Meh. That doesn't silence only the zero-bytes case, and I'm also\n>> rather afraid of the fact that it's changing COMMERROR to something\n>> else. I wonder whether (if client_min_messages <= DEBUG1) it could\n>> result in trying to send the error message to the already-lost\n>> connection. It might be that that can't happen, but I think a fair\n>> amount of rather subtle (and breakable) analysis may be needed.\n> Concretely, what about doing the following instead? This doesn't provide\n> any mechanism for the DBA to adjust the logging behavior; but reducing\n> log_min_messages to DEBUG1 would not be a very pleasant way to monitor for\n> zero-data connections either, so I'm not that fussed about just dropping\n> the message period for that case. 
I kind of like that we no longer need\n> the weird special case for SSLdone.\n>\n> \t\t\t\n\n\n\nLooks good to me.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Mar 2019 07:40:45 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "Re: Andrew Dunstan 2019-03-04 <7cc6d2c1-bd87-9890-259d-36739c247b6c@2ndQuadrant.com>\n> Looks good to me.\n\n+1.\n\nChristoph\n\n", "msg_date": "Mon, 4 Mar 2019 13:42:00 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "\nOn 3/4/19 7:42 AM, Christoph Berg wrote:\n> Re: Andrew Dunstan 2019-03-04 <7cc6d2c1-bd87-9890-259d-36739c247b6c@2ndQuadrant.com>\n>> Looks good to me.\n> +1.\n>\n\n\nOK, I think we have agreement on Tom's patch. Do we want to backpatch\nit? It's a change in behaviour, but I find it hard to believe anyone\nrelies on the existence of these annoying messages, so my vote would be\nto backpatch it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Mar 2019 17:35:40 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "On Tue, Mar 5, 2019 at 5:35 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> OK, I think we have agreement on Tom's patch. Do we want to backpatch\n> it? It's a change in behaviour, but I find it hard to believe anyone\n> relies on the existence of these annoying messages, so my vote would be\n> to backpatch it.\n\nI don't think it's a bug fix, so I don't think it should be\nback-patched. 
I think trying to guess which behavior changes are\nlikely to bother users is an unwise strategy -- it's very hard to know\nwhat will actually bother people, and it's very easy to let one's own\ndesire to get a fix out the door lead to an unduly rosy view of the\nsituation. Plus, all patches carry some risk, because all developers\nmake mistakes; the fewer things we back-patch, the fewer regressions\nwe'll introduce.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 6 Mar 2019 12:12:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" }, { "msg_contents": "\nOn 3/6/19 12:12 PM, Robert Haas wrote:\n> On Tue, Mar 5, 2019 at 5:35 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> OK, I think we have agreement on Tom's patch. Do we want to backpatch\n>> it? It's a change in behaviour, but I find it hard to believe anyone\n>> relies on the existence of these annoying messages, so my vote would be\n>> to backpatch it.\n> I don't think it's a bug fix, so I don't think it should be\n> back-patched. I think trying to guess which behavior changes are\n> likely to bother users is an unwise strategy -- it's very hard to know\n> what will actually bother people, and it's very easy to let one's own\n> desire to get a fix out the door lead to an unduly rosy view of the\n> situation. 
Plus, all patches carry some risk, because all developers\n> make mistakes; the fewer things we back-patch, the fewer regressions\n> we'll introduce.\n>\n\nOK, no back-patching it is.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 Mar 2019 14:56:10 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" },
{ "msg_contents": "On Thu, Mar 7, 2019 at 1:26 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 3/6/19 12:12 PM, Robert Haas wrote:\n> > On Tue, Mar 5, 2019 at 5:35 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> OK, I think we have agreement on Tom's patch. Do we want to backpatch\n> OK, no back-patching it is.\n>\nHowever, Checking whether the port is open is resulting in error log like:\n2019-11-25 14:03:44.414 IST [14475] LOG: invalid length of startup packet\nYes, This is different from \"Incomplete startup packet\" discussed here.\n\nSteps to reproduce:\n$ telnet localhost 5432\n\n\n>\n>", "msg_date": "Mon, 25 Nov 2019 14:25:38 +0530", "msg_from": "Jobin Augustine <jobinau@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" },
{ "msg_contents": "Jobin Augustine <jobinau@gmail.com> writes:\n> However, Checking whether the port is open is resulting in error log like:\n> 2019-11-25 14:03:44.414 IST [14475] LOG: invalid length of startup packet\n> Yes, This is different from \"Incomplete startup packet\" discussed here.\n\n> Steps to reproduce:\n> $ telnet localhost 5432\n>> \n>> \n\nWell, the agreed-to behavior change was to not log anything if the\nconnection is closed without any data having been sent. If the\nclient *does* send something, and it doesn't look like a valid\nconnection request, I think we absolutely should log that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Nov 2019 10:02:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" },
{ "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Jobin Augustine <jobinau@gmail.com> writes:\n> > However, Checking whether the port is open is resulting in error log like:\n> > 2019-11-25 14:03:44.414 IST [14475] LOG: invalid length of startup packet\n> > Yes, This is different from \"Incomplete startup packet\" discussed here.\n> \n> > Steps to reproduce:\n> > $ telnet localhost 5432\n> \n> Well, the agreed-to behavior change was to not log anything if the\n> connection is closed without any data having been sent. 
If the\n> client *does* send something, and it doesn't look like a valid\n> connection request, I think we absolutely should log that.\n\nAgreed.\n\nThanks,\n\nStephen", "msg_date": "Tue, 3 Dec 2019 23:08:12 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Incomplete startup packet errors" } ]
[ { "msg_contents": "_bt_lock_branch_parent() is used by VACUUM during page deletion, and\ncalls _bt_getstackbuf(), which always finishes incomplete page splits\nfor the parent page that it exclusive locks and returns. ISTM that\nthis may be problematic, since it contradicts the general rule that\nVACUUM isn't supposed to finish incomplete page splits. According to\nthe nbtree README:\n\n\"It would seem natural to add the missing downlinks in VACUUM, but since\ninserting a downlink might require splitting a page, it might fail if you\nrun out of disk space. That would be bad during VACUUM - the reason for\nrunning VACUUM in the first place might be that you run out of disk space,\nand now VACUUM won't finish because you're out of disk space. In contrast,\nan insertion can require enlarging the physical file anyway.\"\n\nI'm inclined to note this as an exception in the nbtree README, and\nleave it at that. Interrupted internal page splits are probably very\nrare in practice, so the operational risk of running out of disk space\nlike this is minimal.\n\nFWIW, I notice that the logic that appears after the\n_bt_lock_branch_parent() call to _bt_getstackbuf() anticipates that it\nmust defend against interrupted splits in at least the\ngrandparent-of-leaf page, and maybe even the parent, so it's probably\nnot unworkable to not finish the split:\n\n /*\n * If the target is the rightmost child of its parent, then we can't\n * delete, unless it's also the only child.\n */\n if (poffset >= maxoff)\n {\n /* It's rightmost child... 
*/\n if (poffset == P_FIRSTDATAKEY(opaque))\n {\n /*\n * It's only child, so safe if parent would itself be removable.\n * We have to check the parent itself, and then recurse to test\n * the conditions at the parent's parent.\n */\n if (P_RIGHTMOST(opaque) || P_ISROOT(opaque) ||\n P_INCOMPLETE_SPLIT(opaque))\n {\n _bt_relbuf(rel, pbuf);\n return false;\n }\n\nSeparately, I noticed another minor issue that appears a few lines\nfurther down still:\n\n /*\n * Perform the same check on this internal level that\n * _bt_mark_page_halfdead performed on the leaf level.\n */\n if (_bt_is_page_halfdead(rel, *rightsib))\n {\n elog(DEBUG1, \"could not delete page %u because its\nright sibling %u is half-dead\",\n parent, *rightsib);\n return false;\n }\n\nI thought that internal pages were never half-dead after Postgres 9.4.\nIf that happens, then the check within _bt_pagedel() will throw an\nERRCODE_INDEX_CORRUPTED error, and tell the DBA to REINDEX. Shouldn't\nthis internal level _bt_is_page_halfdead() check contain a \"can't\nhappen\" error, or even a simple assertion?\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Fri, 1 Mar 2019 15:59:29 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "VACUUM can finish an interrupted nbtree page split -- is that okay?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> _bt_lock_branch_parent() is used by VACUUM during page deletion, and\n> calls _bt_getstackbuf(), which always finishes incomplete page splits\n> for the parent page that it exclusive locks and returns. ISTM that\n> this may be problematic, since it contradicts the general rule that\n> VACUUM isn't supposed to finish incomplete page splits. According to\n> the nbtree README:\n\n> \"It would seem natural to add the missing downlinks in VACUUM, but since\n> inserting a downlink might require splitting a page, it might fail if you\n> run out of disk space. 
That would be bad during VACUUM - the reason for\n> running VACUUM in the first place might be that you run out of disk space,\n> and now VACUUM won't finish because you're out of disk space. In contrast,\n> an insertion can require enlarging the physical file anyway.\"\n\n> I'm inclined to note this as an exception in the nbtree README, and\n> leave it at that. Interrupted internal page splits are probably very\n> rare in practice, so the operational risk of running out of disk space\n> like this is minimal.\n\nAlso, if your WAL is on the same filesystem as the data, the whole\nthing is pretty much moot anyway since VACUUM is surely going to add\nWAL output. I concur with not sweating over this.\n\n> FWIW, I notice that the logic that appears after the\n> _bt_lock_branch_parent() call to _bt_getstackbuf() anticipates that it\n> must defend against interrupted splits in at least the\n> grandparent-of-leaf page, and maybe even the parent, so it's probably\n> not unworkable to not finish the split:\n\n-ETOOMANYNEGATIVES ... can't quite parse your point here?\n\n> I thought that internal pages were never half-dead after Postgres 9.4.\n> If that happens, then the check within _bt_pagedel() will throw an\n> ERRCODE_INDEX_CORRUPTED error, and tell the DBA to REINDEX. Shouldn't\n> this internal level _bt_is_page_halfdead() check contain a \"can't\n> happen\" error, or even a simple assertion?\n\nI think that code is there to deal with the possibility of finding\nan old half-dead page. Don't know that it's safe to remove it yet.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Mar 2019 19:41:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" 
}, { "msg_contents": "On Fri, Mar 1, 2019 at 4:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > FWIW, I notice that the logic that appears after the\n> > _bt_lock_branch_parent() call to _bt_getstackbuf() anticipates that it\n> > must defend against interrupted splits in at least the\n> > grandparent-of-leaf page, and maybe even the parent, so it's probably\n> > not unworkable to not finish the split:\n>\n> -ETOOMANYNEGATIVES ... can't quite parse your point here?\n\nSorry. :-)\n\nMy point was that it actually seems feasible to not do the split,\nmaking the quoted paragraph from nbtree README correct as-is. But,\nsince we're happy to continue to finish the occasional interrupted\ninternal page split within VACUUM anyway, that isn't an important\npoint.\n\n> I think that code is there to deal with the possibility of finding\n> an old half-dead page. Don't know that it's safe to remove it yet.\n\nI don't know that it is either. My first instinct is to assume that\nit's fine to remove the code, since, as I said, we're treating\ninternal pages that are half-dead as being ipso facto corrupt -- we'll\nthrow an error before too long anyway. However, \"internal + half dead\"\nwas a valid state for an nbtree page prior to 9.4, and we make no\ndistinction about that (versioning nbtree indexes to deal with\ncross-version incompatibilities came only in Postgres v11). Trying to\nanalyze whether or not it's truly safe to just not do this seems very\ndifficult, and I don't think that it's performance critical. This is a\nproblem only because it's distracting and confusing.\n\nI favor keeping the test, but having it throw a\nERRCODE_INDEX_CORRUPTED error, just like _bt_pagedel() does already. A\ncomment could point out that the test is historical/defensive, and\nprobably isn't actually necessary. 
What do you think of that idea?\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Fri, 1 Mar 2019 17:00:01 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 5:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I favor keeping the test, but having it throw a\n> ERRCODE_INDEX_CORRUPTED error, just like _bt_pagedel() does already. A\n> comment could point out that the test is historical/defensive, and\n> probably isn't actually necessary. What do you think of that idea?\n\nActually, while 9.4 did indeed start treating \"internal + half dead\"\npages as corrupt, it didn't exactly remove the *concept* of a half\ndead internal page. I think that the cross check (the one referenced\nby comments above the corresponding leaf/_bt_mark_page_halfdead() call\nto _bt_is_page_halfdead()) might have problems in the event of an\ninterrupted *multi-level* page deletion. I wonder, is there a subtle\nbug here that bugfix commit 8da31837803 didn't quite manage to\nprevent? (This commit added both of the _bt_is_page_halfdead()\nchecks.)\n\n(Thinks some more...)\n\nActually, I think that bugfix commit 8da31837803 works despite\npossible \"logically half dead internal pages\", because in the event of\nsuch an internal page the sibling would actually have to be the\n*cousin* of the original parent (half dead/leaf page parent), not the\n\"true sibling\" (otherwise, cousin's multi-level page deletion should\nnever have taken place). I think that we'll end up doing the right\nthing with the downlinks in the grandparent page, despite there being\nan interrupted multi-level deletion in the cousin's subtree. Since\ncousin *atomically* removed its downlink in our shared *grandparent*\n(not its parent) at the same time some leaf page was initially marked\nhalf-dead, everything works out.\n\nPage deletion is painfully complicated. 
Seems wise to keep the\ninternal page test, out of sheer paranoia, while making it an error as\nsuggested earlier. I will definitely want to think about it some more,\nthough.\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Fri, 1 Mar 2019 17:50:22 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On Fri, Mar 1, 2019 at 3:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> /*\n> * Perform the same check on this internal level that\n> * _bt_mark_page_halfdead performed on the leaf level.\n> */\n> if (_bt_is_page_halfdead(rel, *rightsib))\n\n> I thought that internal pages were never half-dead after Postgres 9.4.\n> If that happens, then the check within _bt_pagedel() will throw an\n> ERRCODE_INDEX_CORRUPTED error, and tell the DBA to REINDEX. Shouldn't\n> this internal level _bt_is_page_halfdead() check contain a \"can't\n> happen\" error, or even a simple assertion?\n\nI think that we should get rid of this code on HEAD shortly, because\nit's effectively dead code. You don't have to know anything about\nB-Trees to see why this must be true: VACUUM is specifically checking\nif an internal page is half-dead here, even though it's already\ntreating half-dead internal pages as ipso facto corrupt in higher\nlevel code (it's the first thing we check in _bt_pagedel()). This is\nclearly contradictory. If there is a half-dead internal page, then\nthere is no danger of VACUUM not complaining loudly that you need to\nREINDEX. This has been true since the page deletion overhaul that went\ninto 9.4.\n\nAttached patch removes the internal page check, and adds a comment\nthat explains why it's sufficient to check on the leaf level alone.\nAdmittedly, it's much easier to understand why the\n_bt_is_page_halfdead() internal page check is useless than it is to\nunderstand why replacing it with this comment is helpful. 
My\nobservation that a half-dead leaf page is representative of the\nsubtree whose root the leaf page has stored as its \"top parent\" is\nhardly controversial, though -- that's the whole basis of multi-level\npage deletion. If you *visualize* how multi-level deletion works, and\nconsider its rightmost-in-subtree restriction, then it isn't hard to\nsee why everything works out with just the leaf level\nright-sibling-is-half-dead check:\n\nWe can only have two adjacent \"skinny\" pending-deletion subtrees in\ncases where the removed check might seem to be helpful -- each page is\nboth the leftmost and the rightmost on its level in its subtree. It's\nokay to just check if the leaf is half-dead because it \"owns\" exactly\nthe same range in the keyspace as the internal pages up to and\nincluding its top parent, if any, and because it is marked half-dead\nby the same atomic operation that does initial removal of downlinks in\nan ancestor page.\n\nI'm fine with waiting until we branch-off v12 before pushing the\npatch, even though it seems low risk.\n\n--\nPeter Geoghegan", "msg_date": "Sat, 4 May 2019 15:38:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On 05/05/2019 01:38, Peter Geoghegan wrote:\n> On Fri, Mar 1, 2019 at 3:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> /*\n>> * Perform the same check on this internal level that\n>> * _bt_mark_page_halfdead performed on the leaf level.\n>> */\n>> if (_bt_is_page_halfdead(rel, *rightsib))\n> \n>> I thought that internal pages were never half-dead after Postgres 9.4.\n>> If that happens, then the check within _bt_pagedel() will throw an\n>> ERRCODE_INDEX_CORRUPTED error, and tell the DBA to REINDEX. 
Shouldn't\n>> this internal level _bt_is_page_halfdead() check contain a \"can't\n>> happen\" error, or even a simple assertion?\n> \n> I think that we should get rid of this code on HEAD shortly, because\n> it's effectively dead code. You don't have to know anything about\n> B-Trees to see why this must be true: VACUUM is specifically checking\n> if an internal page is half-dead here, even though it's already\n> treating half-dead internal pages as ipso facto corrupt in higher\n> level code (it's the first thing we check in _bt_pagedel()). This is\n> clearly contradictory. If there is a half-dead internal page, then\n> there is no danger of VACUUM not complaining loudly that you need to\n> REINDEX. This has been true since the page deletion overhaul that went\n> into 9.4.\n\nI don't understand that reasoning. Yes, _bt_pagedel() will complain if \nit finds a half-dead internal page. But how does that mean that \n_bt_lock_branch_parent() can't encounter one?\n\n- Heikki\n\n\n", "msg_date": "Tue, 7 May 2019 10:27:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On Tue, May 7, 2019 at 12:27 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I don't understand that reasoning. Yes, _bt_pagedel() will complain if\n> it finds a half-dead internal page. But how does that mean that\n> _bt_lock_branch_parent() can't encounter one?\n\nI suppose that in theory it could, but only if you allow that any\npossible state could be found -- it doesn't seem any more likely than\nany other random illegal state.\n\nEven when it happens, we'll get a \"failed to re-find parent key\" error\nmessage when we go a bit further. 
Isn't that a logical outcome?\n\nActually, maybe we won't get that error, because we're talking about a\ncorrupt index, and all bets are off -- no reason to think that the\nhalf-dead internal page would be consistent with other pages in any\nway. But even then, you'll go on to report it in the usual way, since\nVACUUM scans nbtree indexes in physical order.\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 May 2019 09:59:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On Tue, May 7, 2019 at 9:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, May 7, 2019 at 12:27 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I don't understand that reasoning. Yes, _bt_pagedel() will complain if\n> > it finds a half-dead internal page. But how does that mean that\n> > _bt_lock_branch_parent() can't encounter one?\n>\n> I suppose that in theory it could, but only if you allow that any\n> possible state could be found -- it doesn't seem any more likely than\n> any other random illegal state.\n\nTo be fair, I suppose that the code made more sense when it first went\nin, because at the time there was a chance that there could be\nleftover half-dead internal pages. But, that was a long time ago now.\n\nI wonder why the code wasn't complaining about corruption loudly, like\nthe top level code, instead of treating half-dead pages as a\nlegitimate reason to not proceed with multi-level page deletion. That\nwould have been overkill, but it would have made much more sense IMV.\n\nI would like to proceed with pushing this patch to HEAD in the next\nfew days, since it's clearly removing code that can't be useful. 
Are\nthere any objections?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 May 2019 20:18:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> To be fair, I suppose that the code made more sense when it first went\n> in, because at the time there was a chance that there could be\n> leftover half-dead internal pages. But, that was a long time ago now.\n\nIs there a good reason to assume there are none left anywhere?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2019 23:29:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" }, { "msg_contents": "On Mon, May 13, 2019 at 8:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > To be fair, I suppose that the code made more sense when it first went\n> > in, because at the time there was a chance that there could be\n> > leftover half-dead internal pages. But, that was a long time ago now.\n>\n> Is there a good reason to assume there are none left anywhere?\n\nThat is not an assumption that the proposed patch rests upon, though\nit is true that there are probably going to be virtually no half-dead\ninternal pages that make there way on to a Postgres 12 installation.\nYou'd have to do a CREATE INDEX on Postgres 9.3, and then not VACUUM\nor REINDEX the index once it was on a 9.4+ installation. 
I suppose\nthat a 9.3 -> 12 upgrade is the most plausible scenario in which you\ncould actually get a half-dead internal page on a Postgres 12\ninstallation.\n\nEven when that happens, the index is already considered corrupt by\nVACUUM, so the same VACUUM process that could in theory be adversely\naffected by removing the half-dead internal page check will complain\nabout the page when it gets to it later on -- the user will be told to\nREINDEX. And even then, we will never actually get to apply the check\nthat I propose to remove, since we're already checking the leaf page\nsibling of the leaf-level target -- the leaf-level test that was added\nby efada2b8e92 was clearly necessary. But it was also sufficient (no\nequivalent internal half-dead right sibling test is needed): a 9.3-era\nhalf-dead internal page cannot have more than one child, which must be\nundergoing deletion as well.\n\nIf somebody doubted my rationale for why we don't need to do anything\nmore on internal page levels in installations where the user didn't\npg_upgrade from a version that's < 9.4, then they'd still have to\nexplain why we haven't heard of any problems in 5 years, and probably\noffer some alternative fix that considers \"logically half-dead\ninternal pages\" (i.e. pages that are or will be the top parent in a\ndeletion chain). Because the code that I propose to remove obviously\ncannot be doing much of anything for indexes built on 9.4+.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 May 2019 21:09:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" 
}, { "msg_contents": "On Mon, May 13, 2019 at 9:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Even when that happens, the index is already considered corrupt by\n> VACUUM, so the same VACUUM process that could in theory be adversely\n> affected by removing the half-dead internal page check will complain\n> about the page when it gets to it later on -- the user will be told to\n> REINDEX. And even then, we will never actually get to apply the check\n> that I propose to remove, since we're already checking the leaf page\n> sibling of the leaf-level target -- the leaf-level test that was added\n> by efada2b8e92 was clearly necessary. But it was also sufficient (no\n> equivalent internal half-dead right sibling test is needed): a 9.3-era\n> half-dead internal page cannot have more than one child, which must be\n> undergoing deletion as well.\n\nActually, now that I look back at how page deletion worked 5+ years\nago, I realize that I have this slightly wrong: the leaf level check\nis not sufficient to figure out if the parent's right sibling is\npending deletion (which is represented explicitly as a half-dead\ninternal page prior to 9.4). All the same, I'm going to push ahead\nwith this patch. Bugfix commit efada2b8e92 was always about a bug in\n9.4 -- it had nothing to do with 9.3. And, in the unlikely event that\nthere is a problem on a pg_upgrade'd 9.3 -> 12 database that happens\nto have half-dead internal pages, we'll still get a useful, correct\nerror from VACUUM one way or another. It might be slightly less\nfriendly as error messages about corruption go, but that seems\nacceptable to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 May 2019 13:05:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" 
}, { "msg_contents": "On Thu, May 16, 2019 at 1:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, now that I look back at how page deletion worked 5+ years\n> ago, I realize that I have this slightly wrong: the leaf level check\n> is not sufficient to figure out if the parent's right sibling is\n> pending deletion (which is represented explicitly as a half-dead\n> internal page prior to 9.4). All the same, I'm going to push ahead\n> with this patch. Bugfix commit efada2b8e92 was always about a bug in\n> 9.4 -- it had nothing to do with 9.3.\n\nI meant bugfix commit 8da31837803 (commit efada2b8e92 was the commit\nthat had the bug in question).\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 May 2019 13:11:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: VACUUM can finish an interrupted nbtree page split -- is that\n okay?" } ]
[ { "msg_contents": "Hello everyone,\n\nI am currently writing my proposal for GSoC 2019 for the TOAST'ing in slices idea<https://wiki.postgresql.org/wiki/GSoC_2019#TOAST.27ing_in_slices_.282019.29>. I already have a sketch of the description and approach outline, which I am sending in this e-mail. I would be happy to receive some feedback on it. I've been reading the PostgreSQL source code and documentation in order to understand better how to implement this idea and to write a detailed proposal. I would also be glad to receive some recommendation of documentation or material on TOAST'ing internals as well as some hint on where to look further in the source code.\n\n\nI am looking forward to your feedback!", "msg_date": "Sat, 2 Mar 2019 19:43:37 +0000", "msg_from": "Bruno Hass <bruno_hass@LIVE.COM>", "msg_from_op": true, "msg_subject": "GSoC 2019 - TOAST'ing in slices idea" } ]
[ { "msg_contents": "Hello,\n\nWe corrected our previously held belief that it's safe to retry\nfsync(). We still haven't dealt with the possibility that some\nkernels can forget about write-back errors due to inode cache\npressure, if there is a time when no process has the file open. To\nprevent that we need to hold dirty files open, somehow, until fsync()\nis called.\n\nThe leading idea so far is a scheme to keep the fd that performed the\noldest write around by passing it through a Unix domain socket to the\ncheckpointer. Among the problems with that are (1) there's an as-yet\nunresolved deadlock problem, but let's call that just a SMoP, (2) it\nrelies on atomic Unix socket messages without any guarantee from the\nOS, (3) it relies on being able to keep fds in the Unix domain socket\n(not counted by PostgreSQL), admittedly with socket buffer size as\nback pressure, (4) it relies on the somewhat obscure and badly\ndocumented fd-passing support to work the way we think it works and\nwithout bugs on every OS, (5) it's too complicated to back-patch. In\nthis thread I'd like to discuss a simpler alternative.\n\nA more obvious approach that probably moves us closer to the way\nkernel developers expect us to write programs is to call fsync()\nbefore close() (due to vfd pressure) if you've written. The obvious\nproblem with that is that you could finish up doing loads more\nfsyncing than we're doing today if you're regularly dirtying more than\nmax_files_per_process in the same backend. 
I wonder if we could make\nit more acceptable like so:\n\n* when performing writes, record the checkpointer's cycle counter in the File\n\n* when closing due to vfd pressure, only call fsync() if the cycle\nhasn't advanced (that is, you get to skip the fsync() if you haven't\nwritten in this sync cycle, because the checkpointer must have taken\ncare of your writes from the previous cycle)\n\n* if you find you're doing this too much (by default, dirtying more\nthan 1k relation segments per checkpoint cycle), maybe log a warning\nthat the user might want to consider increasing max_files_per_process\n(and related OS limits)\n\nA possible improvement, stolen from the fd-passing patch, is to\nintroduce a \"sync cycle\", separate from checkpoint cycle, so that the\ncheckpointer can process its table of pending operations more than\nonce per checkpoint if that would help minimise fsyncing in foreground\nprocesses. I haven't thought much about how exactly that would work.\n\nThere is a less obvious problem with this scheme though:\n\n1. Suppose the operating system has a single error flag for a file no\nmatter how many fds have it open, and it is cleared by the first\nsyscall to return EIO. There is a broken schedule like this (B =\nregular backend, C = checkpointer):\n\nB: fsync() -> EIO # clears error flag\nC: fsync() -> success, log checkpoint\nB: PANIC!\n\n2. On an operating system that has an error counter + seen flag\n(Linux 4.14+), in general you receive the error in all fds, which is a\nvery nice property, but we'd still have a broken schedule involving\nthe seen flag:\n\nB: fsync() -> EIO # clears seen flag\nC: open() # opened after error\nC: fsync() -> success, log checkpoint\nB: PANIC!\n\nHere's one kind of interlocking that might work: Hash pathnames and\nmap to an array of lwlocks + error flags. 
Any process trying to sync\na file must hold the lock and check for a pre-existing error flag.\nNow a checkpoint cannot succeed if any backend has recently decided to\npanic. You could skip that if data_sync_retry = on.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Mon, 4 Mar 2019 12:30:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Fsync-before-close thought experiment" }, { "msg_contents": "On Sun, Mar 3, 2019 at 6:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> A more obvious approach that probably moves us closer to the way\n> kernel developers expect us to write programs is to call fsync()\n> before close() (due to vfd pressure) if you've written.\n\nInteresting....\n\n> The obvious\n> problem with that is that you could finish up doing loads more\n> fsyncing than we're doing today if you're regularly dirtying more than\n> max_files_per_process in the same backend.\n\nCheck.\n\n> I wonder if we could make\n> it more acceptable like so:\n>\n> * when performing writes, record the checkpointer's cycle counter in the File\n>\n> * when closing due to vfd pressure, only call fsync() if the cycle\n> hasn't advanced (that is, you get to skip the fsync() if you haven't\n> written in this sync cycle, because the checkpointer must have taken\n> care of your writes from the previous cycle)\n\nHmm, OK.\n\n> * if you find you're doing this too much (by default, dirtying more\n> than 1k relation segments per checkpoint cycle), maybe log a warning\n> that the user might want to consider increasing max_files_per_process\n> (and related OS limits)\n>\n> A possible improvement, stolen from the fd-passing patch, is to\n> introduce a \"sync cycle\", separate from checkpoint cycle, so that the\n> checkpointer can process its table of pending operations more than\n> once per checkpoint if that would help minimise fsyncing in foreground\n> processes. 
I haven't thought much about how exactly that would work.\n\nYeah, that seems worth considering. I suppose that a backend could\nkeep track of how many times it's recorded the current sync cycle in a\nFile that is still open -- this seems like it should be pretty simple\nand cheap, provided we can find the right places to put the counter\nadjustments. If that number gets too big, like say greater than 80%\nof the number of fds, it sends a ping to the checkpointer. I'm not\nsure if that would then immediately trigger a full sync cycle or if\nthere is something more granular we could do.\n\n> There is a less obvious problem with this scheme though:\n>\n> 1. Suppose the operating system has a single error flag for a file no\n> matter how many fds have it open, and it is cleared by the first\n> syscall to return EIO. There is a broken schedule like this (B =\n> regular backend, C = checkpointer):\n>\n> B: fsync() -> EIO # clears error flag\n> C: fsync() -> success, log checkpoint\n> B: PANIC!\n>\n> 2. On an operating system that has an error counter + seen flag\n> (Linux 4.14+), in general you receive the error in all fds, which is a\n> very nice property, but we'd still have a broken schedule involving\n> the seen flag:\n>\n> B: fsync() -> EIO # clears seen flag\n> C: open() # opened after error\n> C: fsync() -> success, log checkpoint\n> B: PANIC!\n>\n> Here's one kind of interlocking that might work: Hash pathnames and\n> map to an array of lwlocks + error flags. Any process trying to sync\n> a file must hold the lock and check for a pre-existing error flag.\n> Now a checkpoint cannot succeed if any backend has recently decided to\n> panic. You could skip that if data_sync_retry = on.\n\nThat might be fine, but I think it might be possible to create\nsomething more light-weight. Suppose we just decide that foreground\nprocesses always win and the checkpointer has to wait before logging\nthe checkpoint. 
To that end, a foreground process advertises in\nshared memory whether or not it is currently performing an fsync. The\ncheckpointer must observe each process until it sees that process in\nthe not-fsyncing state at least once.\n\nIf a process starts fsync-ing after being observed not-fsyncing, it\njust means that the backend started doing an fsync() after the\ncheckpointer had completed the fsyncs for that checkpoint. And in\nthat case the checkpointer would have observed the EIO for any writes\nprior to the checkpoint, so it's OK to write that checkpoint; it's\nonly the next one that has an issue, and the fact that we're now\nadvertising that we are fsync()-ing again will prevent that one from\ncompleting before we emit any necessary PANIC.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 5 Mar 2019 09:58:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fsync-before-close thought experiment" } ]
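The skip-if-cycle-advanced rule sketched at the top of this thread is compact enough to model directly. Below is a toy Python simulation — not PostgreSQL's fd.c code; the `File`, `record_write`, and `must_fsync_on_close` names are invented for illustration — of the decision a backend would make when closing a file under vfd pressure: fsync only if the checkpointer's sync cycle has not advanced since our last write to that file.

```python
# Toy model (not server code) of the proposed fsync-before-close rule:
# a File remembers the checkpointer cycle of its last write; on close,
# the backend may skip the fsync if the cycle has since advanced,
# because the checkpointer's later cycle covered those writes.

class File:
    def __init__(self, name):
        self.name = name
        self.write_cycle = None   # checkpointer cycle of our last write

def record_write(f, checkpointer_cycle):
    f.write_cycle = checkpointer_cycle

def must_fsync_on_close(f, checkpointer_cycle):
    # Sync only if we wrote during the *current* sync cycle; a clean
    # or previously-synced file can be closed without an fsync call.
    return f.write_cycle is not None and f.write_cycle == checkpointer_cycle

f = File("base/1/16384.1")
record_write(f, checkpointer_cycle=7)
assert must_fsync_on_close(f, checkpointer_cycle=7) is True   # same cycle: sync
assert must_fsync_on_close(f, checkpointer_cycle=8) is False  # cycle advanced: skip
```

Under this rule a backend that dirtied a segment in cycle 7 and closes it during cycle 8 skips the fsync, since the checkpointer must have taken care of the cycle-7 writes — which is precisely why the error-flag interlock discussed above is needed for the scheme to be safe.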
[ { "msg_contents": "Hello Hackers,\n\nI'm intending to optimize some varlena data types as my GSoC proposal. That would be done by a smarter way of splitting the TOAST table chunks, depending on its data type. A JSONB would be split considering its internal tree structure, keeping track of which keys are in each chunk. Arrays and text fields are candidates for optimization as well, as I outlined in my proposal draft (attached). Not being very familiar with the source code, I am writing to this list seeking some guidance in where to look in the source code in order to detail my implementation. Furthermore, I have a couple of questions:\n\n * Would it be a good idea to modify the TOAST table row to keep metadata on the data it stores?\n * This proposal will modify how JSONB, array and text fields are TOASTed. Where can I find the code related to that?\n\nKind regards,\n\nBruno Hass", "msg_date": "Mon, 4 Mar 2019 21:59:37 +0000", "msg_from": "Bruno Hass <bruno_hass@LIVE.COM>", "msg_from_op": true, "msg_subject": "[Proposal] TOAST'ing in slices" } ]
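To make the proposal concrete: today's TOAST mechanism slices an out-of-line value into fixed-size chunks identified by a sequence number, with no awareness of the value's internal structure. Here is a minimal Python sketch of that baseline behavior — illustrative only; `CHUNK_SIZE` is a round number standing in for the server's actual chunk limit, which depends on block size.

```python
# Sketch of type-oblivious TOAST splitting (the baseline the proposal
# wants to improve): a datum is cut into fixed-size chunks keyed by
# sequence number, and fetching a byte range ("slice detoasting")
# touches every chunk that overlaps the range.
# CHUNK_SIZE is illustrative, not PostgreSQL's real limit.

CHUNK_SIZE = 2000

def toast_split(value: bytes):
    return [(seq, value[off:off + CHUNK_SIZE])
            for seq, off in enumerate(range(0, len(value), CHUNK_SIZE))]

def toast_fetch_slice(chunks, offset, length):
    # Only the overlapping chunks are needed -- the property a
    # type-aware splitter could exploit for JSONB keys or array slices.
    first, last = offset // CHUNK_SIZE, (offset + length - 1) // CHUNK_SIZE
    data = b"".join(c for seq, c in chunks if first <= seq <= last)
    return data[offset - first * CHUNK_SIZE:][:length]

value = bytes(range(256)) * 40            # 10240-byte toy datum
chunks = toast_split(value)
assert len(chunks) == 6                    # ceil(10240 / 2000)
assert toast_fetch_slice(chunks, 4100, 300) == value[4100:4400]
```

A type-aware splitter, as proposed, would instead choose chunk boundaries (and keep per-chunk metadata) so that fetching a single JSONB key or array element touches as few chunks as possible, rather than relying purely on byte offsets the way `toast_fetch_slice` does here.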
[ { "msg_contents": "Hi,\n\nDoc patch, against master. Documents encode() and decode() base64\nformat.\n\nBuilds for me.\n\nAttached: doc_base64_v1.patch\n\nReferences RFC2045 section 6.8 to define base64.\n\nBecause encode() and decode() show up in both the string\nfunctions section and the binary string functions section\nI documented in only the string functions section and hyperlinked\n\"base64\" in both sections to the new text.\n\n\nNote that XML output can also generate base64 data. I suspect\nthis is done via the (different, src/common/base64.c)\npg_b64_encode() function which does not limit line length. \nIn any case this patch does not touch the XML documentation.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 4 Mar 2019 16:33:47 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> Doc patch, against master. Documents encode() and decode() base64 \n> format.\n\nIt is already documented. Enhance documentation, though.\n\n> Builds for me.\n\nFor me as well. Looks ok.\n\n> Attached: doc_base64_v1.patch\n>\n> References RFC2045 section 6.8 to define base64.\n\nDid you consider referencing RFC 4648 instead?\n\n-- \nFabien.\n\n", "msg_date": "Tue, 5 Mar 2019 07:09:01 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Fabien,\n\nOn Tue, 5 Mar 2019 07:09:01 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > Doc patch, against master. Documents encode() and decode() base64 \n> > format. \n> \n> It is already documented. Enhance documentation, though.\n\nRight. 
I was thinking that there are various implementations\nof the base64 data format and so it needed more than\njust to be named.\n\n> > Attached: doc_base64_v1.patch\n> >\n> > References RFC2045 section 6.8 to define base64. \n> \n> Did you consider referencing RFC 4648 instead?\n\nNot really. What drew me to document was the line\nbreaks every 76 characters. So I pretty much went\nstraight to the MIME RFC which says there should\nbe breaks at 76 characters.\n\nI can see advantages and disadvantages either way.\nMore or less extraneous information either semi\nor not base64 related in either RFC.\nWhich RFC do you think should be referenced?\n\nAttached: doc_base64_v2.patch\n\nThis new version adds a phrase clarifying that\ndecode errors are raised when trailing padding\nis wrong. Seemed like I may as well be explicit.\n\n(I am not entirely pleased with the double dash\nbut can't come up with anything better. And\ncan't make an emdash entity work either.)\n\nThanks for taking a look.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 5 Mar 2019 07:26:17 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Tue, 5 Mar 2019 07:26:17 -0600\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> (I am not entirely pleased with the double dash\n> but can't come up with anything better. And\n> can't make an emdash entity work either.)\n\nAttached: doc_base64_v3.patch\n\nThere is an mdash entity. This patch uses that.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 5 Mar 2019 07:46:36 -0600", "msg_from": "\"Karl O. 
Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> Attached: doc_base64_v3.patch\n\nI'm ok with referencing the historical MIME RFC.\n\n\"RFC2045 section 6.8\" -> \"RFC 2045 Section 6.8\"\n\nyou can link to the RFC directly with:\n\n<ulink url=\"https://tools.ietf.org/html/rfc2045#section-6.8\">RFC 2045 \nSection 6.8</ulink>\n\n-- \nFabien.\n\n", "msg_date": "Tue, 5 Mar 2019 23:02:26 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Fabien,\n\nOn Tue, 5 Mar 2019 23:02:26 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > Attached: doc_base64_v3.patch \n> \n> I'm ok with referencing the historical MIME RFC.\n\nFor the record, RFC 2045 is updated but not\nyet obsolete. The updates don't invalidate\nsection 6.8.\n\n> \"RFC2045 section 6.8\" -> \"RFC 2045 Section 6.8\"\n> \n> you can link to the RFC directly with:\n> \n> <ulink url=\"https://tools.ietf.org/html/rfc2045#section-6.8\">RFC 2045 \n> Section 6.8</ulink>\n\nDone.\n\nAttached: doc_base64_v4.patch\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 5 Mar 2019 19:55:22 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Tue, Mar 05, 2019 at 07:55:22PM -0600, Karl O. Pinc wrote:\n> Attached: doc_base64_v4.patch\n\nDetails about the \"escape\" mode are already available within the\ndescription of function \"encode\". Wouldn't we want to consolidate a\ndescription for all the modes at the same place, including some words\nfor hex? Your patch only includes the description of base64, which is\na good addition, still not consistent with the rest. 
A paragraph\nafter all the functions listed is fine I think as the description is\nlong so it would bloat the table if included directly.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 11:27:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Wed, 6 Mar 2019 11:27:38 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 05, 2019 at 07:55:22PM -0600, Karl O. Pinc wrote:\n> > Attached: doc_base64_v4.patch \n> \n> Details about the \"escape\" mode are already available within the\n> description of function \"encode\". Wouldn't we want to consolidate a\n> description for all the modes at the same place, including some words\n> for hex? Your patch only includes the description of base64, which is\n> a good addition, still not consistent with the rest. A paragraph\n> after all the functions listed is fine I think as the description is\n> long so it would bloat the table if included directly.\n\nMakes sense. (As did hyperlinking to the RFC.)\n\n(No matter how simple I think a patch is going to be it\nalways turns into a project. :)\n\nAttached: doc_base64_v5.patch\n\nMade index entries for hex and escape encodings.\n\nAdded word \"encoding\" to index entries.\n\nMade <varlist> entries with terms for\nbase64, hex, and escape encodings.\n\nAdded documentation for hex and escape encodings,\nincluding output formats and what are acceptable\ninputs.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 5 Mar 2019 23:23:20 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Tue, 5 Mar 2019 23:23:20 -0600\n\"Karl O. 
Pinc\" <kop@meme.com> wrote:\n\n> Added documentation for hex and escape encodings,\n> including output formats and what are acceptable\n> inputs.\n\nAttached: doc_base64_v6.patch\n\nAdded index entries for encode and decode functions\nabove the encoding documentation. As index entries\nare currently generated this does not make any\ndifference. But this paragraph is very far\nfrom the other encode/decode index targets on\nthe page.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Wed, 6 Mar 2019 07:09:48 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Wed, 6 Mar 2019 07:09:48 -0600\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> On Tue, 5 Mar 2019 23:23:20 -0600\n> \"Karl O. Pinc\" <kop@meme.com> wrote:\n> \n> > Added documentation for hex and escape encodings,\n> > including output formats and what are acceptable\n> > inputs. \n\nAttached: doc_base64_v7.patch\n\nImproved escape decoding sentence.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Wed, 6 Mar 2019 08:59:31 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\n> Attached: doc_base64_v7.patch\n\nPatch applies cleanly, doc compiles, navigation tested and ok.\n\n\"... section 6.8\" -> \"... Section 6.8\" (capital S).\n\n\"The string and the binary encode and decode functions...\" sentence looks \nstrange to me, especially with the English article that I do not really \nmaster, so maybe it is ok. 
I'd have written something more \nstraightforward, eg: \"Functions encode and decode support the following \nencodings:\", and also I'd use a direct \"Function <...>decode</...> ...\" \nrather than \"The <function>decode</function> function ...\" (twice).\n\nMaybe I'd use the exact same grammatical structure for all 3 cases, \nstarting with \"The <>whatever</> encoding converts bla bla bla\" instead of \nvarying the sentences.\n\nOtherwise, all explanations look both precise and useful to me.\n\n-- \nFabien.\n\n", "msg_date": "Wed, 6 Mar 2019 19:30:16 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Wed, 6 Mar 2019 19:30:16 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \"... section 6.8\" -> \"... Section 6.8\" (capital S).\n\nFixed.\n\n> \"The string and the binary encode and decode functions...\" sentence\n> looks strange to me, especially with the English article that I do\n> not really master, so maybe it is ok. I'd have written something more \n> straightforward, eg: \"Functions encode and decode support the\n> following encodings:\",\n\nIt is an atypical construction because I want to draw attention that\nthis is documentation not only for the encode() and decode() in\nsection 9.4. String Functions and Operators but also for\nthe encode() and decode in section 9.5. Binary String Functions \nand Operators. Although I can't think of a better approach\nit makes me uncomfortable that documentation written in\none section applies equally to functions in a different section.\n\nDo you think it would be useful to hyperlink the word \"binary\"\nto section 9.5?\n\nThe idiomatic phrasing would be \"Both the string and the binary\nencode and decode functions...\" but the word \"both\" adds\nno information. 
Shorter is better.\n\n> and also I'd use a direct \"Function\n> <...>decode</...> ...\" rather than \"The <function>decode</function>\n> function ...\" (twice).\n\nThe straightforward English would be \"Decode accepts...\". The problem\nis that this begins the sentence with the name of a function.\nThis does not work very well when the function name is all lower case,\nand can have other problems where clarity is lost depending \non documentation output formatting.\n\nI don't see a better approach.\n\n> Maybe I'd use the exact same grammatical structure for all 3 cases, \n> starting with \"The <>whatever</> encoding converts bla bla bla\"\n> instead of varying the sentences.\n\nAgreed. Good idea. The first paragraph of each term has to \ndo with encoding and the second with decoding. \nUniformity in starting the second paragraphs helps make \nthis clear, even though the first paragraphs are not uniform.\nWith this I am not concerned that the first paragraphs\ndo not have a common phrasing that's very explicit about\nbeing about encoding.\n\nAdjusted.\n\n> Otherwise, all explanations look both precise and useful to me.\n\nWhen writing I was slightly concerned about being overly precise;\npermanently committing to behavior that might (possibly) be an artifact\nof implementation. E.g., that hex decoding accepts both\nupper and lower case A-F characters, what input is ignored\nand what raises an error, etc. But it seems best\nto document existing behavior, all of which has existed so long\nanyway that changing it would be disruptive. If anybody cares\nthey can object.\n\nI wrote the docs by reading the code and did only a little\nactual testing to be sure that what I wrote is correct.\nI also did not check for regression tests which confirm\nthe behavior I'm documenting. (It wouldn't hurt to have\nsuch regression tests, if they don't already exist.\nBut writing regression tests is more than I want to take on \nwith this patch. Feel free to come up with tests. 
:-)\n\nI'm confident that the behavior I documented is how PG behaves\nbut you should know what I did in case you want further\nvalidation.\n\nAttached: doc_base64_v8.patch\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Wed, 6 Mar 2019 16:37:05 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Fabien (and Michael),\n\nOn Wed, 6 Mar 2019 16:37:05 -0600\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> I'm confident that the behavior I documented is how PG behaves\n> but you should know what I did in case you want further\n> validation.\n> \n> Attached: doc_base64_v8.patch\n\nFYI. To avoid a stall in the patch submission process.\n\nI notice that nobody has signed up as a reviewer for\nthis patch. When the patch looks \"ready\" it needs\nto be marked as such at the PG commitfest website\nand a committer will consider committing.\n\nThe commitfest URL is:\n\nhttps://commitfest.postgresql.org/23/\n\nNo rush.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n", "msg_date": "Sat, 9 Mar 2019 14:53:52 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\nI registered as a reviewer in the CF app.\n\n>> \"The string and the binary encode and decode functions...\" sentence\n>> looks strange to me, especially with the English article that I do\n>> not really master, so maybe it is ok. I'd have written something more\n>> straightforward, eg: \"Functions encode and decode support the\n>> following encodings:\",\n>\n> It is an atypical construction because I want to draw attention that \n> this is documentation not only for the encode() and decode() in section \n> 9.4. 
String Functions and Operators but also for the encode() and decode \n> in section 9.5. Binary String Functions and Operators. Although I can't \n> think of a better approach it makes me uncomfortable that documentation \n> written in one section applies equally to functions in a different \n> section.\n\nPeople coming from the binary doc would have no reason to look at the \nstring paragraph anyway.\n\n> Do you think it would be useful to hyperlink the word \"binary\"\n> to section 9.5?\n\nHmmm... I think that the link is needed in the other direction.\n\nI'd suggest (1) to use a simpler and direct sentence in the string \nsection, (2) to simplify/shorten the in cell description in the binary \nsection, and (3) to add an hyperlink from the binary section which would \npoint to the expanded explanation in the string section.\n\n> The idiomatic phrasing would be \"Both the string and the binary\n> encode and decode functions...\" but the word \"both\" adds\n> no information. Shorter is better.\n\nPossibly, although \"Both\" would insist on the fact that it applies to the \ntwo variants, which was your intention.\n\n>> and also I'd use a direct \"Function\n>> <...>decode</...> ...\" rather than \"The <function>decode</function>\n>> function ...\" (twice).\n>\n> The straightforward English would be \"Decode accepts...\". The problem\n> is that this begins the sentence with the name of a function.\n> This does not work very well when the function name is all lower case,\n> and can have other problems where clarity is lost depending\n> on documentation output formatting.\n\nYep.\n\n> I don't see a better approach.\n\nI suggested \"Function <>decode</> ...\", which is the kind of thing we do \nin academic writing to improve precision, because I thought it could be \nbetter:-)\n\n>> Maybe I'd use the exact same grammatical structure for all 3 cases,\n>> starting with \"The <>whatever</> encoding converts bla bla bla\"\n>> instead of varying the sentences.\n>\n> Agreed. 
Good idea. The first paragraph of each term has to\n> do with encoding and the second with decoding.\n\n\n> Uniformity in starting the second paragraphs helps make\n> this clear, even though the first paragraphs are not uniform.\n> With this I am not concerned that the first paragraphs\n> do not have a common phrasing that's very explicit about\n> being about encoding.\n>\n> Adjusted.\n\nCannot see it fully in the v8 patch:\n\n - The <literal>base64</literal> encoding is\n - <literal>hex</literal> represents\n - <literal>escape</literal> converts\n\n>> Otherwise, all explanations look both precise and useful to me.\n>\n> When writing I was slightly concerned about being overly precise;\n\nHmmm. That is a technical documentation, a significant degree of precision \nis expected.\n\n-- \nFabien.\n\n", "msg_date": "Sun, 10 Mar 2019 08:15:35 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Fabien,\n\nOn Sun, 10 Mar 2019 08:15:35 +0100 (CET)\nFabien COELHO <coel\n\n> I registered as a reviewer in the CF app.\n\nThanks.\n\nWhat's causing problems here is that the encode and decode\nfunctions are listed in both the string functions section\nand the binary functions section. A related but not-relevant\nproblem is that there are functions listed in the string\nfunction section which take binary input.\n\nI asked about this on IRC and the brief reply was\nunflattering to the existing documentation.\n\nSo I'm going to fix this also. 
3 patches attached:\n\ndoc_base64_part1_v9.patch\n\n This moves functions taking bytea and other non-string\n input into the binary string section, and vice versa.\n Eliminates duplicate encode() and decode() documentation.\n\n Affects: convert(bytea, name, name)\n convert_from(bytea, name)\n encode(bytea, text)\n length(bytea, name)\n quote_nullable(anytype)\n to_hex(int or bigint)\n decode(text, text)\n\n Only moves, eliminates duplicates, and adjusts indentation.\n\n\ndoc_base64_part2_v9.patch\n\n Cleanup wording after moving functions between sections.\n\n\ndoc_base64_part3_v9.patch\n\n Documents base64, hex, and escape encode() and decode()\n formats.\n\n> >> \"The string and the binary encode and decode functions...\" sentence\n> >> looks strange to me, especially with the English article that I do\n> >> not really master, so maybe it is ok. I'd have written something\n> >> more straightforward, eg: \"Functions encode and decode support the\n> >> following encodings:\", \n> >\n> > It is an atypical construction because I want to draw attention\n> > that this is documentation not only for the encode() and decode()\n> > in section 9.4. String Functions and Operators but also for the\n> > encode() and decode in section 9.5. Binary String Functions and\n> > Operators. Although I can't think of a better approach it makes me\n> > uncomfortable that documentation written in one section applies\n> > equally to functions in a different section. \n> \n> People coming from the binary doc would have no reason to look at the \n> string paragraph anyway.\n> \n> > Do you think it would be useful to hyperlink the word \"binary\"\n> > to section 9.5? \n> \n> Hmmm... 
I think that the link is needed in the other direction.\n\nI'm not sure what you mean here or if it's still relevant.\n\n> I'd suggest (1) to use a simpler and direct sentence in the string \n> section, (2) to simplify/shorten the in cell description in the\n> binary section, and (3) to add an hyperlink from the binary section\n> which would point to the expanded explanation in the string section.\n> \n> > The idiomatic phrasing would be \"Both the string and the binary\n> > encode and decode functions...\" but the word \"both\" adds\n> > no information. Shorter is better. \n> \n> Possibly, although \"Both\" would insist on the fact that it applies to\n> the two variants, which was your intention.\n\nI think this is no longer relevant. Although I'm not sure what\nyou mean by 3. The format names already hyperlink back to the\nstring docs.\n\n> >> and also I'd use a direct \"Function\n> >> <...>decode</...> ...\" rather than \"The <function>decode</function>\n> >> function ...\" (twice). \n> >\n> > The straightforward English would be \"Decode accepts...\". The\n> > problem is that this begins the sentence with the name of a\n> > function. This does not work very well when the function name is\n> > all lower case, and can have other problems where clarity is lost\n> > depending on documentation output formatting. \n> \n> Yep.\n> \n> > I don't see a better approach. \n> \n> I suggested \"Function <>decode</> ...\", which is the kind of thing we\n> do in academic writing to improve precision, because I thought it\n> could be better:-)\n\n\"Function <>decode</> ...\" just does not work in English.\n\n> >> Maybe I'd use the exact same grammatical structure for all 3 cases,\n> >> starting with \"The <>whatever</> encoding converts bla bla bla\"\n> >> instead of varying the sentences. \n> >\n> > Agreed. Good idea. The first paragraph of each term has to\n> > do with encoding and the second with decoding. 
\n> \n> \n> > Uniformity in starting the second paragraphs helps make\n> > this clear, even though the first paragraphs are not uniform.\n> > With this I am not concerned that the first paragraphs\n> > do not have a common phrasing that's very explicit about\n> > being about encoding.\n> >\n> > Adjusted. \n> \n> Cannot see it fully in the v8 patch:\n> \n> - The <literal>base64</literal> encoding is\n> - <literal>hex</literal> represents\n> - <literal>escape</literal> converts\n\nI did only the decode paras. I guess no reason not to make\nthe first paras uniform as well. Done.\n\nI also alphabetized by format name.\n\nI hope that 3 patches will make review easier.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 11 Mar 2019 15:32:14 -0500", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Er, ping. Nobody has reviewed the latest patchs.\nThey still apply to master...\n\nI am re-attaching the patches. See descriptions\nbelow.\n\nOn Mon, 11 Mar 2019 15:32:14 -0500\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> On Sun, 10 Mar 2019 08:15:35 +0100 (CET)\n> Fabien COELHO <coel\n\n> What's causing problems here is that the encode and decode\n> functions are listed in both the string functions section\n> and the binary functions section. A related but not-relevant\n> problem is that there are functions listed in the string\n> function section which take binary input.\n> \n> I asked about this on IRC and the brief reply was\n> unflattering to the existing documentation.\n> \n> So I'm going to fix this also. 
3 patches attached:\n> \n> doc_base64_part1_v9.patch\n> \n> This moves functions taking bytea and other non-string\n> input into the binary string section, and vice versa.\n> Eliminates duplicate encode() and decode() documentation.\n> \n> Affects: convert(bytea, name, name)\n> convert_from(bytea, name)\n> encode(bytea, text)\n> length(bytea, name)\n> quote_nullable(anytype)\n> to_hex(int or bigint)\n> decode(text, text)\n> \n> Only moves, eliminates duplicates, and adjusts indentation.\n> \n> \n> doc_base64_part2_v9.patch\n> \n> Cleanup wording after moving functions between sections.\n> \n> \n> doc_base64_part3_v9.patch\n> \n> Documents base64, hex, and escape encode() and decode()\n> formats.\n> \n> > >> \"The string and the binary encode and decode functions...\"\n> > >> sentence looks strange to me, especially with the English\n> > >> article that I do not really master, so maybe it is ok. I'd have\n> > >> written something more straightforward, eg: \"Functions encode\n> > >> and decode support the following encodings:\", \n> > >\n> > > It is an atypical construction because I want to draw attention\n> > > that this is documentation not only for the encode() and decode()\n> > > in section 9.4. String Functions and Operators but also for the\n> > > encode() and decode in section 9.5. Binary String Functions and\n> > > Operators. Although I can't think of a better approach it makes me\n> > > uncomfortable that documentation written in one section applies\n> > > equally to functions in a different section. \n> > \n> > People coming from the binary doc would have no reason to look at\n> > the string paragraph anyway.\n> > \n> > > Do you think it would be useful to hyperlink the word \"binary\"\n> > > to section 9.5? \n> > \n> > Hmmm... I think that the link is needed in the other direction. 
\n> \n> I'm not sure what you mean here or if it's still relevant.\n> \n> > I'd suggest (1) to use a simpler and direct sentence in the string \n> > section, (2) to simplify/shorten the in cell description in the\n> > binary section, and (3) to add an hyperlink from the binary section\n> > which would point to the expanded explanation in the string section.\n> > \n> > > The idiomatic phrasing would be \"Both the string and the binary\n> > > encode and decode functions...\" but the word \"both\" adds\n> > > no information. Shorter is better. \n> > \n> > Possibly, although \"Both\" would insist on the fact that it applies\n> > to the two variants, which was your intention. \n> \n> I think this is no longer relevant. Although I'm not sure what\n> you mean by 3. The format names already hyperlink back to the\n> string docs.\n> \n> > >> and also I'd use a direct \"Function\n> > >> <...>decode</...> ...\" rather than \"The\n> > >> <function>decode</function> function ...\" (twice). \n> > >\n> > > The straightforward English would be \"Decode accepts...\". The\n> > > problem is that this begins the sentence with the name of a\n> > > function. This does not work very well when the function name is\n> > > all lower case, and can have other problems where clarity is lost\n> > > depending on documentation output formatting. \n> > \n> > Yep.\n> > \n> > > I don't see a better approach. \n> > \n> > I suggested \"Function <>decode</> ...\", which is the kind of thing\n> > we do in academic writing to improve precision, because I thought it\n> > could be better:-) \n> \n> \"Function <>decode</> ...\" just does not work in English.\n> \n> > >> Maybe I'd use the exact same grammatical structure for all 3\n> > >> cases, starting with \"The <>whatever</> encoding converts bla\n> > >> bla bla\" instead of varying the sentences. \n> > >\n> > > Agreed. Good idea. The first paragraph of each term has to\n> > > do with encoding and the second with decoding. 
\n> > \n> > \n> > > Uniformity in starting the second paragraphs helps make\n> > > this clear, even though the first paragraphs are not uniform.\n> > > With this I am not concerned that the first paragraphs\n> > > do not have a common phrasing that's very explicit about\n> > > being about encoding.\n> > >\n> > > Adjusted. \n> > \n> > Cannot see it fully in the v8 patch:\n> > \n> > - The <literal>base64</literal> encoding is\n> > - <literal>hex</literal> represents\n> > - <literal>escape</literal> converts \n> \n> I did only the decode paras. I guess no reason not to make\n> the first paras uniform as well. Done.\n> \n> I also alphabetized by format name.\n> \n> I hope that 3 patches will make review easier.\n\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Wed, 8 May 2019 16:10:35 -0500", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\n> Er, ping. Nobody has reviewed the latest patchs.\n\nNext CF is in July, two months away.\n\nYou might consider reviewing other people patches, that is expected to \nmake the overall process work. There are several documentation or \ncomment patches in the queue.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 9 May 2019 06:50:12 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Thu, 9 May 2019 06:50:12 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> You might consider reviewing other people patches, that is expected\n> to make the overall process work. There are several documentation or \n> comment patches in the queue.\n\nUnderstood.\n\nI thought I had built up some reviewing credit, from some time\nago. 
But perhaps that just made up for previous patches.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Thu, 9 May 2019 08:27:53 -0500", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> doc_base64_part1_v9.patch\n> Only moves, eliminates duplicates, and adjusts indentation.\n\n> doc_base64_part2_v9.patch\n> Cleanup wording after moving functions between sections.\n>\n> doc_base64_part3_v9.patch\n> Documents base64, hex, and escape encode() and decode()\n> formats.\n\n>> I suggested \"Function <>decode</> ...\", which is the kind of thing we\n>> do in academic writing to improve precision, because I thought it\n>> could be better:-)\n>\n> \"Function <>decode</> ...\" just does not work in English.\n\nIt really works in research papers: \"Theorem X can be proven by applying \nProposition Y. See Figure 2 for details. Algorithm Z describes whatever,\nwhich is listed in Table W...\"\n\n> I also alphabetized by format name.\n\nGood:-)\n\n> I hope that 3 patches will make review easier.\n\nNot really. I'm reviewing the 3 patches put together rather than each one \nindividually, which would require more work.\n\nPatch applies cleanly. Doc build ok.\n\nI looked at the html output, and it seems ok, including navigating to \nconversions or formats explanations.\n\nThis documentation patch is an overall improvement and clarifies things, \nincluding some error conditions.\n\nconvert: I'd merge the first two sentences to state that it converts from X \nto Y. The doc does not say explicitly what happens if a character cannot \nbe converted. After testing, an error is raised. 
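For instance, a quick psql check on my side (a sketch of my own, assuming a UTF8 database; this is not taken from the patch):

```sql
-- characters representable in the target encoding convert fine
SELECT convert('Hello'::bytea, 'UTF8', 'LATIN1');

-- a character with no equivalent in the target encoding errors out;
-- the euro sign, for example, does not exist in LATIN1:
-- SELECT convert('€'::bytea, 'UTF8', 'LATIN1');
```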
The example comment could \nadd \", if possible\".\n\nto_hex: add \".\" at the end of the sentence?\n\nOther descriptions seem ok.\n\nMinor comment: you usually put two spaces between a \".\" and the first \nword of the next sentence, but not always.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 12 Jul 2019 15:58:21 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Fabien,\n\nAttached is doc_base64_v10.patch\n\nOn Fri, 12 Jul 2019 15:58:21 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> >> I suggested \"Function <>decode</> ...\", which is the kind of thing\n> >> we do in academic writing to improve precision, because I thought\n> >> it could be better:-) \n> >\n> > \"Function <>decode</> ...\" just does not work in English. \n> \n> It really works in research papers: \"Theorem X can be proven by\n> applying Proposition Y. See Figure 2 for details. Algorithm Z\n> describes whatever, which is listed in Table W...\"\n\nI've not thought about it before but I suppose the difference\nis between declarative and descriptive, the latter being\nmore inviting and better allows for flow between sentences.\nOtherwise you're writing in bullet points. So it is a\nquestion of balance between specification and narration.\nIn regular prose you're always going to see the \"the\"\nunless the sentence starts with the name. The trouble\nis that we can't start sentences with function names\nbecause of capitalization confusion.\n\n> > I hope that 3 patches will make review easier. \n> \n> Not really. I'm reviewing the 3 patches put together rather than each\n> one individually, which would require more work.\n\nI figured with e.g. a separate patch for moving and alphabetizing\nthat it'd be easier to check that nothing got lost. Anyhow,\njust one patch this time.\n\n> convert: I'd merge the 2 first sentences to state that if convert\n> from X to Y. 
The doc does not say explicitely what happens if a\n> character cannot be converted. After testing, an error is raised. The\n> example comment could add \", if possible\".\n\nDone. Good idea. I reworked the whole paragraph to shorten and\nclarify since I was altering it anyway. This does introduce\nsome inconsistency with wording that appears elsewhere but it seems\nworth it because the listentry box was getting overfull.\n\n> to_hex: add \".\" at the end of the sentence?\n\nI left as-is, without a \".\". The only consistent rule about\nperiods in the listentrys seems to be that if there's more than\none sentence then there's periods -- I think. In any case a\nlot of them don't have periods and probably don't need\nperiods. I don't know what to do and since the original did\nnot have a period it seems better to leave well enough alone.\n\n> Minor comment: you usually put two spaces between a \".\" and the first \n> world of then next sentence, but not always.\n\nI now always put 2 spaces after the end of a sentence. But\nI've only done this where I've changed text, not when\nmoving pre-existing text around. Again, there seems\nto be no consistency in the original. (I believe docbook\nredoes all inter-sentence spacing anyway.)\n\nThanks for the help.\n\nI'll be sure to sign up to review a patch (or patches) when life\npermits.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Sat, 13 Jul 2019 17:03:37 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n>> It really works in research papers: \"Theorem X can be proven by\n>> applying Proposition Y. See Figure 2 for details. 
Algorithm Z\n>> describes whatever, which is listed in Table W...\"\n>\n> I've not thought about it before but I suppose the difference is between \n> declarative and descriptive, the latter being more inviting and better \n> allows for flow between sentences. Otherwise you're writing in bullet \n> points. So it is a question of balance between specification and \n> narration. In regular prose you're always going to see the \"the\" unless \n> the sentence starts with the name. The trouble is that we can't start \n> sentences with function names because of capitalization confusion.\n\nSure. For me \"Function\" would work as a title on its name, as in \"Sir \nSamuel\", \"Doctor Frankenstein\", \"Mister Bean\", \"Professor Layton\"... \n\"Function sqrt\" and solves the casing issue on the function name which is \nbetter not capitalized.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 14 Jul 2019 11:07:17 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> Attached is doc_base64_v10.patch\n\nPatch applies cleanly. 
Doc gen ok.\n\nThe patch clarifies the documentation about encode/decode and other \ntext/binary string conversion functions.\n\nNo further comments beyond the title thing (Function x) already discussed, \nwhich is not a stopper.\n\nPatch marked as ready.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 15 Jul 2019 23:00:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Mon, 15 Jul 2019 23:00:55 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> The patch clarifies the documentation about encode/decode and other \n> text/binary string conversion functions.\n\nOther notable changes:\n\n Corrects categorization of functions as string or binary.\n\n Reorders functions alphabetically by function name.\n\n\nThanks very much Fabien.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Mon, 15 Jul 2019 18:15:57 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> On Mon, 15 Jul 2019 23:00:55 +0200 (CEST)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> The patch clarifies the documentation about encode/decode and other \n>> text/binary string conversion functions.\n\n> Other notable changes:\n> Corrects categorization of functions as string or binary.\n> Reorders functions alphabetically by function name.\n\nSo I took a look at this, expecting that after so much discussion it\nought to just be committable ... but I am befuddled by your choices\nabout which functions to move where. 
It seems entirely crazy that\nencode() and decode() are no longer in the same section, likewise that\nconvert_from() and convert_to() aren't documented together anymore.\nI'm not sure what is the right dividing line between string and binary\nfunctions, but I don't think that anyone is going to find this\ndivision helpful.\n\nI do agree that documenting some functions twice is a bad plan,\nso we need to clean this up somehow.\n\nAfter some thought, it seems like maybe a workable approach would be\nto consider that all conversion functions going between text and\nbytea belong in the binary-string-functions section. I think it's\nreasonable to say that plain \"string functions\" just means stuff\ndealing with text.\n\nPossibly we could make a separate table in the binary-functions\nsection just for conversions, although that feels like it might be\noverkill.\n\nWhile we're on the subject, Table 9.11 (conversion names) seems\nentirely misplaced, and I don't just mean that it would need to\nmigrate to the binary-functions page. I don't think it belongs\nin func.sgml at all. Isn't it pretty duplicative of Table 23.2\n(Client/Server Character Set Conversions)? I think we should\nunify it with that table, or at least put it next to that one.\nPerhaps that's material for a separate patch though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 11:40:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Tue, 30 Jul 2019 11:40:03 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> > On Mon, 15 Jul 2019 23:00:55 +0200 (CEST)\n> > Fabien COELHO <coelho@cri.ensmp.fr> wrote: \n> >> The patch clarifies the documentation about encode/decode and\n> >> other text/binary string conversion functions. 
\n> \n> > Other notable changes:\n> > Corrects categorization of functions as string or binary.\n> > Reorders functions alphabetically by function name. \n> \n> So I took a look at this, expecting that after so much discussion it\n> ought to just be committable ...\n\nIt started simple, just changing the base64 function descriptions,\nbut critique drew in additional issues.\n\n> but I am befuddled by your choices\n> about which functions to move where. \n\nThe grouping is by the data type on which each function operates,\nthe data type of the input.\n\nIf there's to be 2 categories, as in the\nexisting docs, it seems to me that you have to categorize either by\nthe data type input or data type output. To categorize by input\nand output together would result 4 (or more?) categories, which\nwould be even crazier.\n\n> It seems entirely crazy that\n> encode() and decode() are no longer in the same section, likewise that\n> convert_from() and convert_to() aren't documented together anymore.\n\nAwkward, yes. But findable if you know what the categories are.\n\nI suppose there could be 3 different categories: those that input\nand output strings, those that input and output binary, and those\nthat convert -- inputting one data type and outputting another.\n\nI'm not sure that this would really address the issue of documenting,\nsay encode() and decode() together. It pretty much makes sense to\nalphabetize the functions _within_ each category, because that's\nabout the only easily defined way to do it. 
Going \"by feel\" and\nputting encode() and decode() together raises the question of\nwhere they should be together in the overall ordering within\nthe category.\n\n> I'm not sure what is the right dividing line between string and binary\n> functions, but I don't think that anyone is going to find this\n> division helpful.\n\nMaybe there's a way to make more clear what the categories are?\nI could be explicit in the description of the section.\n\n> I do agree that documenting some functions twice is a bad plan,\n> so we need to clean this up somehow.\n>\n> After some thought, it seems like maybe a workable approach would be\n> to consider that all conversion functions going between text and\n> bytea belong in the binary-string-functions section. I think it's\n> reasonable to say that plain \"string functions\" just means stuff\n> dealing with text.\n\nOk. Should the section title remain unchanged?\n\"Binary String Functions and Operators\"\n\nI think the summary description of the section will need\na little clarification.\n\n> Possibly we could make a separate table in the binary-functions\n> section just for conversions, although that feels like it might be\n> overkill.\n\nI have no good answers. An advantage to a separate section\nfor conversions is that you _might_ be able to pair the functions,\nso that encode() and decode() do show up right next to each other.\n\nI'm not sure exactly how to structure \"pairing\". I would have to\nplay around and see what might look good.\n\n> While we're on the subject, Table 9.11 (conversion names) seems\n> entirely misplaced, and I don't just mean that it would need to\n> migrate to the binary-functions page. I don't think it belongs\n> in func.sgml at all. Isn't it pretty duplicative of Table 23.2\n> (Client/Server Character Set Conversions)? I think we should\n> unify it with that table, or at least put it next to that one.\n> Perhaps that's material for a separate patch though.\n\nI don't know. 
But it does seem something that can be addressed\nin isolation and suitable for its own patch.\n\nThanks for the help. I will wait for a response to this\nbefore submitting another patch, just in case someone sees any\nideas here to be followed up on.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Tue, 30 Jul 2019 13:27:08 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "My 0.02 €\n\n>> It seems entirely crazy that encode() and decode() are no longer in the \n>> same section, likewise that convert_from() and convert_to() aren't \n>> documented together anymore.\n>\n> Awkward, yes. But findable if you know what the categories are.\n>\n> I suppose there could be 3 different categories: those that input\n> and output strings, those that input and output binary, and those\n> that convert -- inputting one data type and outputting another.\n\nPersonally, I'd be ok with having a separate \"conversion function\" table, \nand also with Tom's suggestion to have string functions with \"only simple \nstring\" functions, and if any binary appears it is moved into the binary \nsection.\n\n-- \nFabien.", "msg_date": "Tue, 30 Jul 2019 23:44:49 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Wed, Jul 31, 2019 at 6:27 AM Karl O. Pinc <kop@karlpinc.com> wrote:\n> On Tue, 30 Jul 2019 11:40:03 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [review]\n\n> Thanks for the help. 
I will wait for a response to this\n> before submitting another patch, just in case someone sees any\n> ideas here to be followed up on.\n\nBased on the above, I set this back to \"Waiting on Author\", and moved\nit to the September CF.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 23:47:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Tue, 30 Jul 2019 23:44:49 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> Personnaly, I'd be ok with having a separate \"conversion function\"\n> table, and also with Tom suggestion to have string functions with\n> \"only simple string\" functions, and if any binary appears it is moved\n> into the binary section.\n\nI'll make a \"conversion function\" table and put it in the binary\nsection.\n\nBut I'm not happy with putting any function that works with\nbytea into the binary string section. This would mean moving,\nsay, length() out of the regular string section. There's a\nlot of functions that work on both string and bytea inputs\nand most (not all, see below) are functions that people\ntypically associate with string data.\n\nWhat I think I'd like to do is add a column to the table\nin the string section that says whether or not the function\nworks with both string and bytea. The result would be:\n\nThe hash functions (md5(), sha256(), etc.) would move\nto the string section, because they work on both strings\nand binary data. So the binary function table would\nconsiderably shorten.\n\nThere would be a new table for conversions between\nbytea and string (both directions). This would\nbe placed in the binary string section.\n\nSo the binary string section would be \"just simple bytea\", plus\nconversion functions. Kind of the opposite of Tom's\nsuggestion.\n\nPlease let me know what you think. 
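To make the overlap concrete, a psql sketch of my own (assuming a
UTF8 database; which table each of these belongs in is exactly the
question):

```sql
-- Same function name, different semantics: characters vs. bytes.
SELECT length('déjà');          -- 4, counting characters
SELECT length('déjà'::bytea);   -- 6, counting bytes (in UTF8)

-- The hash functions accept bytea but differ in result type.
SELECT pg_typeof(md5('abc'::bytea));     -- text (hex digits)
SELECT pg_typeof(sha256('abc'::bytea));  -- bytea
```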
Thanks.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Fri, 2 Aug 2019 09:32:53 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> But I'm not happy with putting any function that works with\n> bytea into the binary string section. This would mean moving,\n> say, length() out of the regular string section. There's a\n> lot of functions that work on both string and bytea inputs\n> and most (not all, see below) are functions that people\n> typically associate with string data.\n\nWell, there are two different length() functions --- length(text)\nand length(bytea) are entirely different things, they don't even\nmeasure in the same units. I think documenting them separately\nis the right thing to do. I don't really have a problem with\nrepeating the entries for other functions that exist in both\ntext and bytea variants, either. There aren't that many.\n\n> What I think I'd like to do is add a column to the table\n> in the string section that says whether or not the function\n> works with both string and bytea.\n\nMeh. Seems like what that would mostly do is ensure that\nneither page is understandable on its own.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Aug 2019 10:44:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On 8/2/19 10:32 AM, Karl O. Pinc wrote:\n\n> But I'm not happy with putting any function that works with\n> bytea into the binary string section. This would mean moving,\n> say, length() out of the regular string section.\n\nI'm not sure why. 
The bytea section already has an entry for its\nlength() function.\n\nThere are also length() functions for bit, character, lseg, path,\ntsvector....\n\nI don't really think of those as \"a length() function\" that works\non all those types. I think of a variety of types, many of which\noffer a length() function.\n\nThat seems to be reflected in the way the docs are arranged.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 2 Aug 2019 11:00:42 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Fri, 02 Aug 2019 10:44:43 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I don't really have a problem with\n> repeating the entries for other functions that exist in both\n> text and bytea variants, either.\n\nOk. Thanks. I'll repeat entries then.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Fri, 2 Aug 2019 10:09:36 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On 2019-Aug-02, Karl O. Pinc wrote:\n\n> On Fri, 02 Aug 2019 10:44:43 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > I don't really have a problem with\n> > repeating the entries for other functions that exist in both\n> > text and bytea variants, either.\n> \n> Ok. Thanks. I'll repeat entries then.\n\nHello Karl,\n\nAre you submitting an updated version soon?\n\nTom, you're still listed as committer for this patch. 
Just a heads up.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Sep 2019 13:56:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi Alvaro,\n\nOn Mon, 2 Sep 2019 13:56:28 -0400\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Are you submitting an updated version soon?\n\nI don't expect to be able to make a new patch for at\nleast another week.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Mon, 2 Sep 2019 13:42:35 -0500", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hi,\n\nAttached is doc_base64_v11.patch\n\nThis addresses Tom's concerns. Functions\nthat operate on both strings and bytea\n(e.g. length(text) and length(bytea))\nare documented separately, one with\nstring functions and one with binary\nstring functions.\n\nIn this iteration I have also:\n\nAdded a sub-section for the functions\nwhich convert between text and bytea.\n\nAdded some index entries.\n\nProvided a link in the hash functions to\nthe text about why md5() returns text\nnot bytea.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Tue, 3 Dec 2019 15:45:11 -0600", "msg_from": "\"Karl O. 
Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> Attached is doc_base64_v11.patch\n\nPatch applies cleanly and compiles.\n\nI'm in favor of moving and reorganizing these function descriptions, as \nthey are somehow scattered with a unclear logic when you are looking for \nthem.\n\n + <entry><literal><parameter>bytea</parameter> <literal>||</literal>\n + <parameter>bytea</parameter></literal></entry>\n <entry> <type>bytea</type> </entry>\n <entry>\n String concatenation\n\nBytea concatenation?\n\nI'm not keen on calling the parameter the name of its type. I'd suggest to \nkeep \"string\" as a name everywhere, which is not a type name in Pg.\n\nThe functions descriptions are not homogeneous. Some have parameter name & \ntype \"btrim(string bytea, bytes bytea)\" and others only type or parameter \nwith tagged as a parameter \"get_bit(bytea, offset)\" (first param), \n\"sha224(bytea)\".\n\nI'd suggest to be consistent, eg use \"string bytea\" everywhere \nappropriate.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 5 Jan 2020 12:48:59 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hello Fabien,\n\nOn Sun, 5 Jan 2020 12:48:59 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> I'm in favor of moving and reorganizing these function descriptions,\n> as they are somehow scattered with a unclear logic when you are\n> looking for them.\n\nI assume by this you mean you are happy with the organization done\nby the patch.\n\nFor review (I think I've got this right) the organizational\nchanges are:\n\nThe changes suggested by Tom where 2\nfunctions with the same name, one of which takes string \narguments and the other of which takes bytea arguments\nnow show up both in the doc section on string functions and\nin the doc section on bytea functions.\n\nI believe I also 
alphabetized the binary function ordering. \n\nAnd this patch introduces a separate table/sub-section for \nfunctions which convert between binary and string.)\nThere are much-expanded descriptions of encode()\nand decode(). (Which is how this patch series\nstarted, explaining base64 encoding/decoding.)\n\nFYI. There is also an unusual usage of a hyperlinked\nasterisk following the returned datatype of the hash\nfunctions. The hyperlink leads to the historical\nnote on the datatype used for the md5() function v.s.\nthe other hash functions.\n\n> + <entry><literal><parameter>bytea</parameter>\n> <literal>||</literal>\n> + <parameter>bytea</parameter></literal></entry>\n> <entry> <type>bytea</type> </entry>\n> <entry>\n> String concatenation\n> \n> Bytea concatenation?\n\nDone. (Could just say \"Concatenation\" I suppose. But \"Bytea\nconcatenation\" does not hurt and would be nice if you\never looked at the tables for string operators and bytea\noperators side-by-side.)\n\n> I'm not keen on calling the parameter the name of its type. I'd\n> suggest to keep \"string\" as a name everywhere, which is not a type\n> name in Pg.\n> \n> The functions descriptions are not homogeneous. Some have parameter\n> name & type \"btrim(string bytea, bytes bytea)\" and others only type\n> or parameter with tagged as a parameter \"get_bit(bytea,\n> offset)\" (first param), \"sha224(bytea)\".\n> \n> I'd suggest to be consistent, eg use \"string bytea\" everywhere \n> appropriate.\n\nOk. Done. Except that I've left the encode() function\nas encode(data bytea, format text) because the whole\npoint is to convert _to_ a string/text datatype\nfrom something that's _not_ a string. Calling\nthe input a string just seems wrong. This inconsistency seems ok\nbecause encode() is in the table of string <-> bytea functions,\naway from the other bytea functions.\n\n\nIf you're interested, another possibility would be the\nconsistent use of \"data bytea\" everywhere. 
I like this\nchoice because it works well to write\nencode(<parameter>data</parameter> bytea,\n <parameter>format</parameter text), and probably\nworks well in other places too. But then the word\n\"string\" does not really fit in a lot of the descriptions.\nSo this choice would involve re-writing descriptions so\nthat the existing description:\n\n btrim(<parameter>string</parameter> bytea,\n <parameter>bytes</parameter> bytea)\n\n Remove the longest string containing only bytes appearing in\n <parameter>bytes</parameter> from the start and end of\n <parameter>string</parameter>\n\n\nWould change to (say):\n\n btrim(<parameter>data</parameter> bytea,\n <parameter>bytes</parameter> bytea)\n\n Remove the longest contiguous sequence of bytes containing only\n those bytes appearing in <parameter>bytes</parameter>\n from the start and end of <parameter>data</parameter>\n\nThe trouble with using \"data bytea\" is that there might\nneed to be adjustments to the word \"string\" elsewhere in\nthe section, not just in the descriptions.\n\nLet me know if you'd prefer \"data bytea\" to \"string bytea\"\nand consequent frobbing of descriptions. That might be\nout-of-scope for this patch. (Which is already\na poster-child for feature-creep.)\n\nAttached is doc_base64_v12.patch.\n\nRegards,\n\nKarl <kop@meme.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Mon, 6 Jan 2020 01:35:00 -0600", "msg_from": "\"Karl O. Pinc\" <kop@meme.com>", "msg_from_op": true, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Mon, 6 Jan 2020 01:35:00 -0600\n\"Karl O. Pinc\" <kop@meme.com> wrote:\n\n> On Sun, 5 Jan 2020 12:48:59 +0100 (CET)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > I'm not keen on calling the parameter the name of its type. I'd\n> > suggest to keep \"string\" as a name everywhere, which is not a type\n> > name in Pg.\n> > \n> > The functions descriptions are not homogeneous. 
Some have parameter\n> > name & type \"btrim(string bytea, bytes bytea)\" and others only type\n> > or parameter with tagged as a parameter \"get_bit(bytea,\n> > offset)\" (first param), \"sha224(bytea)\".\n> > \n> > I'd suggest to be consistent, eg use \"string bytea\" everywhere \n> > appropriate. \n> \n> Ok. Done. \n\n> If you're interested, another possibility would be the\n> consistent use of \"data bytea\" everywhere.\n\n> But then the word\n> \"string\" does not really fit in a lot of the descriptions.\n> So this choice would involve re-writing descriptions \n...\n\n> The trouble with using \"data bytea\" is that there might\n> need to be adjustments to the word \"string\" elsewhere in\n> the section, not just in the descriptions.\n> \n> Let me know if you'd prefer \"data bytea\" to \"string bytea\"\n> and consequent frobbing of descriptions. That might be\n> out-of-scope for this patch. (Which is already\n> a poster-child for feature-creep.)\n\nAnother option would be to use \"bytes bytea\".\n(The current patch uses \"string bytea\".)\nThis would probably also require some re-wording throughout.\n\nPlease let me know your preference. Thanks.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Wed, 8 Jan 2020 22:32:26 -0600", "msg_from": "\"Karl O. 
Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "\nHello Karl,\n\n> Another option would be to use \"bytes bytea\".\n\n> (The current patch uses \"string bytea\".)\n> This would probably also require some re-wording throughout.\n\n> Please let me know your preference.\n\nI like it, but this is only my own limited opinion, and I'm not a native \nEnglish speaker.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 9 Jan 2020 08:27:28 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Thu, 9 Jan 2020 08:27:28 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > Another option would be to use \"bytes bytea\". \n> \n> > (The current patch uses \"string bytea\".)\n> > This would probably also require some re-wording throughout. \n\n> I like it, but this is only my own limited opinion, and I'm not a\n> native English speaker.\n\nPer your request for consistency I made this change throughout \nthe entire binary string section.\n\nNew patch attached: doc_base64_v13.patch\n\nThis required surprisingly little re-wording.\nAdded word \"binary\" into the descriptions of convert(),\nsubstring(), convert_from(), and convert_to().\n\nI also added data types to the call syntax of set_bit() \nand set_byte().\n\nAnd this patch adds hyperlinks from the get_bit(), get_byte(),\nset_bit(), and set_byte() descriptions to the note\nthat offsets are zero-based.\n\nI also removed the hyperlinked asterisks about the hash\nfunction results and instead hyperlinked the word \"hash\"\nin the descriptions. (Links to the note about md5()\nreturning hex text and the others returning bytea and how\nto convert between the two.)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. 
Heinlein", "msg_date": "Thu, 9 Jan 2020 09:23:46 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Hello Karl,\n\n> New patch attached: doc_base64_v13.patch\n>\n> This required surprisingly little re-wording.\n> Added word \"binary\" into the descriptions of convert(),\n> substring(), convert_from(), and convert_to().\n>\n> I also added data types to the call syntax of set_bit()\n> and set_byte().\n>\n> And this patch adds hyperlinks from the get_bit(), get_byte(),\n> set_bit(), and set_byte() descriptions to the note\n> that offsets are zero-based.\n>\n> I also removed the hyperlinked asterisks about the hash\n> function results and instead hyperlinked the word \"hash\"\n> in the descriptions. (Links to the note about md5()\n> returning hex text and the others returning bytea and how\n> to convert between the two.)\n\nPatch applies cleanly and compiles.\n\nMy 0.02€: The overall restructuring and cross references are an \nimprovement.\n\nSome comments about v13:\n\nThe note about get_byte reads:\n\n get_byte and set_byte number the first byte of a binary string as byte\n 0. get_bit and set_bit number bits from the right within each byte; for\n example bit 0 is the least significant bit of the first byte, and bit 15\n is the most significant bit of the second byte.\n\nThe two sentences start with a lower case letter, which looks strange to \nme. I'd suggest to put \"Functions\" at the beginning of the sentences:\n\n Functions get_byte and set_byte number the first byte of a binary string\n as byte 0. Functions get_bit and set_bit number bits from the right\n within each byte; for example bit 0 is the least significant bit of the\n first byte, and bit 15 is the most significant bit of the second byte.\n\nThe note about hash provides an example for getting the hex representation \nout of sha*. 
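Presumably something along these lines (paraphrasing from memory, not
the patch text):

```sql
-- hex text out of one of the bytea-returning hash functions
SELECT encode(sha256('hello world'::bytea), 'hex');
```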
I'd add an example to get the bytea representation from md5, \ne.g. \"DECODE(MD5('hello world'), 'hex')\"…\n\nMaybe the encode/decode in the note could be linked to the function\ndescription? Well, they are just after, maybe it is not very useful.\n\nThe \"Binary String Functions and Operators\" 9.5 section has only one \nsubsection, \"9.5.1\", which is about at two thirds of the page. This \nstructure looks weird. ISTM that a subsection is missing for the beginning \nof the page, or that the subsection should just be dropped, because it is \nsomehow redundant with the table title.\n\nThe \"9.4\" section has the same structural weirdness. Either remove the \nsubsection, or add some for the other parts?\n\n-- \nFabien.", "msg_date": "Thu, 16 Jan 2020 14:41:33 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "I just wanted to throw this in the archives; this doesn't need to affect\nyour patch.\n\nBecause of how the new tables look in the PDF docs, I thought it might\nbe a good time to research how to make each function-entry occupy two\nrows: one for prototype, return type and description, and the other for\nthe example and its result. Below is a first cut of how you'd implement\nthat idea -- see colspec/spanspec/spanname ... only the output looks\nalmost as bad (though the benefit is that it doesn't overwrite cell\ncontents anymore).\n\nI think we have two choices. One is to figure out how to make this\nwork (i.e. 
make it pretty; maybe by using alternate cell backgrounds, say\none white and one very light gray; maybe by using thinner/thicker\ninter-cell lines); the other is to forget tables altogether and format\nthe info in some completely different way.\n\n <table id=\"functions-binarystringconversions\">\n <title>Binary/String Conversion Functions</title>\n <tgroup cols=\"4\">\n <colspec colnum=\"1\" colname=\"col1\" colwidth=\"1*\" />\n <colspec colnum=\"2\" colname=\"col2\" colwidth=\"1*\" />\n <colspec colnum=\"3\" colname=\"col3\" colwidth=\"1*\" />\n <colspec colnum=\"4\" colname=\"col4\" colwidth=\"1*\" />\n <spanspec spanname=\"cols12\" namest=\"col1\" nameend=\"col2\" />\n <spanspec spanname=\"cols34\" namest=\"col3\" nameend=\"col4\" />\n\n <thead>\n <row>\n <entry spanname=\"cols12\">Function</entry>\n <entry>Return Type</entry>\n <entry>Description</entry>\n </row>\n <row>\n <entry spanname=\"cols12\">Example</entry>\n <entry spanname=\"cols34\">Result</entry>\n </row>\n </thead>\n\n <tbody>\n <row>\n <entry spanname=\"cols12\">\n <indexterm>\n <primary>convert_from</primary>\n </indexterm>\n <literal><function>convert_from(<parameter>bytes</parameter> <type>bytea</type>,\n <parameter>src_encoding</parameter> <type>name</type>)</function></literal>\n </entry>\n <entry><type>text</type></entry>\n <entry>\n Convert binary string to the database encoding. The original encoding\n is specified by <parameter>src_encoding</parameter>. The\n <parameter>bytes</parameter> must be valid in this encoding. 
See\n <xref linkend=\"conversion-names\"/> for available conversions.\n </entry>\n </row>\n <row>\n <entry spanname=\"cols12\"><literal>convert_from('text_in_utf8', 'UTF8')</literal></entry>\n <entry spanname=\"cols34\"><literal>text_in_utf8</literal> represented in the current database encoding</entry>\n </row>\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Jan 2020 15:44:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Thu, 16 Jan 2020 14:41:33 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> The \"Binary String Functions and Operators\" 9.5 section has only one \n> subsection, \"9.5.1\", which is about at two thirds of the page. This \n> structure looks weird. ISTM that a subsection is missing for the\n> beginning of the page, or that the subsection should just be dropped,\n> because it is somehow redundant with the table title.\n> \n> The \"9.4\" section has the same structural weirdness. Either remove\n> the subsection, or add some for the other parts?\n\nHi Fabien,\n\ncc-ing the folks who did the work on format(), who added a sub-section\n9.4.1. The whole thread for that is here:\nhttps://www.postgresql.org/message-id/flat/CAFj8pRBjMdAjybSZkczyez0x%3DFhC8WXvgR2wOYGuhrk1TUkraA%40mail.gmail.com\n\nI'm going to disagree with you on this. Yes, it's a little odd\nto have only a single sub-section but it does not really bother me.\n\nIf there's a big/important enough chunk of information to present I like\nseeing something in the table of contents. That's the \"big thing\"\nto my mind.\n\nI don't see a good way to get rid of \"9.4.1. format\". 
Adding\nanother sub-section heading above it just to have 2 seems\npointless.\n\nI really want the \"9.5.1 String to Binary and Binary to String\nConversion\" to show up in the table of contents. Because it\nis not at all obvious that \"9.5. Binary String Functions and\nOperators\" is the place to look for conversions between\nstring and binary. Tom thought that merely having a separate\ntable for string<->binary functions \"could be overkill\"\nso my impression right now is that having an entirely\nseparate section for these would be rejected.\n(See: https://www.postgresql.org/message-id/22540.1564501203@sss.pgh.pa.us)\n\nOtherwise an entirely separate section might be the right approach.\n\nThe following *.1 sections\nin the (devel version) documentation are \"single sub-sections\":\n\n(Er, this is too much but once I started I figured I'd finish.)\n\n 5.10. Inheritance\n 5.10.1. Caveats\n 9.4. String Functions and Operators\n 9.4.1. format\n 9.30. Statistics Information Functions\n 9.30.1. Inspecting MCV Lists\n 15.4. Parallel Safety\n 15.4.1. Parallel Labeling for Functions and Aggregates\n17. Installation from Source Code on Windows\n 17.1. Building with Visual C++ or the Microsoft Windows SDK\n 18.10. Secure TCP/IP Connections with GSSAPI Encryption\n 18.10.1. Basic Setup\n 30.2. Subscription\n 30.2.1. Replication Slot Management\n 30.5. Architecture\n 30.5.1. Initial Snapshot\n 37.13. User-Defined Types\n 37.13.1. TOAST Considerations\n 41. Procedural Languages\n 41.1. Installing Procedural Languages\n 50.5. Planner/Optimizer\n 50.5.1. Generating Possible Plans\n 52.3. SASL Authentication\n 52.3.1. SCRAM-SHA-256 Authentication\n57. Writing a Table Sampling Method\n 57.1. Sampling Method Support Functions\n 58.1. Creating Custom Scan Paths\n 58.1.1. Custom Scan Path Callbacks\n 58.2. Creating Custom Scan Plans\n 58.2.1. Custom Scan Plan Callbacks\n 58.3. Executing Custom Scans\n 58.3.1. Custom Scan Execution Callbacks\n 64.4. Implementation\n 64.4.1. 
GiST Buffering Build\n 67.1. Introduction\n 67.1.1. Index Maintenance\n 68.6. Database Page Layout\n 68.6.1. Table Row Layout\n G.2. Server Applications\n pg_standby — supports the creation of a PostgreSQL warm standby server\nI. The Source Code Repository\n I.1. Getting the Source via Git\nJ.4. Documentation Authoring\n J.4.1. Emacs\nJ.5. Style Guide\n J.5.1. Reference Pages\n\nI like that I can see these in the documentation.\n\nFYI, the format sub-section, 9.4.1, was first mentioned\nby Dean Rasheed in this email:\nhttps://www.postgresql.org/message-id/CAEZATCWLtRi-Vbh5k_2fYkOAPxas0wZh6a0brOohHtVOtHiddA%40mail.gmail.com\n\n\"I'm thinking perhaps\nformat() should now have its own separate sub-section in the manual,\nrather than trying to cram it's docs into a single table row.\"\n\nThere was never really any further discussion or objection to\nhaving a separate sub-section.\n\nAttaching a new patch to my next email, leaving off the\nfolks cc-ed regarding \"9.4.1 format\".\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Fri, 17 Jan 2020 12:21:47 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Thu, 16 Jan 2020 14:41:33 +0100 (CET)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Some comments about v13:\n> \n> The note about get_byte reads:\n> \n> get_byte and set_byte number the first byte of a binary string as\n> byte 0. get_bit and set_bit number bits from the right within each\n> byte; for example bit 0 is the least significant bit of the first\n> byte, and bit 15 is the most significant bit of the second byte.\n> \n> The two sentences starts with a lower case letter, which looks\n> strange to me. I'd suggest to put \"Functions\" at the beginning of the\n> sentences:\n> \n> Functions get_byte and set_byte number the first byte of a binary\n> string as byte 0. 
Functions get_bit and set_bit number bits from the\n> right within each byte; for example bit 0 is the least significant\n> bit of the first byte, and bit 15 is the most significant bit of the\n> second byte.\n\nExcellent suggestion, done.\n\n> The note about hash provides an example for getting the hex\n> representation out of sha*. I'd add an exemple to get the bytea\n> representation from md5, eg \"DECODE(MD5('hello world'), 'hex')\"…\n\nOk. Done.\n\n> Maybe the encode/decode in the note could be linked to the function\n> description? Well, they are just after, maybe it is not very useful.\n\nCan't hurt? Done.\n\nPatch attached: doc_base64_v14.patch\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein", "msg_date": "Fri, 17 Jan 2020 12:22:19 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Thu, 16 Jan 2020 15:44:44 -0300\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Because of how the new tables look in the PDF docs, I thought it might\n> be a good time to research how to make each function-entry occupy two\n> rows: one for prototype, return type and description, and the other\n> for the example and its result. \n\nAnother approach might be to fix/change the software that generates\nPDFs. Or whatever turns it into latex if that's the\nintermediate and really where the problem lies. (FWIW, I've had luck\nwith dblatex.)\n\n(Maybe best to take this thread to the pgsql-docs mailing list?)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Fri, 17 Jan 2020 12:46:59 -0600", "msg_from": "\"Karl O. 
Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Tom, you're marked as committer for this one in the commitfest app; are\nyou still intending to get it committed? If not, I can.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Jan 2020 18:22:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Tom, you're marked as committer for this one in the commitfest app; are\n> you still intending to get it committed? If not, I can.\n\nI've not been paying much attention to this thread, but I'll take\nanother look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jan 2020 17:37:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "I've pushed this patch with some further hacking.\n\nMost notably, I shoved the conversion-names table over to charset.sgml,\nas I'd mused about doing upthread. It no longer had any reason at all\nto be in section 9.4. We could have moved it down to 9.5, but I felt\nthat it would still be wrongly placed there. Since none of these\nfunctions actually take conversion names, it's not very relevant\ndocumentation for them --- you can generally assume that whatever\nconversion you want is available, and you'll usually be right.\n\nAnother point relevant to the discussion is that I dropped the <sect2>\nfor the conversion functions. It seemed to me that giving them their\nown table was enough --- the discussion about them isn't lengthy enough\nto justify a separate section, IMO. 
I also don't buy the argument that\nwe need a <sect2> to make these things visible in the table of contents.\nWe have the cross-reference from section 9.4, as well as a passel of\nindex entries, to help people who don't know where to look. (Once upon\na time we had a list of tables alongside the TOC; maybe that should be\nresurrected?)\n\nI took the liberty of doing some copy-editing on nearby function\ndescriptions, too, mostly to try to give them more uniform style.\nAnd there were some errors; notably, the patch added descriptions\nfor shaNNN(text), which are functions we do not have AFAICS.\n\nI share Alvaro's feeling that these tables could stand to be reformatted\nso that they're not such a mess when rendered in narrower formats. But\nthat seems like a task for a separate patch, especially since the problem\nis hardly confined to these two sections.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jan 2020 18:05:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" }, { "msg_contents": "On Sat, 18 Jan 2020 18:05:49 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> And there were some errors; notably, the patch added descriptions\n> for shaNNN(text), which are functions we do not have AFAICS.\n\nApologies for that, my mistake.\n\nThank you to Fabien and everybody who helped.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n", "msg_date": "Tue, 21 Jan 2020 19:03:38 -0600", "msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>", "msg_from_op": false, "msg_subject": "Re: Patch to document base64 encoding" } ]
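An illustrative aside on the hash note settled in the thread above: md5() returns its digest as hex text, while the sha2-family functions return bytea, with encode(..., 'hex') and decode(..., 'hex') converting between the two representations. The same relationship can be sketched outside the database using only Python's standard library; the helper names below are invented for this sketch and are not PostgreSQL code:

```python
import hashlib

def md5_text(data: bytes) -> str:
    # Like PostgreSQL's md5(): the digest rendered as lowercase hex *text*.
    return hashlib.md5(data).hexdigest()

def sha256_bytea(data: bytes) -> bytes:
    # Like sha256(): the digest as raw bytes (what PostgreSQL calls bytea).
    return hashlib.sha256(data).digest()

def encode_hex(raw: bytes) -> str:
    # Analogue of encode(raw, 'hex'): bytes -> hex text.
    return raw.hex()

def decode_hex(text: str) -> bytes:
    # Analogue of decode(text, 'hex'): hex text -> bytes, i.e. the
    # DECODE(MD5('hello world'), 'hex') example discussed in the thread.
    return bytes.fromhex(text)

# Round-tripping shows both digest styles carry the same information.
msg = b"hello world"
assert decode_hex(md5_text(msg)) == hashlib.md5(msg).digest()
assert encode_hex(sha256_bytea(msg)) == hashlib.sha256(msg).hexdigest()
```

Nothing here depends on a running server; it only mirrors the text-vs-bytea distinction that the documentation note describes.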
[ { "msg_contents": "Hi,\n\nIn the pluggable storage patch [1], one thing that I'm wondering about\nis how exactly to inherit the storage AM across partitions. I think\nthat's potentially worthy of a discussion with a wider audience than I'd\nget in that thread. It seems also related to the recent discussion in [2]\n\nConsider (excerpted from the tests):\n\nCREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n\nSET default_table_access_method = 'heap';\nCREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n\nSET default_table_access_method = 'heap2';\nCREATE TABLE tableam_parted_b_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('b');\n\nCREATE TABLE tableam_parted_c_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('c') USING heap;\nCREATE TABLE tableam_parted_d_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('d') USING heap2;\n\nIt seems pretty clear that tableam_parted_heap2, tableam_parted_d_heap2\nwould be stored via heap2, and tableam_parted_c_heap2 via heap.\n\nBut for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\nquite as clear. 
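An illustrative aside: the alternatives weighed in this message can be made concrete as a tiny resolution rule. The sketch below is not PostgreSQL internals and its names are invented; an explicit USING clause always wins, and the open question is only what an unadorned partition falls back to: the root's AM or the current default_table_access_method.

```python
def resolve_partition_am(explicit_am, parent_am, inherit_from_parent,
                         default_am="heap"):
    """Pick the access method for a new partition (illustration only).

    explicit_am         -- AM named in CREATE TABLE ... USING, or None
    parent_am           -- AM recorded for the partitioned parent, or None
    inherit_from_parent -- the policy choice under discussion
    default_am          -- stands in for default_table_access_method
    """
    if explicit_am is not None:
        return explicit_am   # an explicit USING clause always wins
    if inherit_from_parent and parent_am is not None:
        return parent_am     # option 1: inherit the AM from the root
    return default_am        # option 2: use the current default

# The ambiguous case above: tableam_parted_a_heap2, created with
# default_table_access_method = 'heap' under a 'heap2' parent.
assert resolve_partition_am(None, "heap2", True, "heap") == "heap2"
assert resolve_partition_am(None, "heap2", False, "heap") == "heap"
```

The unambiguous partitions behave the same under either policy, e.g. tableam_parted_c_heap2 resolves to heap in both cases because it named USING heap explicitly.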
I think it'd both be sensible for new partitions to\ninherit the AM from the root, but it'd also be sensible to use the\ncurrent default.\n\nOut of laziness (it's how it works rn) I'm inclined to go with using\nthe current default, but I'd be curious if others disagree.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20180703070645.wchpu5muyto5n647%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/201902041630.gpadougzab7v%40alvherre.pgsql\n\n", "msg_date": "Mon, 4 Mar 2019 15:47:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, Mar 5, 2019 at 5:17 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> In the pluggable storage patch [1], one thing that I'm wondering about\n> is how exactly to inherit the storage AM across partitions. I think\n> that's potentially worthy of a discussion with a wider audience than I'd\n> get in that thread. It seems also related to the recent discussion in [2]\n>\n> Consider (excerpted from the tests):\n>\n> CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n>\n> SET default_table_access_method = 'heap';\n> CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n>\n> SET default_table_access_method = 'heap2';\n> CREATE TABLE tableam_parted_b_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('b');\n>\n> CREATE TABLE tableam_parted_c_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('c') USING heap;\n> CREATE TABLE tableam_parted_d_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('d') USING heap2;\n>\n> It seems pretty clear that tableam_parted_heap2, tableam_parted_d_heap2\n> would be stored via heap2, and tableam_parted_c_heap2 via heap.\n>\n> But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> quite as clear. 
I think it'd both be sensible for new partitions to\n> inherit the AM from the root, but it'd also be sensible to use the\n> current default.\n>\n\nYeah, we can go either way.\n\n> Out of laziness (it's how it works rn) I'm inclined to to go with using\n> the current default, but I'd be curious if others disagree.\n>\n\nI think using the current default should be okay as that will be the\nbehavior for non-partitioned tables as well. However, if people have\ngood reasons to go other way, then that is fine too.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Tue, 5 Mar 2019 07:58:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On 2019/03/05 8:47, Andres Freund wrote:\n> Hi,\n> \n> In the pluggable storage patch [1], one thing that I'm wondering about\n> is how exactly to inherit the storage AM across partitions. I think\n> that's potentially worthy of a discussion with a wider audience than I'd\n> get in that thread. 
It seems also related to the recent discussion in [2]\n> \n> Consider (excerpted from the tests):\n> \n> CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n> \n> SET default_table_access_method = 'heap';\n> CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n> \n> SET default_table_access_method = 'heap2';\n> CREATE TABLE tableam_parted_b_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('b');\n> \n> CREATE TABLE tableam_parted_c_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('c') USING heap;\n> CREATE TABLE tableam_parted_d_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('d') USING heap2;\n> \n> It seems pretty clear that tableam_parted_heap2, tableam_parted_d_heap2\n> would be stored via heap2, and tableam_parted_c_heap2 via heap.\n> \n> But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> quite as clear. I think it'd both be sensible for new partitions to\n> inherit the AM from the root, but it'd also be sensible to use the\n> current default.\n\nGiven that many people expected this behavior to be the sane one in other\ncases that came up, +1 to go this way.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 5 Mar 2019 11:59:07 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, 5 Mar 2019 at 12:47, Andres Freund <andres@anarazel.de> wrote:\n> CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n>\n> SET default_table_access_method = 'heap';\n> CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n\n\n> But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> quite as clear. 
I think it'd both be sensible for new partitions to\n> inherit the AM from the root, but it'd also be sensible to use the\n> current default.\n\nI'd suggest it's made to work the same way as ca4103025dfe26 made\ntablespaces work. i.e. if they specify the storage type when creating\nthe partition, then always use that, unless they mention otherwise. If\nnothing was mentioned when they created the partition, then use\ndefault_table_access_method.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Tue, 5 Mar 2019 16:01:50 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, 5 Mar 2019 at 16:01, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I'd suggest it's made to work the same way as ca4103025dfe26 made\n> tablespaces work. i.e. if they specify the storage type when creating\n> the partition, then always use that, unless they mention otherwise. If\n> nothing was mentioned when they created the partition, then use\n> default_table_access_method.\n\n\"when creating the partition\" should read \"when creating the partitioned table\"\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Tue, 5 Mar 2019 16:03:46 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On 2019/03/05 11:59, Amit Langote wrote:\n> On 2019/03/05 8:47, Andres Freund wrote:\n>> Hi,\n>>\n>> In the pluggable storage patch [1], one thing that I'm wondering about\n>> is how exactly to inherit the storage AM across partitions. I think\n>> that's potentially worthy of a discussion with a wider audience than I'd\n>> get in that thread. 
It seems also related to the recent discussion in [2]\n>>\n>> Consider (excerpted from the tests):\n>>\n>> CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n>>\n>> SET default_table_access_method = 'heap';\n>> CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n>>\n>> SET default_table_access_method = 'heap2';\n>> CREATE TABLE tableam_parted_b_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('b');\n>>\n>> CREATE TABLE tableam_parted_c_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('c') USING heap;\n>> CREATE TABLE tableam_parted_d_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('d') USING heap2;\n>>\n>> It seems pretty clear that tableam_parted_heap2, tableam_parted_d_heap2\n>> would be stored via heap2, and tableam_parted_c_heap2 via heap.\n>>\n>> But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n>> quite as clear. I think it'd both be sensible for new partitions to\n>> inherit the AM from the root, but it'd also be sensible to use the\n>> current default.\n> \n> Given that many people expected this behavior to be the sane one in other\n> cases that came up, +1 to go this way.\n\nReading my own reply again, it may not be clear what I was +1'ing. I\nmeant to vote for the behavior that David described in his reply.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 5 Mar 2019 12:59:56 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, Mar 5, 2019 at 10:47 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> In the pluggable storage patch [1], one thing that I'm wondering about\n> is how exactly to inherit the storage AM across partitions. I think\n> that's potentially worthy of a discussion with a wider audience than I'd\n> get in that thread. 
It seems also related to the recent discussion in [2]\n>\n> Consider (excerpted from the tests):\n>\n> CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a)\n> USING heap2;\n>\n> SET default_table_access_method = 'heap';\n> CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR\n> VALUES IN ('a');\n>\n> SET default_table_access_method = 'heap2';\n> CREATE TABLE tableam_parted_b_heap2 PARTITION OF tableam_parted_heap2 FOR\n> VALUES IN ('b');\n>\n> CREATE TABLE tableam_parted_c_heap2 PARTITION OF tableam_parted_heap2 FOR\n> VALUES IN ('c') USING heap;\n> CREATE TABLE tableam_parted_d_heap2 PARTITION OF tableam_parted_heap2 FOR\n> VALUES IN ('d') USING heap2;\n>\n> It seems pretty clear that tableam_parted_heap2, tableam_parted_d_heap2\n> would be stored via heap2, and tableam_parted_c_heap2 via heap.\n>\n> But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> quite as clear. I think it'd both be sensible for new partitions to\n> inherit the AM from the root, but it'd also be sensible to use the\n> current default.\n>\n> Out of laziness (it's how it works rn) I'm inclined to to go with using\n> the current default, but I'd be curious if others disagree.\n>\n\n\nAs other said that, I also agree to go with default_table_access_method to\nbe\npreferred if not explicitly specified the access method during the table\ncreation.\n\nThis discussion raises a point that, in case if the user wants to change the\naccess method of a table later once it is created, currently there is no\noption.\ncurrently there are no other alternative table access methods that are\navailable\nfor the user to switch, but definitely it may be required later.\n\nI will provide a patch to alter the access method of a table for v13.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia", "msg_date": "Tue, 5 Mar 2019 15:49:51 +1100", "msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "Hi,\n\nOn 2019-03-05 16:01:50 +1300, David Rowley wrote:\n> On Tue, 5 Mar 2019 at 12:47, Andres Freund <andres@anarazel.de> wrote:\n> > CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n> >\n> > SET default_table_access_method = 'heap';\n> > CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n> \n> \n> > But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> > quite as clear. I think it'd both be sensible for new partitions to\n> > inherit the AM from the root, but it'd also be sensible to use the\n> > current default.\n> \n> I'd suggest it's made to work the same way as ca4103025dfe26 made\n> tablespaces work.\n\nHm, is that actually correct? 
Because as far as I can tell that doesn't\nhave the necessary pg_dump code to make this behaviour persistent:\n\nCREATE TABLESPACE frak LOCATION '/tmp/frak';\nCREATE TABLE test_tablespace (a text, b int) PARTITION BY list (a) TABLESPACE frak ;\nCREATE TABLE test_tablespace_1 PARTITION OF test_tablespace FOR VALUES in ('a');\nCREATE TABLE test_tablespace_2 PARTITION OF test_tablespace FOR VALUES in ('b') TABLESPACE pg_default;\nCREATE TABLE test_tablespace_3 PARTITION OF test_tablespace FOR VALUES in ('c') TABLESPACE frak;\n\nSELECT relname, relkind, reltablespace FROM pg_class WHERE relname LIKE 'test_tablespace%' ORDER BY 1;\n┌───────────────────┬─────────┬───────────────┐\n│ relname │ relkind │ reltablespace │\n├───────────────────┼─────────┼───────────────┤\n│ test_tablespace │ p │ 16384 │\n│ test_tablespace_1 │ r │ 16384 │\n│ test_tablespace_2 │ r │ 0 │\n│ test_tablespace_3 │ r │ 16384 │\n└───────────────────┴─────────┴───────────────┘\n(4 rows)\n\nbut a dump outputs (abbreviated)\n\nSET default_tablespace = frak;\nCREATE TABLE public.test_tablespace (\n a text,\n b integer\n)\nPARTITION BY LIST (a);\nCREATE TABLE public.test_tablespace_1 PARTITION OF public.test_tablespace\nFOR VALUES IN ('a');\nSET default_tablespace = '';\nCREATE TABLE public.test_tablespace_2 PARTITION OF public.test_tablespace\nFOR VALUES IN ('b');\nSET default_tablespace = frak;\nCREATE TABLE public.test_tablespace_3 PARTITION OF public.test_tablespace\nFOR VALUES IN ('c');\n\nwhich restores to:\n\npostgres[32125][1]=# SELECT relname, relkind, reltablespace FROM pg_class WHERE relname LIKE 'test_tablespace%' ORDER BY 1;\n┌───────────────────┬─────────┬───────────────┐\n│ relname │ relkind │ reltablespace │\n├───────────────────┼─────────┼───────────────┤\n│ test_tablespace │ p │ 16384 │\n│ test_tablespace_1 │ r │ 16384 │\n│ test_tablespace_2 │ r │ 16384 │\n│ test_tablespace_3 │ r │ 16384 │\n└───────────────────┴─────────┴───────────────┘\n(4 rows)\n\nbecause public.test_tablespace_2 
assumes it ought to inherit the\ntablespace from the partitioned table.\n\n\nI also find it far from clear that:\n <listitem>\n <para>\n The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n of the tablespace in which the new table is to be created.\n If not specified,\n <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. For\n partitioned tables, since no storage is required for the table itself,\n the tablespace specified here only serves to mark the default tablespace\n for any newly created partitions when no other tablespace is explicitly\n specified.\n </para>\n </listitem>\nis handled correctly. The above says that the *specified* tablespaces -\nwhich seems to exclude the default tablespace - is what's going to\ndetermine what partitions use as their default tablespace. But in fact\nthat's not true, the partitioned table's pg_class.reltablespace is set to\nwhat default_tablespace was at the time of the creation.\n\n\n> i.e. if they specify the storage type when creating\n> the partition, then always use that, unless they mention otherwise. If\n> nothing was mentioned when they created the partition, then use\n> default_table_access_method.\n\nHm. That'd be doable, but given the above ambiguities I'm not convinced\nthat's the best approach. As far as I can see that'd require:\n\n1) At relation creation, for partitioned tables only, do not take\n default_table_access_method into account.\n\n2) At partition creation, if the AM is not specified and if the\n partitioned table's relam is 0, use the default_table_access_method.\n\n3) At pg_dump, for partitioned tables only, explicitly emit a USING\n ... 
rather than use the method of manipulating default_table_access_method.\n\nAs far as I can tell, the necessary steps are also what'd need to be\ndone to actually implement the described behaviour for TABLESPACE (with\ns/default_table_access_method/default_tablespace/ and s/USING/TABLESPACE\nof course).\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Mon, 4 Mar 2019 22:08:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On 2019-03-04 22:08:04 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-05 16:01:50 +1300, David Rowley wrote:\n> > On Tue, 5 Mar 2019 at 12:47, Andres Freund <andres@anarazel.de> wrote:\n> > > CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list (a) USING heap2;\n> > >\n> > > SET default_table_access_method = 'heap';\n> > > CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2 FOR VALUES IN ('a');\n> > \n> > \n> > > But for tableam_parted_a_heap2 tableam_parted_b_heap2 the answer isn't\n> > > quite as clear. I think it'd both be sensible for new partitions to\n> > > inherit the AM from the root, but it'd also be sensible to use the\n> > > current default.\n> > \n> > I'd suggest it's made to work the same way as ca4103025dfe26 made\n> > tablespaces work.\n> \n> Hm, is that actually correct? 
Because as far as I can tell that doesn't\n> have the necessary pg_dump code to make this behaviour persistent:\n> \n> CREATE TABLESPACE frak LOCATION '/tmp/frak';\n> CREATE TABLE test_tablespace (a text, b int) PARTITION BY list (a) TABLESPACE frak ;\n> CREATE TABLE test_tablespace_1 PARTITION OF test_tablespace FOR VALUES in ('a');\n> CREATE TABLE test_tablespace_2 PARTITION OF test_tablespace FOR VALUES in ('b') TABLESPACE pg_default;\n> CREATE TABLE test_tablespace_3 PARTITION OF test_tablespace FOR VALUES in ('c') TABLESPACE frak;\n> \n> SELECT relname, relkind, reltablespace FROM pg_class WHERE relname LIKE 'test_tablespace%' ORDER BY 1;\n> ┌───────────────────┬─────────┬───────────────┐\n> │ relname │ relkind │ reltablespace │\n> ├───────────────────┼─────────┼───────────────┤\n> │ test_tablespace │ p │ 16384 │\n> │ test_tablespace_1 │ r │ 16384 │\n> │ test_tablespace_2 │ r │ 0 │\n> │ test_tablespace_3 │ r │ 16384 │\n> └───────────────────┴─────────┴───────────────┘\n> (4 rows)\n> \n> but a dump outputs (abbreviated)\n> \n> SET default_tablespace = frak;\n> CREATE TABLE public.test_tablespace (\n> a text,\n> b integer\n> )\n> PARTITION BY LIST (a);\n> CREATE TABLE public.test_tablespace_1 PARTITION OF public.test_tablespace\n> FOR VALUES IN ('a');\n> SET default_tablespace = '';\n> CREATE TABLE public.test_tablespace_2 PARTITION OF public.test_tablespace\n> FOR VALUES IN ('b');\n> SET default_tablespace = frak;\n> CREATE TABLE public.test_tablespace_3 PARTITION OF public.test_tablespace\n> FOR VALUES IN ('c');\n> \n> which restores to:\n> \n> postgres[32125][1]=# SELECT relname, relkind, reltablespace FROM pg_class WHERE relname LIKE 'test_tablespace%' ORDER BY 1;\n> ┌───────────────────┬─────────┬───────────────┐\n> │ relname │ relkind │ reltablespace │\n> ├───────────────────┼─────────┼───────────────┤\n> │ test_tablespace │ p │ 16384 │\n> │ test_tablespace_1 │ r │ 16384 │\n> │ test_tablespace_2 │ r │ 16384 │\n> │ test_tablespace_3 │ r │ 16384 │\n> 
└───────────────────┴─────────┴───────────────┘\n> (4 rows)\n> \n> because public.test_tablespace_2 assumes it's ought to inherit the\n> tablespace from the partitioned table.\n> \n> \n> I also find it far from clear that:\n> <listitem>\n> <para>\n> The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n> of the tablespace in which the new table is to be created.\n> If not specified,\n> <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n> <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. For\n> partitioned tables, since no storage is required for the table itself,\n> the tablespace specified here only serves to mark the default tablespace\n> for any newly created partitions when no other tablespace is explicitly\n> specified.\n> </para>\n> </listitem>\n> is handled correctly. The above says that the *specified* tablespaces -\n> which seems to exclude the default tablespace - is what's going to\n> determine what partitions use as their default tablespace. But in fact\n> that's not true, the partitioned table's pg_class.retablespace is set to\n> what default_tablespaces was at the time of the creation.\n> \n> \n> > i.e. if they specify the storage type when creating\n> > the partition, then always use that, unless they mention otherwise. If\n> > nothing was mentioned when they created the partition, then use\n> > default_table_access_method.\n> \n> Hm. That'd be doable, but given the above ambiguities I'm not convinced\n> that's the best approach. As far as I can see that'd require:\n> \n> 1) At relation creation, for partitioned tables only, do not take\n> default_table_access_method into account.\n> \n> 2) At partition creation, if the AM is not specified and if the\n> partitioned table's relam is 0, use the default_table_access_method.\n> \n> 3) At pg_dump, for partitioned tables only, explicitly emit a USING\n> ... 
rather than use the method of manipulating default_table_access_method.\n> \n> As far as I can tell, the necessary steps are also what'd need to be\n> done to actually implement the described behaviour for TABLESPACE (with\n> s/default_table_access_method/default_tablespace/ and s/USING/TABLESPACE\n> of course).\n\nBased on this mail I'm currently planning to simply forbid specifying\nUSING for partitioned tables. Then we can argue about this later.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Tue, 5 Mar 2019 09:59:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, Mar 5, 2019 at 12:59 PM Andres Freund <andres@anarazel.de> wrote:\n> Based on this mail I'm currently planning to simply forbid specifying\n> USING for partitioned tables. Then we can argue about this later.\n\n+1. I actually think that might be the right thing in the long-term,\nbut it undeniably avoids committing to any particular decision in the\nshort term, which seems good.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 5 Mar 2019 13:19:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Tue, 5 Mar 2019 at 19:08, Andres Freund <andres@anarazel.de> wrote:\r\n>\r\n> On 2019-03-05 16:01:50 +1300, David Rowley wrote:\r\n> > I'd suggest it's made to work the same way as ca4103025dfe26 made\r\n> > tablespaces work.\r\n>\r\n> Hm, is that actually correct? 
Because as far as I can tell that doesn't\r\n> have the necessary pg_dump code to make this behaviour persistent:\r\n>\r\n> CREATE TABLESPACE frak LOCATION '/tmp/frak';\r\n> CREATE TABLE test_tablespace (a text, b int) PARTITION BY list (a) TABLESPACE frak ;\r\n> CREATE TABLE test_tablespace_1 PARTITION OF test_tablespace FOR VALUES in ('a');\r\n> CREATE TABLE test_tablespace_2 PARTITION OF test_tablespace FOR VALUES in ('b') TABLESPACE pg_default;\r\n> CREATE TABLE test_tablespace_3 PARTITION OF test_tablespace FOR VALUES in ('c') TABLESPACE frak;\r\n>\r\n> SELECT relname, relkind, reltablespace FROM pg_class WHERE relname LIKE 'test_tablespace%' ORDER BY 1;\r\n> ┌───────────────────┬─────────┬───────────────┐\r\n> │ relname │ relkind │ reltablespace │\r\n> ├───────────────────┼─────────┼───────────────┤\r\n> │ test_tablespace │ p │ 16384 │\r\n> │ test_tablespace_1 │ r │ 16384 │\r\n> │ test_tablespace_2 │ r │ 0 │\r\n> │ test_tablespace_3 │ r │ 16384 │\r\n> └───────────────────┴─────────┴───────────────┘\r\n\r\n[pg_dump/pg_restore]\r\n\r\n> ┌───────────────────┬─────────┬───────────────┐\r\n> │ relname │ relkind │ reltablespace │\r\n> ├───────────────────┼─────────┼───────────────┤\r\n> │ test_tablespace │ p │ 16384 │\r\n> │ test_tablespace_1 │ r │ 16384 │\r\n> │ test_tablespace_2 │ r │ 16384 │\r\n> │ test_tablespace_3 │ r │ 16384 │\r\n> └───────────────────┴─────────┴───────────────┘\r\n\r\nfrak... that's a bit busted. 
I can't instantly think of a fix, but I\r\nsee the same problem does not seem to exist for partition indexes, so\r\nthat's a relief since that's already in PG11.\r\n\r\nI'll take this up on another thread once I have something good to report.\r\n\r\n-- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n", "msg_date": "Wed, 6 Mar 2019 17:37:22 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" }, { "msg_contents": "On Wed, 6 Mar 2019 at 07:19, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Mar 5, 2019 at 12:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > Based on this mail I'm currently planning to simply forbid specifying\n> > USING for partitioned tables. Then we can argue about this later.\n>\n> +1. I actually think that might be the right thing in the long-term,\n> but it undeniably avoids committing to any particular decision in the\n> short term, which seems good.\n\nI've not really been following the storage am patch, but given that a\npartition's TABLESPACE is inherited from its partitioned table, I'd\nfind it pretty surprising that USING wouldn't do the same. They're\nboth storage options, so I think having them behave differently is\ngoing to cause some confusion.\n\nI think the patch I just submitted to [1] should make it pretty easy\nto make this work the same as TABLESPACE does.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_iyBpAuYBPQv_GGeME%3Dg9Rpr8yWjCaYV4E685yQ1uzkw%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 00:31:09 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Inheriting table AMs for partitioned tables" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15668\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: Unsupported/Unknown\nOperating system: Ubuntu 18.04\nDescription: \n\nThe following query:\r\nCREATE TABLE range_parted (a int) PARTITION BY RANGE (a);\r\nCREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n(unknown.unknown) TO (1);\r\n\r\ncrashes server (on the master branch) with the stack trace:\r\nCore was generated by `postgres: law regression [local] CREATE TABLE \n '.\r\nProgram terminated with signal SIGSEGV, Segmentation fault.\r\n#0 0x0000560ab19ea0bc in transformPartitionRangeBounds\n(pstate=pstate@entry=0x560ab3290da8, blist=<optimized out>, \r\n parent=parent@entry=0x7f7846a6bea8) at parse_utilcmd.c:3754\r\n3754 if (strcmp(\"minvalue\", cname) == 0)\r\n(gdb) bt\r\n#0 0x0000560ab19ea0bc in transformPartitionRangeBounds\n(pstate=pstate@entry=0x560ab3290da8, blist=<optimized out>, \r\n parent=parent@entry=0x7f7846a6bea8) at parse_utilcmd.c:3754\r\n#1 0x0000560ab19eea30 in transformPartitionBound\n(pstate=pstate@entry=0x560ab3290da8, \r\n parent=parent@entry=0x7f7846a6bea8, spec=0x560ab31fd448) at\nparse_utilcmd.c:3706\r\n#2 0x0000560ab1a4436f in DefineRelation (stmt=stmt@entry=0x560ab3295b30,\nrelkind=relkind@entry=114 'r', ownerId=10, \r\n ownerId@entry=0, typaddress=typaddress@entry=0x0, \r\n queryString=queryString@entry=0x560ab31da3b0 \"CREATE TABLE rp_part\nPARTITION OF range_parted FOR VALUES FROM (unknown.unknown) TO (1);\") at\ntablecmds.c:881\r\n#3 0x0000560ab1bba086 in ProcessUtilitySlow\n(pstate=pstate@entry=0x560ab3295a20, pstmt=pstmt@entry=0x560ab31db4b0, \r\n queryString=queryString@entry=0x560ab31da3b0 \"CREATE TABLE rp_part\nPARTITION OF range_parted FOR VALUES FROM (unknown.unknown) TO (1);\",\ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, \r\n queryEnv=queryEnv@entry=0x0, completionTag=0x7ffcc3c17120 
\"\",\ndest=0x560ab31db590) at utility.c:1003\r\n#4 0x0000560ab1bb8b60 in standard_ProcessUtility (pstmt=0x560ab31db4b0, \r\n queryString=0x560ab31da3b0 \"CREATE TABLE rp_part PARTITION OF\nrange_parted FOR VALUES FROM (unknown.unknown) TO (1);\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x560ab31db590, completionTag=0x7ffcc3c17120 \"\")\r\n at utility.c:923\r\n#5 0x0000560ab1bb6022 in PortalRunUtility (portal=0x560ab32415e0,\npstmt=0x560ab31db4b0, isTopLevel=<optimized out>, \r\n setHoldSnapshot=<optimized out>, dest=<optimized out>,\ncompletionTag=0x7ffcc3c17120 \"\") at pquery.c:1175\r\n#6 0x0000560ab1bb6ae0 in PortalRunMulti\n(portal=portal@entry=0x560ab32415e0, isTopLevel=isTopLevel@entry=true, \r\n setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x560ab31db590, altdest=altdest@entry=0x560ab31db590, \r\n completionTag=completionTag@entry=0x7ffcc3c17120 \"\") at pquery.c:1328\r\n#7 0x0000560ab1bb76da in PortalRun (portal=portal@entry=0x560ab32415e0,\ncount=count@entry=9223372036854775807, \r\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x560ab31db590, \r\n altdest=altdest@entry=0x560ab31db590, completionTag=0x7ffcc3c17120 \"\")\nat pquery.c:796\r\n#8 0x0000560ab1bb3582 in exec_simple_query (\r\n query_string=0x560ab31da3b0 \"CREATE TABLE rp_part PARTITION OF\nrange_parted FOR VALUES FROM (unknown.unknown) TO (1);\") at\npostgres.c:1215\r\n#9 0x0000560ab1bb54ee in PostgresMain (argc=<optimized out>,\nargv=argv@entry=0x560ab32056a0, dbname=<optimized out>, \r\n username=<optimized out>) at postgres.c:4256\r\n#10 0x0000560ab1b42e00 in BackendRun (port=0x560ab31fdde0,\nport=0x560ab31fdde0) at postmaster.c:4382\r\n#11 BackendStartup (port=0x560ab31fdde0) at postmaster.c:4073\r\n#12 ServerLoop () at postmaster.c:1703\r\n#13 0x0000560ab1b43eb9 in PostmasterMain (argc=3, argv=0x560ab31d4a10) at\npostmaster.c:1376\r\n#14 0x0000560ab18e33a1 in main (argc=3, argv=0x560ab31d4a10) at main.c:228", 
"msg_date": "Tue, 05 Mar 2019 09:05:55 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\n(cc'ing -hackers and Peter E)\n\nOn Tue, Mar 5, 2019 at 8:02 PM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n>\n> The following bug has been logged on the website:\n>\n> Bug reference: 15668\n> Logged by: Alexander Lakhin\n> Email address: exclusion@gmail.com\n> PostgreSQL version: Unsupported/Unknown\n> Operating system: Ubuntu 18.04\n> Description:\n>\n> The following query:\n> CREATE TABLE range_parted (a int) PARTITION BY RANGE (a);\n> CREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n> (unknown.unknown) TO (1);\n>\n> crashes server (on the master branch) with the stack trace:\n> Core was generated by `postgres: law regression [local] CREATE TABLE\n> '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x0000560ab19ea0bc in transformPartitionRangeBounds\n> (pstate=pstate@entry=0x560ab3290da8, blist=<optimized out>,\n> parent=parent@entry=0x7f7846a6bea8) at parse_utilcmd.c:3754\n> 3754 if (strcmp(\"minvalue\", cname) == 0)\n\nThanks for the report. Seems to be a bug of the following commit in\nHEAD, of which I was one of the authors:\n\ncommit 7c079d7417a8f2d4bf5144732e2f85117db9214f\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Fri Jan 25 11:27:59 2019 +0100\n\n Allow generalized expression syntax for partition bounds\n\nThat seems to be caused by some over-optimistic coding in\ntransformPartitionRangeBounds. 
Following will crash too.\n\nCREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n(a.a.a.a.a.a.a.a.a.a.a.a) TO (1);\n\nIf I try the list partitioning syntax, it doesn't crash but gives the\nfollowing error:\n\ncreate table lparted1 partition of lparted for values in (a.a.a.a.a.a);\nERROR: improper qualified name (too many dotted names): a.a.a.a.a.a\nLINE 1: ...able lparted1 partition of lparted for values in (a.a.a.a.a....\n ^\nMaybe we should error out as follows in\ntransformPartitionRangeBounds(), although that means we'll get\ndifferent error message than when using list partitioning syntax:\n\n@@ -3749,6 +3749,12 @@ transformPartitionRangeBounds(ParseState\n*pstate, List *blist,\n if (list_length(cref->fields) == 1 &&\n IsA(linitial(cref->fields), String))\n cname = strVal(linitial(cref->fields));\n+ else\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"invalid expression for range bound\"),\n+ parser_errposition(pstate,\n+ exprLocation((Node *) expr))));\n\n Assert(cname != NULL);\n if (strcmp(\"minvalue\", cname) == 0)\n\nThanks,\nAmit\n\n", "msg_date": "Tue, 5 Mar 2019 23:04:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Tue, Mar 05, 2019 at 11:04:17PM +0900, Amit Langote wrote:\n> Maybe we should error out as follows in\n> transformPartitionRangeBounds(), although that means we'll get\n> different error message than when using list partitioning syntax:\n\nHm. I don't think that this is a good idea as you could lose some\ninformation for the expression transformation handling, and the error\nhandling becomes inconsistent depending on the partition bound\nstrategy. 
It seems to me that if we cannot extract any special value\nfrom the ColumnRef expression generated, then we ought to let\ntransformPartitionBoundValue() and particularly transformExprRecurse()\ndo the analysis work and complain if needed:\n=# CREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n(unknown.unknown) TO (1);\nERROR: 42P01: missing FROM-clause entry for table \"unknown\"\nLINE 1: ...p_part PARTITION OF range_parted FOR VALUES FROM\n(unknown.un...\n=# CREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n(a.a.a.a.a.a.a.a.a.a.a.a) TO (1);\nERROR: 42601: improper qualified name (too many dotted names):\na.a.a.a.a.a.a.a.a.a.a.a\nLINE 1: ...p_part PARTITION OF range_parted FOR VALUES FROM\n(a.a.a.a.a....\n\nWhat about something like the attached instead? Minus the test\ncases which should go to create_table.sql of course.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 15:48:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\nOn 2019/03/06 15:48, Michael Paquier wrote:\n> On Tue, Mar 05, 2019 at 11:04:17PM +0900, Amit Langote wrote:\n>> Maybe we should error out as follows in\n>> transformPartitionRangeBounds(), although that means we'll get\n>> different error message than when using list partitioning syntax:\n> \n> Hm. I don't think that this is a good idea as you could lose some\n> information for the expression transformation handling, and the error\n> handling becomes inconsistent depending on the partition bound\n> strategy. 
It seems to me that if we cannot extract any special value\n> from the ColumnRef expression generated, then we ought to let\n> transformPartitionBoundValue() and particularly transformExprRecurse()\n> do the analysis work and complain if needed:\n> =# CREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n> (unknown.unknown) TO (1);\n> ERROR: 42P01: missing FROM-clause entry for table \"unknown\"\n> LINE 1: ...p_part PARTITION OF range_parted FOR VALUES FROM\n> (unknown.un...\n> =# CREATE TABLE rp_part PARTITION OF range_parted FOR VALUES FROM\n> (a.a.a.a.a.a.a.a.a.a.a.a) TO (1);\n> ERROR: 42601: improper qualified name (too many dotted names):\n> a.a.a.a.a.a.a.a.a.a.a.a\n> LINE 1: ...p_part PARTITION OF range_parted FOR VALUES FROM\n> (a.a.a.a.a....\n> \n> What about something like the attached instead? Minus the test\n> cases which should go to create_table.sql of course.\n\nThanks for looking at this. Your patch seems better, because it allows us\nto keep the error message consistent with the message one would get with\nlist-partitioned syntax.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 6 Mar 2019 16:00:42 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Wed, Mar 06, 2019 at 04:00:42PM +0900, Amit Langote wrote:\n> Thanks for looking at this. Your patch seems better, because it allows us\n> to keep the error message consistent with the message one would get with\n> list-partitioned syntax.\n\nThanks for confirming. I think that it would be nice as well to add\nmore test coverage for such error patterns with all the strategies.\nIt would be good to fix that first, so I can take care of that.\n\nNow I don't really find the error \"missing FROM-clause entry for\ntable\" quite convincing when this is applied to a partition bound when\nusing a column defined in the relation. 
Adding more error classes in\nthe set of CRERR_NO_RTE would perhaps be nice, still I am not sure how\nelegant it could be made when looking at expressions for partition\nbounds.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 17:27:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\nOn 2019/03/06 17:27, Michael Paquier wrote:\n> On Wed, Mar 06, 2019 at 04:00:42PM +0900, Amit Langote wrote:\n>> Thanks for looking at this. Your patch seems better, because it allows us\n>> to keep the error message consistent with the message one would get with\n>> list-partitioned syntax.\n> \n> Thanks for confirming. I think that it would be nice as well to add\n> more test coverage for such error patterns with all the strategies.\n> It would be good to fix that first, so I can take care of that.\n\nI've added some tests to your patch. Also improved the comments a bit.\n\nI noticed another issue with the code -- it's using strcmp() to compare\nspecified string against \"minvalue\" and \"maxvalue\", which causes the\nfollowing silly error:\n\ncreate table q2 partition of q for values from (\"MINVALUE\") to (maxvalue);\nERROR: column \"MINVALUE\" does not exist\nLINE 1: create table q2 partition of q for values from (\"MINVALUE\") ...\n\nIt should be using pg_strncasecmp().\n\n> Now I don't really find the error \"missing FROM-clause entry for\n> table\" quite convincing when this is applied to a partition bound when\n> using a column defined in the relation. Adding more error classes in\n> the set of CRERR_NO_RTE would perhaps be nice, still I am not sure how\n> elegant it could be made when looking at expressions for partition\n> bounds.\n\nNote that this is not just a problem for partition bounds. 
You can see it\nwith default expressions too.\n\ncreate table foo (a int default (aa.a));\nERROR: missing FROM-clause entry for table \"aa\"\nLINE 1: create table foo (a int default (aa.a));\n\ncreate table foo (a int default (a.a.aa.a.a.a.a.aa));\nERROR: improper qualified name (too many dotted names): a.a.aa.a.a.a.a.aa\nLINE 1: create table foo (a int default (a.a.aa.a.a.a.a.aa));\n\nWe could make the error message more meaningful depending on the context,\nbut maybe it'd be better to pursue it as a separate project.\n\nThanks,\nAmit", "msg_date": "Mon, 11 Mar 2019 15:44:39 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Mon, Mar 11, 2019 at 03:44:39PM +0900, Amit Langote wrote:\n> We could make the error message more meaningful depending on the context,\n> but maybe it'd be better to pursue it as a separate project.\n\nYeah, I noticed that stuff when working on it this 
:(\n--\nMichael", "msg_date": "Mon, 11 Mar 2019 16:21:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Mon, Mar 11, 2019 at 2:45 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> I noticed another issue with the code -- it's using strcmp() to compare\n> specified string against \"minvalue\" and \"maxvalue\", which causes the\n> following silly error:\n>\n> create table q2 partition of q for values from (\"MINVALUE\") to (maxvalue);\n> ERROR: column \"MINVALUE\" does not exist\n> LINE 1: create table q2 partition of q for values from (\"MINVALUE\") ...\n>\n> It should be using pg_strncasecmp().\n\nUh, why? Generally, an unquoted keyword is equivalent to a quoted\nlowercase version of that same keyword, not anything else. Like\nCREATE TABLE \"foo\" = CREATE TABLE FOO <> CREATE TABLE \"FOO\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 11:13:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Mar 11, 2019 at 2:45 AM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> I noticed another issue with the code -- it's using strcmp() to compare\n>> specified string against \"minvalue\" and \"maxvalue\", which causes the\n>> following silly error:\n>> \n>> create table q2 partition of q for values from (\"MINVALUE\") to (maxvalue);\n>> ERROR: column \"MINVALUE\" does not exist\n>> LINE 1: create table q2 partition of q for values from (\"MINVALUE\") ...\n>> \n>> It should be using pg_strncasecmp().\n\n> Uh, why? Generally, an unquoted keyword is equivalent to a quoted\n> lowercase version of that same keyword, not anything else. 
Like\n> CREATE TABLE \"foo\" = CREATE TABLE FOO <> CREATE TABLE \"FOO\".\n\nYeah. The behavior shown above is entirely correct, and accepting the\nstatement would be flat out wrong; it would cause trouble if somebody\ncreated a table containing multiple case-variations of MINVALUE.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 12:35:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On 2019/03/13 1:35, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Mar 11, 2019 at 2:45 AM Amit Langote\n>> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>> I noticed another issue with the code -- it's using strcmp() to compare\n>>> specified string against \"minvalue\" and \"maxvalue\", which causes the\n>>> following silly error:\n>>>\n>>> create table q2 partition of q for values from (\"MINVALUE\") to (maxvalue);\n>>> ERROR: column \"MINVALUE\" does not exist\n>>> LINE 1: create table q2 partition of q for values from (\"MINVALUE\") ...\n>>>\n>>> It should be using pg_strncasecmp().\n> \n>> Uh, why? Generally, an unquoted keyword is equivalent to a quoted\n>> lowercase version of that same keyword, not anything else. Like\n>> CREATE TABLE \"foo\" = CREATE TABLE FOO <> CREATE TABLE \"FOO\".\n\nOK. Perhaps, I reacted too strongly to encountering the following\nbehavior with HEAD:\n\ncreate table p1 partition of p for values from (\"minValue\") to (1);\nERROR: column \"minValue\" does not exist\n\nbut,\n\ncreate table p1 partition of p for values from (\"minvalue\") to (1);\n\\d p1\n Table \"public.p1\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\nPartition of: p FOR VALUES FROM (MINVALUE) TO (1)\n\nBut as you and Tom have pointed out, maybe it's normal.\n\n> Yeah. 
The behavior shown above is entirely correct, and accepting the\n> statement would be flat out wrong; it would cause trouble if somebody\n> created a table containing multiple case-variations of MINVALUE.\n\nSorry, I didn't understand this last part. Different case-variations will\nall be interpreted as a minvalue (negative infinity) range bound and\nflagged if the resulting range bound constraint would be invalid.\n\nDid you mean something like the following:\n\ncreate table p1 partition of ... from (\"minValue\") to (\"MINVALUE\");\n\n\nwhich using pg_strncasecmp() comparisons gives:\n\ncreate table p1 partition of p for values from (\"minValue\") to (\"MINVALUE\");\nERROR: empty range bound specified for partition \"p1\"\nDETAIL: Specified lower bound (MINVALUE) is greater than or equal to\nupper bound (MINVALUE).\n\nwhich is same as the behavior with unquoted keyword syntax:\n\ncreate table p1 partition of p for values from (minValue) to (MINVALUE);\nERROR: empty range bound specified for partition \"p1\"\nDETAIL: Specified lower bound (MINVALUE) is greater than or equal to\nupper bound (MINVALUE).\n\nwhereas quoted identifier syntax on HEAD gives:\n\ncreate table p1 partition of p for values from (\"minValue\") to (\"MINVALUE\");\nERROR: column \"minValue\" does not exist\nLINE 1: create table p1 partition of p for values from (\"minValue\") ...\n\nHowever, as you guys said, HEAD is behaving sanely.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 13 Mar 2019 13:08:01 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On 2019/03/11 16:21, Michael Paquier wrote:\n> On Mon, Mar 11, 2019 at 03:44:39PM +0900, Amit Langote wrote:\n>> We could make the error message more meaningful depending on the context,\n>> but maybe it'd better be pursue it as a separate project.\n> \n> Yeah, I noticed that stuff when working on it this 
afternoon. The\n> error message does not completely feel right even in your produced\n> tests. Out of curiosity I have been working on this thing myself,\n> and it is possible to have a context-related message. Please see\n> attached, that's in my opinion less confusing, and of course\n> debatable. Still this approach does not feel completely right either\n> as that means hijacking the code path which generates a generic\n> message for missing RTEs. :(\n\n@@ -3259,6 +3259,9 @@ errorMissingRTE(ParseState *pstate, RangeVar\n+\t *\n+\t * Also, in the context of parsing a partition bound, produce a more\n+\t * helpful error message.\n \t */\n \tif (rte && rte->alias &&\n \t\tstrcmp(rte->eref->aliasname, relation->relname) != 0 &&\n\n\n-\tif (rte)\n+\tif (pstate->p_expr_kind == EXPR_KIND_PARTITION_BOUND)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_UNDEFINED_TABLE),\n+\t\t\t\t errmsg(\"invalid reference in partition bound expression for table\n\\\"%s\\\"\",\n+\t\t\t\t\t\trelation->relname)));\n+\telse if (rte)\n\nHmm, it seems odd to me that it's OK for default expressions to emit the\n\"missing RTE\" error, whereas partition bound expressions would emit this\nspecial error message?\n\ncreate table foo (a int default (bar.a));\nERROR: missing FROM-clause entry for table \"bar\"\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 13 Mar 2019 14:15:24 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On 2019/03/13 14:15, Amit Langote wrote:\n> On 2019/03/11 16:21, Michael Paquier wrote:\n>> On Mon, Mar 11, 2019 at 03:44:39PM +0900, Amit Langote wrote:\n>>> We could make the error message more meaningful depending on the context,\n>>> but maybe it'd better be pursue it as a separate project.\n>>\n>> Yeah, I noticed that stuff when working on it this afternoon. 
The\n>> error message does not completely feel right even in your produced\n>> tests. Out of curiosity I have been working on this thing myself,\n>> and it is possible to have a context-related message. Please see\n>> attached, that's in my opinion less confusing, and of course\n>> debatable. Still this approach does not feel completely right either\n>> as that means hijacking the code path which generates a generic\n>> message for missing RTEs. :(\n> \n> @@ -3259,6 +3259,9 @@ errorMissingRTE(ParseState *pstate, RangeVar\n> +\t *\n> +\t * Also, in the context of parsing a partition bound, produce a more\n> +\t * helpful error message.\n> \t */\n> \tif (rte && rte->alias &&\n> \t\tstrcmp(rte->eref->aliasname, relation->relname) != 0 &&\n> \n> \n> -\tif (rte)\n> +\tif (pstate->p_expr_kind == EXPR_KIND_PARTITION_BOUND)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_UNDEFINED_TABLE),\n> +\t\t\t\t errmsg(\"invalid reference in partition bound expression for table\n> \\\"%s\\\"\",\n> +\t\t\t\t\t\trelation->relname)));\n> +\telse if (rte)\n> \n> Hmm, it seems odd to me that it's OK for default expressions to emit the\n> \"missing RTE\" error, whereas partition bound expressions would emit this\n> special error message?\n> \n> create table foo (a int default (bar.a));\n> ERROR: missing FROM-clause entry for table \"bar\"\n\nLooking into this a bit more, I wonder if it would be a good idea to to\nhave one of those error-emitting switch (pstate->p_expr_kind) blocks in\ntransformColumnRef(), as shown in the attached patch?\n\nFor example, the following error is emitted by one of such blocks that's\nin transformSubLink():\n\ncreate table foo (a int default (select * from non_existent_table));\nERROR: cannot use subquery in DEFAULT expression\n\nHowever, if we decide to go with this approach, we will start getting\ndifferent error messages from what HEAD gives in certain cases. 
With the\npatch, you will get the following error when trying to use an aggregate\nfunction in for DEFAULT expression:\n\ncreate table foo (a int default (avg(foo.a)));\nERROR: cannot use column reference in DEFAULT expression\n\nbut on HEAD, you get:\n\ncreate table foo (a int default (avg(foo.a)));\nERROR: aggregate functions are not allowed in DEFAULT expressions\n\nThat's because, on HEAD, transformAggregateCall() (or something that calls\nit) first calls transformColumnRef() to resolve 'foo.a', which checks that\nfoo.a is a valid column reference, but it doesn't concern itself with the\nfact that the bigger expression it's part of is being used for DEFAULT\nexpression. It's only after 'foo.a' has been resolved as a valid column\nreference that check_agglevels_and_constraints(), via\ntransformAggregateCall, emits an error that the overall expression is\ninvalid to use as a DEFAULT expression. With patches, error will be\nemitted even before resolving 'foo.a'.\n\nWhile transformAggregateCall() works like that, transformSubLink(), which\nI mentioned in the first example, doesn't bother to analyze the query\nfirst (select * from non_existent_table) to notice that the referenced\ntable doesn't exist. If it had bothered to analyze the query first, we\nwould've most likely gotten an error from errorMissingRTE(), not what we\nget today. 
So, there are certainly some inconsistencies even today in how\nthese errors are emitted.\n\nThanks,\nAmit", "msg_date": "Wed, 13 Mar 2019 15:17:47 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Wed, Mar 13, 2019 at 03:17:47PM +0900, Amit Langote wrote:\n> but on HEAD, you get:\n> \n> create table foo (a int default (avg(foo.a)));\n> ERROR: aggregate functions are not allowed in DEFAULT expressions\n\nI actually think that what you propose here makes more sense than what\nHEAD does because the most inner expression gets evaluated first.\nThis for example generates the same error as on HEAD:\n=# create table foo (a int default (avg(1)));\nERROR: 42803: aggregate functions are not allowed in DEFAULT expressions\n--\nMichael", "msg_date": "Thu, 14 Mar 2019 13:23:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Thu, Mar 14, 2019 at 01:23:08PM +0900, Michael Paquier wrote:\n> I actually think that what you propose here makes more sense than what\n> HEAD does because the most inner expression gets evaluated first.\n> This for example generates the same error as on HEAD:\n> =# create table foo (a int default (avg(1)));\n> ERROR: 42803: aggregate functions are not allowed in DEFAULT expressions\n\nI have been working on that, and in the case of a non-existing column\nthe patch would generate the following error on HEAD:\nERROR: 42703: column \"non_existent\" does not exist\nBut with the patch we get that:\nERROR: cannot use column reference in DEFAULT expression\n\nStill I think that this looks right as we should not have any direct\ncolumn reference anyway, and it keeps the code more simple. 
So I have\nadded more tests to cover those grounds.\n\nThe docs of CREATE TABLE are actually wrong, right? It mentions that\n\"subqueries and cross-references to other columns in the current table\nare not allowed\", but cookDefault() rejects any kind of column\nreferences anyway for default expressions, including references to the\ncolumn which uses the default expression (or we talk about generated\ncolumns here still if I recall Peter Eisentraunt's patch correctly\ngenerated columns don't allow references to the column using the\nexpression itself, which is logic by the way).\n\n+ * transformExpr() should have already rejected column references,\n+ * subqueries, aggregates, window functions, and SRFs, based on the\n+ * EXPR_KIND_ for a default expression.\n */\nI would have added an assertion here, perhaps an elog(). Same remark\nfor cookDefault(). The attached patch has an assertion.\n\n CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES\n IN (sum(a));\n-ERROR: aggregate functions are not allowed in partition bound\n+ERROR: cannot use column reference in partition bound expression\nIt would be nice to also test the case where an aggregate is\nforbidden, so I have added a test with sum(1) instead of a column\nreference.\n\nWe never actually tested in the tree the case of subqueries and SRFs\nused in default expressions, so added.\n\nThe last patch you sent did not fix the original problem of the\nthread. That was intentional from your side I guess to show your\npoint, still we are touching the same area of the code so I propose to\nfix everything together, and to improve the test coverage for list and\nrange strategies. In order to achieve that, I have merged my previous \nproposal into your patch, and added more tests. The new tests for the\nrange strategy reproduce the crash. 
The result is attached.\n\nWhat do you think?\n--\nMichael", "msg_date": "Wed, 20 Mar 2019 11:07:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\nOn 2019/03/20 11:07, Michael Paquier wrote:\n> On Thu, Mar 14, 2019 at 01:23:08PM +0900, Michael Paquier wrote:\n>> I actually think that what you propose here makes more sense than what\n>> HEAD does because the most inner expression gets evaluated first.\n>> This for example generates the same error as on HEAD:\n>> =# create table foo (a int default (avg(1)));\n>> ERROR: 42803: aggregate functions are not allowed in DEFAULT expressions\n> \n> I have been working on that,\n\nThanks a lot.\n\n> and in the case of a non-existing column\n> the patch would generate the following error on HEAD:\n> ERROR: 42703: column \"non_existent\" does not exist\n> But with the patch we get that:\n> ERROR: cannot use column reference in DEFAULT expression\n> \n> Still I think that this looks right as we should not have any direct\n> column reference anyway, and it keeps the code more simple. So I have\n> added more tests to cover those grounds.\n\nI agree that we should error out immediately, once we encounter an (sub-)\nexpression that's not supported by a given feature (default, partition\nbounds, etc.)\n\nActually, doesn't that mean we should error out because of the aggregate\nin the following example:\n\ncreate table foo (a int default (avg(a));\n\nbecause we can notice the aggregate before we look into its arguments.\nMaybe, we should move the error-checking switch to a point before checking\nthe arguments? That looks slightly more drastic change to make though.\n\n> The docs of CREATE TABLE are actually wrong, right? 
It mentions that\n> \"subqueries and cross-references to other columns in the current table\n> are not allowed\", but cookDefault() rejects any kind of column\n> references anyway for default expressions, including references to the\n> column which uses the default expression (or we talk about generated\n> columns here still if I recall Peter Eisentraunt's patch correctly\n> generated columns don't allow references to the column using the\n> expression itself, which is logic by the way).\n\nYeah, the documentation in your patch looks correct at a first glance.\n\n> + * transformExpr() should have already rejected column references,\n> + * subqueries, aggregates, window functions, and SRFs, based on the\n> + * EXPR_KIND_ for a default expression.\n> */\n> I would have added an assertion here, perhaps an elog(). Same remark\n> for cookDefault(). The attached patch has an assertion.\n\n+1\n\n> CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted FOR VALUES\n> IN (sum(a));\n> -ERROR: aggregate functions are not allowed in partition bound\n> +ERROR: cannot use column reference in partition bound expression\n> It would be nice to also test the case where an aggregate is\n> forbidden, so I have added a test with sum(1) instead of a column\n> reference.\n\nAs I said above, maybe we should try to rearrange things so that we get\nthe former error in both cases.\n\n> We never actually tested in the tree the case of subqueries and SRFs\n> used in default expressions, so added.\n> \n> The last patch you sent did not fix the original problem of the\n> thread. That was intentional from your side I guess to show your\n> point,\n\nYeah, that's right.\n\n> still we are touching the same area of the code so I propose to\n> fix everything together, and to improve the test coverage for list and\n> range strategies. In order to achieve that, I have merged my previous \n> proposal into your patch, and added more tests. The new tests for the\n> range strategy reproduce the crash. 
The result is attached.\n\nWe may want to fix the crash first. It might be better to hear other\nopinions before doing something about the error messages.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 20 Mar 2019 18:07:23 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Wed, Mar 20, 2019 at 06:07:23PM +0900, Amit Langote wrote:\n> because we can notice the aggregate before we look into its arguments.\n> Maybe, we should move the error-checking switch to a point before checking\n> the arguments? That looks slightly more drastic change to make though.\n\nYeah, I think that it would be more invasive because the parsing logic\nlooks at the column references first. This way of doing things is\nalso something which is logic in its own way as the most internal\nportions of an expression get checked first, so I quite like that way\nof doing things. And that's just consistent with the point of view of\nthe parsing check order.\n\n> We may want to fix the crash first. 
It might be better to hear other\n> opinions before doing something about the error messages.\n\nThe thing is that in order to keep the tests for the crash, we finish\nwith the unintuitive RTE-related errors, so it is also inconsistent to\nnot group things..\n--\nMichael", "msg_date": "Wed, 20 Mar 2019 18:17:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Wed, Mar 20, 2019 at 06:17:27PM +0900, Michael Paquier wrote:\n> The thing is that in order to keep the tests for the crash, we finish\n> with the unintuitive RTE-related errors, so it is also inconsistent to\n> not group things..\n\nAs I have seen no feedback from others regarding the changes in error\nmessages depending on the parsing context, so I have been looking at\nsplitting the fix for the crash and changing the error messages, and\nattached is the result of the split (minus the commit messages). The\nfirst patch fixes the crash, and includes test cases to cover the\ncrash as well as extra cases for list and range strategies with\npartition bounds. Some of the error messages are confusing, but that\nfixes the issue. This is not the most elegant thing without the\nsecond patch, but well that could be worse.\n\nThe second patch adds better error context for the different error\nmessages, and includes tests for default expressions, which we could\ndiscuss in a separate thread. So I am not proposing to commit that\nwithout more feedback.\n--\nMichael", "msg_date": "Fri, 22 Mar 2019 14:09:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\nThanks for splitting. 
It makes sense, because, as you know, the bug that\ncauses the crash is a separate problem from unintuitive error messages\nwhich result from the way in which we parse expressions.\n\nOn 2019/03/22 14:09, Michael Paquier wrote:\n> On Wed, Mar 20, 2019 at 06:17:27PM +0900, Michael Paquier wrote:\n>> The thing is that in order to keep the tests for the crash, we finish\n>> with the inintuitive RTE-related errors, so it is also inconsistent to\n>> not group things..\n> \n> As I have seen no feedback from others regarding the changes in error\n> messages depending on the parsing context, so I have been looking at\n> splitting the fix for the crash and changing the error messages, and\n> attached is the result of the split (minus the commit messages). The\n> first patch fixes the crash, and includes test cases to cover the\n> crash as well as extra cases for list and range strategies with\n> partition bounds. Some of the error messages are confusing, but that\n> fixes the issue. This is not the most elegant thing without the\n> second patch, but well that could be worse.\n\nA comment on this one:\n\n+\tif (cname == NULL)\n+\t{\n+\t\t/*\n+\t\t * No field names have been found, meaning that there\n+\t\t * is not much to do with special value handling. Instead\n+\t\t * let the expression transformation handle any errors and\n+\t\t * limitations.\n+\t\t */\n\nThis comment sounds a bit misleading. The code above this \"did\" find\nfield names, but there were too many. What this comment should mention is\nthat parsing didn't return a single field name, which is the format that\nthe code below this can do something useful with. 
I had proposed that\nupthread, but maybe that feedback got lost in the discussion about other\nrelated issues.\n\nI had proposed this:\n\n+ /*\n+ * There should be a single field named \"minvalue\" or \"maxvalue\".\n+ */\n if (list_length(cref->fields) == 1 &&\n IsA(linitial(cref->fields), String))\n cname = strVal(linitial(cref->fields));\n\n- Assert(cname != NULL);\n- if (strcmp(\"minvalue\", cname) == 0)\n+ if (cname == NULL)\n+ {\n+ /*\n+ * ColumnRef is not in the desired single-field-name form; for\n+ * consistency, let transformExpr() report the error rather\n+ * than doing it ourselves.\n+ */\n+ }\n\nMaybe that could use few more tweaks, but hope I've made my point.\n\n+CREATE TABLE part_bogus_expr_fail PARTITION OF range_parted\n+ FOR VALUES FROM (sum(a)) TO ('2019-01-01');\n+ERROR: function sum(date) does not exist\n+LINE 2: FOR VALUES FROM (sum(a)) TO ('2019-01-01');\n\nMaybe, we should add to this patch only the tests relevant to the cases\nthat would lead to crash without this patch.\n\nTests regarding error messages fine tuning can be added in the other patch.\n\n> The second patch adds better error context for the different error\n> messages, and includes tests for default expressions, which we could\n> discuss in a separate thread. So I am not proposing to commit that\n> without more feedback.\n\nA separate thread will definitely attract more attention, at least in due\ntime. :)\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 22 Mar 2019 14:49:42 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "On Fri, Mar 22, 2019 at 02:49:42PM +0900, Amit Langote wrote:\n> This comment sounds a bit misleading. The code above this \"did\" find\n> field names, but there were too many. 
What this comment should mention is\n> that parsing didn't return a single field name, which is the format that\n> the code below this can do something useful with. I had proposed that\n> upthread, but maybe that feedback got lost in the discussion about other\n> related issues.\n\nTrue. I was reviewing that stuff yesterday and I have not been able\nto finish wrapping it.\n\n> +CREATE TABLE part_bogus_expr_fail PARTITION OF range_parted\n> + FOR VALUES FROM (sum(a)) TO ('2019-01-01');\n> +ERROR: function sum(date) does not exist\n> +LINE 2: FOR VALUES FROM (sum(a)) TO ('2019-01-01');\n> \n> Maybe, we should add to this patch only the tests relevant to the cases\n> that would lead to crash without this patch.\n\nDone as you suggested, with a minimal set enough to trigger the crash,\nstill the error message is rather misleading as you would expect :)\n\n> A separate thread will definitely attract more attention, at least in due\n> time. :)\n\nSure. For now I have committed a lighter version of 0001, with\ntweaked comments based on your suggestion, as well as a minimum set of\ntest cases. I have added on the way some tests for range partitions\nwhich have been missing from the start, and improved the existing set\nby removing the original \"a.a\" references, and switching to use\nmax(date) for range partitions to bump correctly on the aggregate\nerror. I am just updating the second patch now and I'll begin a new\nthread soon.\n--\nMichael", "msg_date": "Tue, 26 Mar 2019 10:15:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" }, { "msg_contents": "Hi,\n\nOn 2019/03/26 10:15, Michael Paquier wrote:\n> Done as you suggested, with a minimal set enough to trigger the crash,\n> still the error message is rather misleading as you would expect :)\n\nThanks for committing.\n\n>> A separate thread will definitely attract more attention, at least in due\n>> time. 
:)\n> \n> Sure. For now I have committed a lighter version of 0001, with\n> tweaked comments based on your suggestion, as well as a minimum set of\n> test cases. I have added on the way some tests for range partitions\n> which have been missing from the start, and improved the existing set\n> by removing the original \"a.a\" references, and switching to use\n> max(date) for range partitions to bump correctly on the aggregate\n> error. I am just updating the second patch now and I'll begin a new\n> thread soon.\n\nThanks.\n\nRegards,\nAmit\n\n\n", "msg_date": "Tue, 26 Mar 2019 10:24:52 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15668: Server crash in transformPartitionRangeBounds" } ]
[ { "msg_contents": "1. Currently, cube extension has CUBE_MAX_DIM set as 100.\nA recent github issue. [1]\n2. To compile a custom version of the extension off the tree requires:\n```\n make -C contrib/custom_cube USE_PGXS=1\n```\n3. But utils/float.h required by cube.c and cubeparse.y is not installed.\nIt's not present in the latest release file [2],\nnor being installed when running\nmake install when compiling from git.\n4. Current workaround is to use\n```\n#include \"../../src/include/utils/float.h\"\n```\nin cube.c and cubeparse.y when compiling in git tree.\n\n[1] https://github.com/postgres/postgres/pull/38\n[2] https://github.com/postgres/postgres/archive/REL_11_2.tar.gz", "msg_date": "Tue, 5 Mar 2019 12:51:47 +0300", "msg_from": "Siarhei Siniak <serega.belarus@gmail.com>", "msg_from_op": true, "msg_subject": "[Issue] Can't recompile cube extension as PGXS,\n utils/float.h is not installed" }, { "msg_contents": "Siarhei Siniak <serega.belarus@gmail.com> writes:\n> 3. But utils/float.h required by cube.c and cubeparse.y is not installed.\n\nAFAICT, that file only exists in HEAD, not in any released branch, and\nit is installed during \"make install\" from HEAD. 
Please be sure you\nare using installed files that match whatever branch you're trying\nto build from.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 05 Mar 2019 09:50:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Issue] Can't recompile cube extension as PGXS,\n utils/float.h is not installed" }, { "msg_contents": "---------- Forwarded message ---------\nFrom: Siarhei Siniak <serega.belarus@gmail.com>\nDate: Tue, 5 Mar 2019 at 23:31\nSubject: Re: [Issue] Can't recompile cube extension as PGXS, utils/float.h\nis not installed\nTo: Tom Lane <tgl@sss.pgh.pa.us>\n\n\n>AFAICT, that file only exists in HEAD, not in any released branch, and\n>it is installed during \"make install\" from HEAD. Please be sure you\n>are using installed files that match whatever branch you're trying\n>to build from.\nYeah june and july 2018, a month in between. Just thought a release was not\nso long ago.\n```\ngit log REL_11_BETA2..6bf0bc842bd75 --format=oneline -- | wc -l\n# 175\n```\nOk, then probably no more questions.", "msg_date": "Tue, 5 Mar 2019 23:36:08 +0300", "msg_from": "Siarhei Siniak <serega.belarus@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: [Issue] Can't recompile cube extension as PGXS, utils/float.h is\n not installed" } ]
[ { "msg_contents": "currently there is one process per connection and it will not not very good\nfor some short time connection. In oracle database, it support shared\nserver which can serve more than 1 users at the same time.\n\nSee\nhttps://docs.oracle.com/cd/B28359_01/server.111/b28310/manproc001.htm#ADMIN11166\n\n\ndo we have any plan about this?", "msg_date": "Tue, 5 Mar 2019 23:34:56 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "any plan to support shared servers like Oracle in PG?" }, { "msg_contents": "There already are solutions regarding this feature in Postgres\n using \"connection pooler\" wording\n\nsee \n\npgpool: http://www.pgpool.net/mediawiki/index.php/Main_Page\n\npgbouncer: https://pgbouncer.github.io/\n\nthere are also discussions to include this as a core feature\n\nhttps://www.postgresql.org/message-id/flat/4b971a8f-ff61-40eb-8f30-7b57eb0fdf9d%40postgrespro.ru\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n", "msg_date": "Tue, 5 Mar 2019 12:13:09 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: any plan to support shared servers like Oracle in PG?" }, { "msg_contents": "thank you for this information! 
takes 2 days to read the discussion..\n\nOn Wed, Mar 6, 2019 at 3:13 AM legrand legrand <legrand_legrand@hotmail.com>\nwrote:\n\n> There already are solutions regarding this feature in Postgres\n> using \"connection pooler\" wording\n>\n> see\n>\n> pgpool: http://www.pgpool.net/mediawiki/index.php/Main_Page\n>\n> pgbouncer: https://pgbouncer.github.io/\n>\n> there are also discussions to include this as a core feature\n>\n>\n> https://www.postgresql.org/message-id/flat/4b971a8f-ff61-40eb-8f30-7b57eb0fdf9d%40postgrespro.ru\n>\n>\n>\n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n>\n>\n", "msg_date": "Fri, 8 Mar 2019 12:49:38 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: any plan to support shared servers like Oracle in PG?" } ]
[ { "msg_contents": "\nWe don't currently have any buildfarm animals running 32 bit mingw\nbuilds for releases > 10. As part of my testing of msys 2 I thought I\nwould try its 32 bit compiler and got this regression diff on HEAD\n\n\ncheers\n\n\nandrew\n\n\ndiff -w -U3\nC:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql/src/test/regress/expected/circle.out\nC:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/regress/results/circle.out\n---\nC:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql/src/test/regress/expected/circle.out\n2019-03-03 17:14:38.648207000 +0000\n+++\nC:/tools/msys64/home/Administrator/bf/root/HEAD/pgsql.build/src/test/regress/results/circle.out\n2019-03-04 19:33:37.576494900 +0000\n@@ -111,8 +111,8 @@\n   WHERE (c1.f1 < c2.f1) AND ((c1.f1 <-> c2.f1) > 0)\n   ORDER BY distance, area(c1.f1), area(c2.f1);\n  five |      one       |      two       |     distance\n-------+----------------+----------------+------------------\n-      | <(3,5),0>      | <(1,2),3>      | 0.60555127546399\n+------+----------------+----------------+-------------------\n+      | <(3,5),0>      | <(1,2),3>      | 0.605551275463989\n       | <(3,5),0>      | <(5,1),3>      | 1.47213595499958\n       | <(100,200),10> | <(100,1),115>  |               74\n       | <(100,200),10> | <(1,2),100>    | 111.370729772479\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Mar 2019 13:43:50 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Windows 32 bit vs circle test" } ]
[ { "msg_contents": "Hi all,\n\nHere is my attempt to fix a 12-years old ltree bug (which is a todo item).\n\nI see it's not backward-compatible, but in my understanding that's\nwhat is documented. Previous behavior was inconsistent with\ndocumentation (where single asterisk should match zero or more\nlabels).\n\nhttp://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php", "msg_date": "Tue, 5 Mar 2019 22:27:02 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com>", "msg_from_op": true, "msg_subject": "fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com> writes:\n> Here is my attempt to fix a 12-years old ltree bug (which is a todo item).\n> I see it's not backward-compatible, but in my understanding that's\n> what is documented. Previous behavior was inconsistent with\n> documentation (where single asterisk should match zero or more\n> labels).\n> http://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php\n\nI took a quick look through this. I see where you're going with this,\nand I agree that this coding seems like a better match to what it says\nin the documentation:\n\n\t... you can put ! (NOT) at the start to match any label that\n\tdoesn't match any of the alternatives.\n\nHowever, it seems like Teodor and Oleg went to an awful lot of trouble\nto implement some other behavior. It looks like what's there is trying\nto do something like \"true if this pattern does not match any label\nbetween the matches for whatever's around it\", rather than just \"true\nif this pattern does not match at one specific position\". The number\nof changes in the expected output for existing regression test cases\nis also disheartening: it's fairly hard to believe that whoever wrote\nthe test cases didn't think those expected outputs were correct.\n\nIn short, I'm wondering if we should treat this as a documentation\nbug not a code bug. 
But to do that, we'd need a more accurate\ndescription of what the code is supposed to do, because the statement\nquoted above is certainly not a match to the actual behavior.\n\nBTW, if we do proceed in this direction, I wonder whether the\nltree_gist code needs any adjustments.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Apr 2019 11:46:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On Sun, Apr 7, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com> writes:\n> > Here is my attempt to fix a 12-years old ltree bug (which is a todo item).\n> > I see it's not backward-compatible, but in my understanding that's\n> > what is documented. Previous behavior was inconsistent with\n> > documentation (where single asterisk should match zero or more\n> > labels).\n> > http://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php\n\n[...]\n\n> In short, I'm wondering if we should treat this as a documentation\n> bug not a code bug. But to do that, we'd need a more accurate\n> description of what the code is supposed to do, because the statement\n> quoted above is certainly not a match to the actual behavior.\n\nThis patch doesn't apply. More importantly, it seems like we don't\nhave a consensus on whether we want it.\n\nTeodor, Oleg, would you like to offer an opinion here? If I\nunderstand correctly, the choices are doc change, code/comment change\nor WONT_FIX. 
This seems to be an entry that we can bring to a\nconclusion in this CF with some input from the ltree experts.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2019 16:22:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On Mon, Jul 8, 2019 at 7:22 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sun, Apr 7, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com> writes:\n> > > Here is my attempt to fix a 12-years old ltree bug (which is a todo item).\n> > > I see it's not backward-compatible, but in my understanding that's\n> > > what is documented. Previous behavior was inconsistent with\n> > > documentation (where single asterisk should match zero or more\n> > > labels).\n> > > http://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php\n>\n> [...]\n>\n> > In short, I'm wondering if we should treat this as a documentation\n> > bug not a code bug. But to do that, we'd need a more accurate\n> > description of what the code is supposed to do, because the statement\n> > quoted above is certainly not a match to the actual behavior.\n>\n> This patch doesn't apply. More importantly, it seems like we don't\n> have a consensus on whether we want it.\n>\n> Teodor, Oleg, would you like to offer an opinion here? If I\n> understand correctly, the choices are doc change, code/comment change\n> or WONT_FIX. This seems to be an entry that we can bring to a\n> conclusion in this CF with some input from the ltree experts.\n\nWe are currently very busy and will look at the problem (and dig into\nour memory)\nlater. 
There is also another ltree patch\n(https://commitfest.postgresql.org/23/1977/), it would be\nnice if Filip try it.\n\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 9 Jul 2019 17:57:56 +0300", "msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 09.07.2019 17:57, Oleg Bartunov wrote:\n> On Mon, Jul 8, 2019 at 7:22 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Sun, Apr 7, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com> writes:\n>>>> Here is my attempt to fix a 12-years old ltree bug (which is a todo item).\n>>>> I see it's not backward-compatible, but in my understanding that's\n>>>> what is documented. Previous behavior was inconsistent with\n>>>> documentation (where single asterisk should match zero or more\n>>>> labels).\n>>>> http://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php\n>> [...]\n>>\n>>> In short, I'm wondering if we should treat this as a documentation\n>>> bug not a code bug. But to do that, we'd need a more accurate\n>>> description of what the code is supposed to do, because the statement\n>>> quoted above is certainly not a match to the actual behavior.\n>> This patch doesn't apply. More importantly, it seems like we don't\n>> have a consensus on whether we want it.\n>>\n>> Teodor, Oleg, would you like to offer an opinion here? If I\n>> understand correctly, the choices are doc change, code/comment change\n>> or WONT_FIX. This seems to be an entry that we can bring to a\n>> conclusion in this CF with some input from the ltree experts.\n> We are currently very busy and will look at the problem (and dig into\n> our memory) later. 
There is also another ltree patch\n> (https://commitfest.postgresql.org/23/1977/), it would be nice if\n> Filip try it.\n\nI looked at \"ltree syntax improvement\" patch and found two more very\nold bugs in ltree/lquery (fixes are attached):\n\n1. ltree/lquery level counter overflow is wrongly checked:\n\n SELECT nlevel((repeat('a.', 65534) || 'a')::ltree);\n nlevel\n --------\n 65535\n (1 row)\n\n -- expected 65536 or error\n SELECT nlevel((repeat('a.', 65535) || 'a')::ltree);\n nlevel\n --------\n 0\n (1 row)\n\n -- expected 65537 or error\n SELECT nlevel((repeat('a.', 65536) || 'a')::ltree);\n nlevel\n --------\n 1\n (1 row)\n\n -- expected 'aaaaa...' or error\n SELECT (repeat('a.', 65535) || 'a')::ltree;\n ltree\n -------\n \n (1 row)\n\n -- expected 'aaaaa...' or error\n SELECT (repeat('a.', 65536) || 'a')::ltree;\n ltree\n -------\n a\n (1 row)\n\n\n2. '*{a}.*{b}.*{c}' is not equivalent to '*{a+b+c}' (as I expect):\n\n SELECT ltree '1.2' ~ '*{2}';\n ?column?\n ----------\n t\n (1 row)\n\n -- expected true\n SELECT ltree '1.2' ~ '*{1}.*{1}';\n ?column?\n ----------\n f\n (1 row)\n\n\nMaybe these two bugs need a separate thread?\n\n\n--\nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 16 Jul 2019 18:50:22 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Hi Nikita,\n\nOn Tue, Jul 16, 2019 at 6:52 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> I looked at \"ltree syntax improvement\" patch and found two more very\n> old bugs in ltree/lquery (fixes are attached):\n\nThank you for the fixes. I've couple notes on them.\n\n0001-Fix-max-size-checking-for-ltree-and-lquery.patch\n\n+#define LTREE_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / sizeof(nodeitem))\n+#define LQUERY_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / ITEMSIZE)\n\nLooks over caution. 
PG_UINT16_MAX is not even close to MaxAllocSize /\nsizeof(nodeitem) or MaxAllocSize / ITEMSIZE.\n\n0002-Fix-successive-lquery-ops.patch\n\ndiff --git a/contrib/ltree/lquery_op.c b/contrib/ltree/lquery_op.c\nindex 62172d5..d4f4941 100644\n--- a/contrib/ltree/lquery_op.c\n+++ b/contrib/ltree/lquery_op.c\n@@ -255,8 +255,8 @@ checkCond(lquery_level *curq, int query_numlevel,\nltree_level *curt, int tree_nu\n }\n else\n {\n- low_pos = cur_tpos + curq->low;\n- high_pos = cur_tpos + curq->high;\n+ low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n+ high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n if (ptr && ptr->q)\n {\n ptr->nq++;\n@@ -282,8 +282,8 @@ checkCond(lquery_level *curq, int query_numlevel,\nltree_level *curt, int tree_nu\n }\n else\n {\n- low_pos = cur_tpos + curq->low;\n- high_pos = cur_tpos + curq->high;\n+ low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n+ high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n }\n\n curq = LQL_NEXT(curq);\n\nI'm not sure what do these checks do. Code around is uncommented and\npuzzled. 
But could we guarantee the same invariant on the stage of\nltree/lquery parsing?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 17 Jul 2019 21:33:46 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Please create separate commitfest entry.", "msg_date": "Wed, 04 Sep 2019 11:05:03 +0000", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On Tue, Jul 16, 2019 at 8:52 PM Nikita Glukhov <n.gluhov@postgrespro.ru>\nwrote:\n\n>\n> On 09.07.2019 17:57, Oleg Bartunov wrote:\n>\n> On Mon, Jul 8, 2019 at 7:22 AM Thomas Munro <thomas.munro@gmail.com> <thomas.munro@gmail.com> wrote:\n>\n> On Sun, Apr 7, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> <tgl@sss.pgh.pa.us> wrote:\n>\n> =?UTF-8?Q?Filip_Rembia=C5=82kowski?= <filip.rembialkowski@gmail.com> <filip.rembialkowski@gmail.com> writes:\n>\n> Here is my attempt to fix a 12-years old ltree bug (which is a todo item).\n> I see it's not backward-compatible, but in my understanding that's\n> what is documented. Previous behavior was inconsistent with\n> documentation (where single asterisk should match zero or more\n> labels).http://archives.postgresql.org/pgsql-bugs/2007-11/msg00044.php\n>\n> [...]\n>\n>\n> In short, I'm wondering if we should treat this as a documentation\n> bug not a code bug. But to do that, we'd need a more accurate\n> description of what the code is supposed to do, because the statement\n> quoted above is certainly not a match to the actual behavior.\n>\n> This patch doesn't apply. More importantly, it seems like we don't\n> have a consensus on whether we want it.\n>\n> Teodor, Oleg, would you like to offer an opinion here? 
If I\n> understand correctly, the choices are doc change, code/comment change\n> or WONT_FIX. This seems to be an entry that we can bring to a\n> conclusion in this CF with some input from the ltree experts.\n>\n> We are currently very busy and will look at the problem (and dig into\n> our memory) later. There is also another ltree patch\n> (https://commitfest.postgresql.org/23/1977/), it would be nice if\n> Filip try it.\n>\n> I looked at \"ltree syntax improvement\" patch and found two more very\n> old bugs in ltree/lquery (fixes are attached):\n>\n> 1. ltree/lquery level counter overflow is wrongly checked:\n>\n> SELECT nlevel((repeat('a.', 65534) || 'a')::ltree);\n> nlevel\n> --------\n> 65535\n> (1 row)\n>\n> -- expected 65536 or error\n> SELECT nlevel((repeat('a.', 65535) || 'a')::ltree);\n> nlevel\n> --------\n> 0\n> (1 row)\n>\n> -- expected 65537 or error\n> SELECT nlevel((repeat('a.', 65536) || 'a')::ltree);\n> nlevel\n> --------\n> 1\n> (1 row)\n>\n> -- expected 'aaaaa...' or error\n> SELECT (repeat('a.', 65535) || 'a')::ltree;\n> ltree\n> -------\n>\n> (1 row)\n>\n> -- expected 'aaaaa...' or error\n> SELECT (repeat('a.', 65536) || 'a')::ltree;\n> ltree\n> -------\n> a\n> (1 row)\n>\n>\n> 2. 
'*{a}.*{b}.*{c}' is not equivalent to '*{a+b+c}' (as I expect):\n>\n> SELECT ltree '1.2' ~ '*{2}';\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> -- expected true\n> SELECT ltree '1.2' ~ '*{1}.*{1}';\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n>\n> Maybe these two bugs need a separate thread?\n>\n>\n> Please create separate commitfest entry.\n\n\n\n> --\n> Nikita Glukhov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>\n\n-- \nIbrar Ahmed", "msg_date": "Wed, 4 Sep 2019 16:06:28 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 2019-Jul-09, Oleg Bartunov wrote:\n\n> On Mon, Jul 8, 2019 at 7:22 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Sun, Apr 7, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > > In short, I'm wondering if we should treat this as a documentation\n> > > bug not a code bug. 
But to do that, we'd need a more accurate\n> > > description of what the code is supposed to do, because the statement\n> > > quoted above is certainly not a match to the actual behavior.\n\n> > Teodor, Oleg, would you like to offer an opinion here?\n\n> We are currently very busy and will look at the problem (and dig into\n> our memory) later.\n\nHi Oleg, Teodor. Did you find time to refresh your memory on these things?\nIt would be good to have these bugfixes sorted out.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Sep 2019 16:50:58 -0400", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, failed\n\nThis is my first PostgreSQL commitfest and review, guidance welcome.\r\n\r\nThis patch is straightforward, it applies cleanly, and it includes tests (I've also tested the feature manually).\r\n\r\nThe (existing) documentation states \"The length of a label path must be less than 65kB,\" I believe that the 65kB mentioned here should instead be 64kB - perhaps the patch could be updated with this single-character fix? At first I thought the 65kB limit would be applied to the label path string (e.g. 
'Top.Countries.Europe.Russia' would be 27 bytes), but it seems the limit applies to the number of labels in the path - perhaps `kB` is not the right measurement here and it should explicitly state 65536?\r\n\r\nIt is not stated in the documentation what should happen if the label path length is greater than 65535, so raising an error makes sense (but may be a breaking change).\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 19 Nov 2019 10:27:55 +0000", "msg_from": "Benjie Gillam <benjie@jemjie.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On Thu, Sep 05, 2019 at 04:50:58PM -0400, Alvaro Herrera from 2ndQuadrant wrote:\n> Hi Oleg, Teodor. Did you find time to refresh your memory on these things?\n> It would be good to have these bugfixes sorted out.\n\nTwo months later. Now would be a good time as well! Alexander, you\nhave also looked at two patches from Nikita upthread. If these look\ngood enough for you, are you working on merging them into the tree?\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 16:27:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Hi Nikita,\n\nThis patch seems inactive / stuck in \"waiting on author\" since November.\nIt's marked as bugfix, so it'd be good to get it committed instead of\njust punting it to the next CF.\n\nI did a quick review, and I came mostly with the same two complaints as\nAlexander ...\n\nOn Wed, Jul 17, 2019 at 09:33:46PM +0300, Alexander Korotkov wrote:\n>Hi Nikita,\n>\n>On Tue, Jul 16, 2019 at 6:52 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n>> I looked at \"ltree syntax improvement\" patch and found two more very\n>> old bugs in ltree/lquery (fixes are attached):\n>\n>Thank you for the fixes. 
I've couple notes on them.\n>\n>0001-Fix-max-size-checking-for-ltree-and-lquery.patch\n>\n>+#define LTREE_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / sizeof(nodeitem))\n>+#define LQUERY_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / ITEMSIZE)\n>\n>Looks over caution. PG_UINT16_MAX is not even close to MaxAllocSize /\n>sizeof(nodeitem) or MaxAllocSize / ITEMSIZE.\n>\n\nYeah, I'm also puzzled by the usage of PG_UINT16_MAX here. It's so much\nlower than the other values we could jut use the constant directly, but\nlet's say the structs could grow from the ~16B to chnge this.\n\nThe main question is why we need PG_UINT16_MAX at all? It kinda implies\nwe need to squish the value into a 2B counter or something, but is that\nactually true? I don't see anything like that in ltree_io.c.\n\nSo it seems more like an arbitrary value considered \"sane\" - which is\nfine, but then a comment saying so would be nice, and we could pick a\nvalue that is \"nicer\" for humans. Or just use value computed from the\nMaxAllocSize limit, without the Min().\n\n>0002-Fix-successive-lquery-ops.patch\n>\n>diff --git a/contrib/ltree/lquery_op.c b/contrib/ltree/lquery_op.c\n>index 62172d5..d4f4941 100644\n>--- a/contrib/ltree/lquery_op.c\n>+++ b/contrib/ltree/lquery_op.c\n>@@ -255,8 +255,8 @@ checkCond(lquery_level *curq, int query_numlevel,\n>ltree_level *curt, int tree_nu\n> }\n> else\n> {\n>- low_pos = cur_tpos + curq->low;\n>- high_pos = cur_tpos + curq->high;\n>+ low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n>+ high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n> if (ptr && ptr->q)\n> {\n> ptr->nq++;\n>@@ -282,8 +282,8 @@ checkCond(lquery_level *curq, int query_numlevel,\n>ltree_level *curt, int tree_nu\n> }\n> else\n> {\n>- low_pos = cur_tpos + curq->low;\n>- high_pos = cur_tpos + curq->high;\n>+ low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n>+ high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n> }\n>\n> curq = LQL_NEXT(curq);\n>\n>I'm not sure what do these checks do. 
Code around is uncommented and\n>puzzled. But could we guarantee the same invariant on the stage of\n>ltree/lquery parsing?\n>\n\nUnfortunately, the current code is somewhat undercommented :-(\n\nAnyway, I don't quite understand why we need these caps. It kinda seems\nlike a band-aid for potential overflow.\n\nWhy should it be OK for the values to even get past the maximum, with\nsane input data? And isn't there a better upper limit (e.g. based on\nhow much space we actually allocated)?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jan 2020 19:29:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Hi,\n\nI've moved this patch to the next CF - it's still in WoA state, but it's\nsupposedly a bugfix so I've decided not to return it with feedback.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 1 Feb 2020 12:13:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 24.01.2020 21:29, Tomas Vondra wrote:\n> Hi Nikita,\n>\n> This patch seems inactive / stuck in \"waiting on author\" since November.\n> It's marked as bugfix, so it'd be good to get it committed instead of\n> just punting it to the next CF.\n>\n> I did a quick review, and I came mostly with the same two complaints as\n> Alexander ...\n>\n> On Wed, Jul 17, 2019 at 09:33:46PM +0300, Alexander Korotkov wrote:\n>> Hi Nikita,\n>>\n>> On Tue, Jul 16, 2019 at 6:52 PM Nikita Glukhov \n>> <n.gluhov@postgrespro.ru> wrote:\n>>> I looked at \"ltree syntax improvement\" patch and found two more very\n>>> old bugs in ltree/lquery (fixes are 
attached):\n>>\n>> Thank you for the fixes.  I've couple notes on them.\n>>\n>> 0001-Fix-max-size-checking-for-ltree-and-lquery.patch\n>>\n>> +#define LTREE_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / \n>> sizeof(nodeitem))\n>> +#define LQUERY_MAX_LEVELS Min(PG_UINT16_MAX, MaxAllocSize / ITEMSIZE)\n>>\n>> Looks over caution.  PG_UINT16_MAX is not even close to MaxAllocSize /\n>> sizeof(nodeitem) or MaxAllocSize / ITEMSIZE.\n>>\n>\n> Yeah, I'm also puzzled by the usage of PG_UINT16_MAX here. It's so much\n> lower than the other values we could jut use the constant directly, but\n> let's say the structs could grow from the ~16B to chnge this.\n\nOk, LTREE_MAX_LEVELS and LQUERY_MAX_LEVELS are defined simply as PG_UINT16_MAX now.\n\n>\n> The main question is why we need PG_UINT16_MAX at all? It kinda implies\n> we need to squish the value into a 2B counter or something, but is that\n> actually true? I don't see anything like that in ltree_io.c.\n\nltree.numlevel and lquery.numlevel are uint16 fields.\n\n\nI also found two places in ltree_concat() where numlevel can overflow.\n\nThe first is ltree_concat() (operator ||(ltree, ltree)):\n\n=# SELECT nlevel(('a' || repeat('.a', 65533))::ltree || 'a');\n nlevel\n--------\n 65535\n(1 row)\n\n=# SELECT nlevel(('a' || repeat('.a', 65534))::ltree || 'a');\n nlevel\n--------\n 0\n(1 row)\n\n\n\nThe second is parsing of low and high level limits in lquery_in():\n\n=# SELECT '*{65535}'::lquery;\n lquery\n----------\n *{65535}\n(1 row)\n\n=# SELECT '*{65536}'::lquery;\n lquery\n--------\n *{0}\n(1 row)\n\n=# SELECT '*{65537}'::lquery;\n lquery\n--------\n *{1}\n(1 row)\n\n\nThe both problems are fixed in the new version of the patch.\n\n> So it seems more like an arbitrary value considered \"sane\" - which is\n> fine, but then a comment saying so would be nice, and we could pick a\n> value that is \"nicer\" for humans. 
Or just use value computed from the\n> MaxAllocSize limit, without the Min().\n>\n>> 0002-Fix-successive-lquery-ops.patch\n>>\n>> diff --git a/contrib/ltree/lquery_op.c b/contrib/ltree/lquery_op.c\n>> index 62172d5..d4f4941 100644\n>> --- a/contrib/ltree/lquery_op.c\n>> +++ b/contrib/ltree/lquery_op.c\n>> @@ -255,8 +255,8 @@ checkCond(lquery_level *curq, int query_numlevel,\n>> ltree_level *curt, int tree_nu\n>>  }\n>>  else\n>>  {\n>> - low_pos = cur_tpos + curq->low;\n>> - high_pos = cur_tpos + curq->high;\n>> + low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n>> + high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n>>  if (ptr && ptr->q)\n>>  {\n>>  ptr->nq++;\n>> @@ -282,8 +282,8 @@ checkCond(lquery_level *curq, int query_numlevel,\n>> ltree_level *curt, int tree_nu\n>>  }\n>>  else\n>>  {\n>> - low_pos = cur_tpos + curq->low;\n>> - high_pos = cur_tpos + curq->high;\n>> + low_pos = Min(low_pos + curq->low, PG_UINT16_MAX);\n>> + high_pos = Min(high_pos + curq->high, PG_UINT16_MAX);\n>>  }\n>>\n>>  curq = LQL_NEXT(curq);\n>>\n>> I'm not sure what do these checks do.  Code around is uncommented and\n>> puzzled.  But could we guarantee the same invariant on the stage of\n>> ltree/lquery parsing?\n>>\n>\n> Unfortunately, the current code is somewhat undercommented :-(\n\nThe main problem is that no one really understands how it works now.\n\nlow_pos and high_pos seem to be a range of tree levels, from which is allowed\nto match the rest of lquery.\n\nFor example, when we are matching '.b' in the 'a.*{2,3}.*{4,5}.b'::lquery,\nlow_pos = 1 + 2 + 4 = 7 and high_pos = 1 + 3 + 5 = 9.\n\n\nThe main goal of the patch is to fix calculation of low_pos and high_pos:\n\n- low_pos = cur_tpos + curq->low;\n- high_pos = cur_tpos + curq->high;\n+ low_pos = low_pos + curq->low;\n+ high_pos = high_pos + curq->high;\n\n>\n> Anyway, I don't quite understand why we need these caps. 
It kinda seems\n> like a band-aid for potential overflow.\n>\n> Why should it be OK for the values to even get past the maximum, with\n> sane input data? And isn't there a better upper limit (e.g. based on\n> how much space we actually allocated)?\n\nWe can compare low_pos to tree_numlevel and return false earlier, if it is\ngreater. And it seems that high_pos we can also limit to tree_numlevel.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 28 Mar 2020 01:43:35 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> On 24.01.2020 21:29, Tomas Vondra wrote:\n>> Unfortunately, the current code is somewhat undercommented :-(\n\n> The main problem is that no one really understands how it works now.\n\nIndeed. I was disturbed to realize that lquery_op.c, despite being\nfar from trivial code, contained NOT ONE SINGLE COMMENT before today,\nother than the content-free file header and a commented-out (visibly\nunsafe, too) debugging printing function. This is a long way south\nof minimally acceptable, in my book.\n\nAnyway, I concur that Nikita's two patches are bug fixes, so I pushed\nthem. Nonetheless, he *did* hijack this thread, so in hopes of restoring\nattention to the original topic, here's a rebased version of the original\npatch.\n\nMy main complaint about it remains the same, that it changes a\ndisturbingly large number of existing regression-test results,\nsuggesting that there's not a meeting of the minds about what\nthis logic is supposed to do. Maybe it's okay or maybe it's\nnot, but who's going to decide?\n\nAlso, now that I've looked at it a bit more, I'd be inclined to\nstrip out the parts of the patch that remove setting up the\nLQUERY_HASNOT flag. 
Even if we're not using that right now,\nwe might want it again someday, and we're not saving much of\nanything by introducing a minor on-disk incompatibility.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 28 Mar 2020 19:26:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "I wrote:\n> My main complaint about it remains the same, that it changes a\n> disturbingly large number of existing regression-test results,\n> suggesting that there's not a meeting of the minds about what\n> this logic is supposed to do. Maybe it's okay or maybe it's\n> not, but who's going to decide?\n\nWell ... *somebody's* got to decide, and since Oleg and Teodor aren't\nstepping up, I took it on myself to study this more closely.\n\nIt seems to me that indeed the existing behavior is broken: given\nwhat the documentation has to say, it's really hard to argue that\nan lquery like \"*.!foo.*\" means something other than \"match any\nltree that has at least one label that is not 'foo'\". But the\nexisting code produces\n\nregression=# select 'bar.foo.baz'::ltree ~ '*.!foo.*'::lquery;\n ?column? \n----------\n f\n(1 row)\n\nI agree that's just plain wrong, and so are all the regression\ntest cases that this patch is changing the results of.\n\nHowever, I think there is a valid use-case that the existing\ncode is trying to solve: how can you say \"match any ltree in\nwhich no label is 'foo'\"? That is the effective behavior right\nnow of a pattern like this, and it seems useful. So if we change\nthis, we ought to provide some other way to get that result.\n\nWhat I propose we do about that is allow lquery quantifiers to\nbe attached to regular items as well as star items, so that the\nneed is met by saying this:\n\nregression=# select 'bar.foo.baz'::ltree ~ '!foo{,}'::lquery;\n ?column? 
\n----------\n f\n(1 row)\n\nregression=# select 'bar.fool.baz'::ltree ~ '!foo{,}'::lquery;\n ?column? \n----------\n t\n(1 row)\n\nAlso, once we do that, it's possible to treat star and non-star items\nbasically alike in checkCond, with checkLevel being the place that\naccounts for them being different. This results in logic that's far\nsimpler and much more nearly like the way that LIKE patterns are\nimplemented, which seems like a good thing to me.\n\nHence, attached are two revised patches that attack the problem\nthis way. The first one is somewhat unrelated to the original\npoint --- it's trying to clean up the error messages in ltree_in\nand lquery_in so that they are more consistent and agree with\nthe terminology used in the documentation. (Notably, the term\n\"level\" is used nowhere in the ltree docs, except in the legacy\nfunction name nlevel().) However its movement of the check for\nhigh < low is needed to make that cover the case of a bogus non-star\nquantifier in patch 0002. These also depend on the cosmetic\npatches I just committed, so you need HEAD as of today to get\nthem to apply.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Mar 2020 20:53:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "I wrote:\n> Hence, attached are two revised patches that attack the problem\n> this way. The first one is somewhat unrelated to the original\n> point --- it's trying to clean up the error messages in ltree_in\n> and lquery_in so that they are more consistent and agree with\n> the terminology used in the documentation. 
(Notably, the term\n> \"level\" is used nowhere in the ltree docs, except in the legacy\n> function name nlevel().)\n\nOne thing I changed in that patch was to change the syntax error\nreports to say \"at character %d\" not \"in position %d\", because\nI thought the latter was pretty confusing --- it's not obvious\nwhether it's counting characters or labels or what. However,\nI now notice that what the code is providing is a zero-based\ncharacter index, which is out of line with our practice\nelsewhere: core parser errors are reported using 1-based indexes.\nIf we reword these messages then we should adopt that practice too.\nHence, new patch versions that do it like that. (0002 is unchanged.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 30 Mar 2020 14:00:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 30.03.2020 21:00, Tom Lane wrote:\n\n> Hence, new patch versions that do it like that. (0002 is unchanged.)\n\nI tried to simplify a bit loops in checkCond() by merging two of them into\none with an explicit exit condition. Also I added return statement after\nthis loop, so it's now clear that we can't fall into next \"while\" loop.\n\nThe rest code in 0001 and 0002 is unchanged.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 31 Mar 2020 00:46:12 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> On 30.03.2020 21:00, Tom Lane wrote:\n>> Hence, new patch versions that do it like that. (0002 is unchanged.)\n\n> I tried to simplify a bit loops in checkCond() by merging two of them into\n> one with an explicit exit condition. 
Also I added return statement after\n> this loop, so it's now clear that we can't fall into next \"while\" loop.\n\nI dunno, that doesn't really seem clearer to me (although some of it\nmight be that you expended no effort on making the comments match\nthe new code logic).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 17:59:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "I wrote:\n> I dunno, that doesn't really seem clearer to me (although some of it\n> might be that you expended no effort on making the comments match\n> the new code logic).\n\n... although looking closer, this formulation does have one very nice\nadvantage: for the typical non-star case with high = low = 1, the\nonly recursive call is a tail recursion, so it ought to consume less\nstack space than what I wrote.\n\nLet me see what I can do with the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 18:12:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 31.03.2020 1:12, Tom Lane wrote:\n\n> I wrote:\n>> I dunno, that doesn't really seem clearer to me (although some of it\n>> might be that you expended no effort on making the comments match\n>> the new code logic).\n> ... 
although looking closer, this formulation does have one very nice\n> advantage: for the typical non-star case with high = low = 1, the\n> only recursive call is a tail recursion, so it ought to consume less\n> stack space than what I wrote.\n\nAnd we even can simply transform this tail call into a loop:\n\n-if (tlen > 0 && qlen > 0)\n+while (tlen > 0 && qlen > 0)\n\n> Let me see what I can do with the comments.\n\nThanks.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 31 Mar 2020 01:22:19 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> And we even can simply transform this tail call into a loop:\n\n> -if (tlen > 0 && qlen > 0)\n> +while (tlen > 0 && qlen > 0)\n\nYeah, the same occurred to me ... 
and then we can drop the other loop too.\nI've got it down to this now:\n\n/*\n * Try to match an lquery (of qlen items) to an ltree (of tlen items)\n */\nstatic bool\ncheckCond(lquery_level *curq, int qlen,\n ltree_level *curt, int tlen)\n{\n /* Since this function recurses, it could be driven to stack overflow */\n check_stack_depth();\n\n /* Loop while we have query items to consider */\n while (qlen > 0)\n {\n int low,\n high;\n lquery_level *nextq;\n\n /*\n * Get min and max repetition counts for this query item, dealing with\n * the backwards-compatibility hack that the low/high fields aren't\n * meaningful for non-'*' items unless LQL_COUNT is set.\n */\n if ((curq->flag & LQL_COUNT) || curq->numvar == 0)\n low = curq->low, high = curq->high;\n else\n low = high = 1;\n\n /*\n * We may limit \"high\" to the remaining text length; this avoids\n * separate tests below.\n */\n if (high > tlen)\n high = tlen;\n\n /* Fail if a match of required number of items is impossible */\n if (high < low)\n return false;\n\n /*\n * Recursively check the rest of the pattern against each possible\n * start point following some of this item's match(es).\n */\n nextq = LQL_NEXT(curq);\n qlen--;\n\n for (int matchcnt = 0; matchcnt < high; matchcnt++)\n {\n /*\n * If we've consumed an acceptable number of matches of this item,\n * and the rest of the pattern matches beginning here, we're good.\n */\n if (matchcnt >= low && checkCond(nextq, qlen, curt, tlen))\n return true;\n\n /*\n * Otherwise, try to match one more text item to this query item.\n */\n if (!checkLevel(curq, curt))\n return false;\n\n curt = LEVEL_NEXT(curt);\n tlen--;\n }\n\n /*\n * Once we've consumed \"high\" matches, we can succeed only if the rest\n * of the pattern matches beginning here. 
Loop around (if you prefer,\n * think of this as tail recursion).\n */\n curq = nextq;\n }\n\n /*\n * Once we're out of query items, we match only if there's no remaining\n * text either.\n */\n return (tlen == 0);\n}\n\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 30 Mar 2020 18:35:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "On 31.03.2020 1:35, Tom Lane wrote:\n> Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n>> And we even can simply transform this tail call into a loop:\n>>\n>> -if (tlen > 0 && qlen > 0)\n>> +while (tlen > 0 && qlen > 0)\n> Yeah, the same occurred to me ... and then we can drop the other loop too.\n\nI think now it looks as simple as the whole algorithm is.\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 31 Mar 2020 01:47:05 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> I think now it looks as simple as the whole algorithm is.\n\nYeah, I think we've gotten checkCond to the point of \"there's no\nlonger anything to take away\".\n\nI've marked this RFC, and will push tomorrow unless somebody wants\nto object to the loss of backwards compatibility.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 18:58:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" }, { "msg_contents": "I wrote:\n> I've marked this RFC, and will push tomorrow unless somebody wants\n> to object to the loss of backwards compatibility.\n\nAnd done. I noticed in some final testing that it's possible to\nmake this code take a long time by forcing it to backtrack a lot:\n\nregression=# SELECT (('1' || repeat('.1', 65534))::ltree) ~ '*.*.x';\n ?column? \n----------\n f\n(1 row)\n\nTime: 54015.421 ms (00:54.015)\n\nso I threw in a CHECK_FOR_INTERRUPTS(). 
Maybe it'd be worth trying\nto optimize such cases, but I'm not sure that it'd ever matter for\nreal-world cases with reasonable-size label strings.\n\nThe old implementation seems to handle that particular case well,\nevidently because it more-or-less folds adjacent stars together.\nHowever, before anyone starts complaining about regressions, they\nshould note that it's really easy to get the old code to fail\nvia stack overflow:\n\nregression=# SELECT (('1' || repeat('.1', 65534))::ltree) ~ '*.!1.*';\nERROR: stack depth limit exceeded\n\n(That's as of five minutes ago, before that it dumped core.)\nSo I don't feel bad about the tradeoff. At least now we have\nsimple, visibly correct code that could serve as a starting\npoint for optimization if anyone feels the need to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 11:47:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for BUG #3720: wrong results at using ltree" } ]
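Editor's note: for readers who want to experiment with the quantifier-matching algorithm this thread converged on, here is a rough Python sketch of the final checkCond() logic. The (low, high, predicate) representation of lquery items and the function name are inventions for illustration only; they are not part of the ltree code.

```python
def check_cond(query, labels):
    """Match a list of (low, high, predicate) query items against a list
    of path labels, mirroring the recursive shape of checkCond()."""
    while query:
        low, high, pred = query[0]
        high = min(high, len(labels))  # can't match more labels than remain
        if high < low:
            return False               # matching `low` repetitions is impossible
        query = query[1:]
        for matchcnt in range(high):
            # enough repetitions consumed and the rest matches here -> done
            if matchcnt >= low and check_cond(query, labels):
                return True
            # otherwise try to consume one more label with this item
            if not pred(labels[0]):
                return False
            labels = labels[1:]
        # consumed `high` repetitions; loop around (the "tail recursion")
    return not labels  # out of query items: succeed only if no labels remain


# The thread's example: 'bar.fool.baz' ~ '!foo{,}' matches, because every
# label can be consumed by the zero-or-more "not foo" item.
not_foo = (0, 10**9, lambda s: s != "foo")
print(check_cond([not_foo], ["bar", "fool", "baz"]))  # True
print(check_cond([not_foo], ["foo", "bar"]))          # False
```

Like the committed C version, this backtracks over repetition counts, so pathological patterns with many stars can still take a long time — which is exactly why the CHECK_FOR_INTERRUPTS() call was added above.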
[ { "msg_contents": "It's possible I'm misreading this, but I'm thinking that commits\ncc5f8136 and ab9e0e71 added a few tranches which we need to add to the docs.\n\nsession_dsa, session_record_table, session_typmod_table, and\nshared_tuplestore\n\nhttps://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services", "msg_date": "Tue, 5 Mar 2019 15:23:03 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": true, "msg_subject": "few more wait events to add to docs" }, { "msg_contents": "On 2019-Mar-05, Jeremy Schneider wrote:\n\n> It's possible I'm misreading this, but I'm thinking that commits\n> cc5f8136 and ab9e0e71 added a few tranches which we need to add to the docs.\n> \n> session_dsa, session_record_table, session_typmod_table, and\n> shared_tuplestore\n> \n> https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE\n\nhttps://postgr.es/m/CAB7nPqSU1=-3fQVZ+ncgm5VmRO4LSzjgQSmoYgwiZs2HvpyKBA@mail.gmail.com\n:-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 5 Mar 2019 22:18:20 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: few more wait events to add to docs" }, { "msg_contents": "On Wed, Mar 6, 2019 at 2:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Mar-05, Jeremy Schneider wrote:\n> > It's possible I'm misreading this, but I'm thinking that commits\n> > 
cc5f8136 and ab9e0e71 added a few tranches which we need to add to the docs.\n> >\n> > session_dsa, session_record_table, session_typmod_table, and\n> > shared_tuplestore\n> >\n> > https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE\n>\n> https://postgr.es/m/CAB7nPqSU1=-3fQVZ+ncgm5VmRO4LSzjgQSmoYgwiZs2HvpyKBA@mail.gmail.com\n\nHere's some missing documentation.\n\nHmm, yeah. I wish these were alphabetised, I wish there was an\nautomated warning about this, I wish these tranches were declared a\nbetter way that by adding code in RegisterLWLockTranches().\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Wed, 6 Mar 2019 14:21:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: few more wait events to add to docs" }, { "msg_contents": "On Wed, Mar 06, 2019 at 02:21:15PM +1300, Thomas Munro wrote:\n> Here's some missing documentation.\n\nThe addition looks fine to me.\n\n> Hmm, yeah. I wish these were alphabetised, I wish there was an\n> automated warning about this, I wish these tranches were declared a\n> better way that by adding code in RegisterLWLockTranches().\n\nWhy not using this occasion to reorganize the LWLock and Lock sections\nso as their entries are fully alphabetized then? I have made an\neffort in this direction in 5ef037c for all the sections except these\ntwo. And honestly getting all these organized would really help\ndocumentation readers.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 11:49:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: few more wait events to add to docs" }, { "msg_contents": "On 3/5/19 18:49, Michael Paquier wrote:\n> Why not using this occasion to reorganize the LWLock and Lock sections\n> so as their entries are fully alphabetized then? I have made an\n> effort in this direction in 5ef037c for all the sections except these\n> two. 
And honestly getting all these organized would really help\ndocumentation readers.\n\nRight now, the LWLock documentation implicitly follows a predictable\npattern, so it's not really that hard to check for completeness. I'm\nnot sure to what extent you want to alphabetize, but I think the current\nstructure isn't bad:\n\nLWLock order in documentation:\n1) CamelCase LWLocks: individually named - see lwlocknames.txt\n2) lowercase LWLocks: tranches\n2a) SLRUs - see SimpleLruInit() callers on doxygen\n2b) Shared Buffer (buffer_content, buffer_io)\n2c) Individually Named - see RegisterLWLockTranches() in lwlock.c\n\n[see attached screenshot image... spoiler alert, it'll be a slide in my\ntalk at PgConf NY]\n\nIf anything, I think we might just want to add comments to\nRegisterLWLockTranches() and lwlocknames.txt with links to the doc file\nthat needs to be updated whenever a new tranche is added.\n\nNot sure the best place for a comment on SLRUs (is SimpleLruInit a good\nplace?)... but I'm kindof hopeful that we're not adding many more new\nSLRUs anyway and that people would bias toward leveraging the buffer\ncache when possible.\n\nI'd rather make this pattern explicit in the docs than lose it... it\ngreatly helps me understand what some particular wait event is, and\nwhere I need to look in the code to find it.\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services", "msg_date": "Wed, 6 Mar 2019 11:08:12 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": true, "msg_subject": "Re: few more wait events to add to docs" }, { "msg_contents": "On Wed, Mar 06, 2019 at 11:08:12AM -0800, Jeremy Schneider wrote:\n> LWLock order in documentation:\n> 1) CamelCase LWLocks: individually named - see lwlocknames.txt\n> 2) lowercase LWLocks: tranches\n> 2a) SLRUs - see SimpleLruInit() callers on doxygen\n> 2b) Shared Buffer (buffer_content, buffer_io)\n> 2c) Individually Named - see RegisterLWLockTranches() in lwlock.c\n\nHm, OK. 
Perhaps I lack some user insight on the matter. Thanks for\nthe feedback! Still there are some areas where we could make the\nmicro-ordering better. For example replication_slot_io and buffer_io\nare I/O specific still they get in the middle of the page, still we\nwant buffer_content close by.\n\nOne thing that I think we could do is reorganize at least\nalphabetically the section for \"Lock\". Each item does not really rely\non others. What do you think?\n\n> If anything, I think we might just want to add comments to\n> RegisterLWLockTranches() and lwlocknames.txt with links to the doc file\n> that needs to be updated whenever a new tranche is added.\n\nYes, that would surely help.\n\n> Not sure the best place for a comment on SLRUs (is SimpleLruInit a good\n> place?)... but I'm kindof hopeful that we're not adding many more new\n> SLRUs anyway and that people would bias toward leveraging the buffer\n> cache when possible.\n\nA reference at the top of SimpleLruInit() sounds good to me.\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 11:25:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: few more wait events to add to docs" }, { "msg_contents": "On Thu, Mar 07, 2019 at 11:25:10AM +0900, Michael Paquier wrote:\n> On Wed, Mar 06, 2019 at 11:08:12AM -0800, Jeremy Schneider wrote:\n>> If anything, I think we might just want to add comments to\n>> RegisterLWLockTranches() and lwlocknames.txt with links to the doc file\n>> that needs to be updated whenever a new tranche is added.\n> \n> Yes, that would surely help.\n> \n>> Not sure the best place for a comment on SLRUs (is SimpleLruInit a good\n>> place?)... 
but I'm kindof hopeful that we're not adding many more new\n>> SLRUs anyway and that people would bias toward leveraging the buffer\n>> cache when possible.\n> \n> A reference at the top of SimpleLruInit() sounds good to me.\n\nThinking more about that, a comment at the top of SimpleLruInit() and\nRegisterLWLockTranches() are both good things.\n\nSo please find attached a patch which does the following things to\naddress this thread:\n- Reorder the list of events in the Lock section in alphabetical order\n(not LWLock!).\n- Add the missing event entries, which is what Thomas has provided.\n- Add more documentation to mention the doc updates.\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 22:59:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: few more wait events to add to docs" } ]
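Editor's note: the naming convention Jeremy lays out above — individually named LWLocks in CamelCase (from lwlocknames.txt), tranche names in lowercase (registered in RegisterLWLockTranches()) — can be sketched mechanically. This toy classifier is an invention for illustration only, not PostgreSQL code:

```python
def lwlock_wait_event_kind(name):
    """Classify an LWLock wait event name by the documentation convention
    described in the thread above: CamelCase names are individually named
    LWLocks, all-lowercase names are tranches."""
    return "individual" if name[:1].isupper() else "tranche"


# The four wait events reported missing in this thread are all tranches:
for name in ("session_dsa", "session_record_table",
             "session_typmod_table", "shared_tuplestore"):
    print(name, "->", lwlock_wait_event_kind(name))
```

Checking a name this way tells a reader whether to look in lwlocknames.txt or in RegisterLWLockTranches() for its definition.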
[ { "msg_contents": "Way back in [1] I proposed that we allow NOT IN subqueries to be\nconverted into an anti-join where the subquery cannot return any NULL\nvalues. As Tom pointed out to me, I had neglected to consider that\nthe outer side producing NULLs can cause the anti-join plan to produce\nincorrect results. The difference is that a NOT IN where the subquery\nreturns no results filters nothing, otherwise it filters the nulls,\nplus the records that exist in the subquery.\n\nMore recently over on [2], Jim and Zheng have re-proposed making\nimprovements in this area. Their ideas are slightly different from\nmine as they propose to add an OR .. IS NULL clause to the join\ncondition to handle the outer side being NULL with empty subquery\nproblem. Before Jim and Zheng's patch arrived I managed to fix the\nknown problems with my 4-year-old patch thinking it would have been\nwelcome, but it seems that's not the case, perhaps due to the\ndiffering ideas we have on how this should work. At that time I didn't\nthink the other patch actually existed yet... oops\n\nAnyway, I don't really want to drop my patch as I believe what it does\nis correct and there's debate on the other thread about how good an\nidea adding these OR clauses to the join quals is... (forces nested\nloop plan (see [3])), but it appears Jim and Zheng are fairly set on\nthat idea. Hence...\n\nI'm moving my patch here, so it can be debated without interfering\nwith the other work that's going on in this area. There has also been\nsome review of my patch in [4], and of course, originally in [1].\n\nThe background is really.\n\n1. Seems fine to do this transformation when there are no nulls.\n2. We don't want to cost anything to decide on to do the\ntransformation or not, i.e do it regardless, in all possible cases\nwhere it's valid to do so. We already do that for NOT EXISTS, no\napparent reason to think this case is any different.\n3. 
Need to consider what planner overhead there is from doing this and\nfailing to do the conversion due lack of evidence for no NULLs.\n\nI've not done #3, at least not with the latest patch.\n\nThere's already a CF entry [5] for this patch, although its targeting PG13.\n\nThe latest patch is attached.\n\n[1] https://www.postgresql.org/message-id/CAApHDvqRB-iFBy68%3DdCgqS46aRep7AuN2pou4KTwL8kX9YOcTQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/1550706289606-0.post@n3.nabble.com\n[3] https://www.postgresql.org/message-id/CAKJS1f_ZwXtzPz6wDpBXgAVYuxforsqpc6hBw05Y6aPGcOONfA%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/18203.1551543939%40sss.pgh.pa.us\n[5] https://commitfest.postgresql.org/22/2020/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 6 Mar 2019 12:54:49 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Converting NOT IN to anti-joins during planning" }, { "msg_contents": "Actually, we're working hard to integrate the two approaches. I haven't had\ntime since I returned to review your patch, but I understand that you were\nchecking for strict predicates as part of the nullness checking criteria,\nand we definitely must have that. Zheng tells me that he has combined your\npatch with ours, but before we put out a new patch, we're trying to figure\nout how to preserve the existing NOT IN execution plan in the case where the\nmaterialized subplan fits in memory. This (good) plan is effectively an\nin-memory hash anti-join.\n\nThis is tricky to do because the NOT IN Subplan to anti-join transformation\ncurrently happens early in the planning process, whereas the decision to\nmaterialize is made much later, when the best path is being converted into a\nPlan.\n\nZheng is exploring whether we can defer doing the transformation until Plan\ngeneration time. 
If we can do that, then we can generate the\nhighest-performing plan in all (known) cases.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n", "msg_date": "Tue, 5 Mar 2019 20:37:45 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "Hi Jim,\n\nThanks for replying here.\n\nOn Wed, 6 Mar 2019 at 16:37, Jim Finnerty <jfinnert@amazon.com> wrote:\n>\n> Actually, we're working hard to integrate the two approaches. I haven't had\n> time since I returned to review your patch, but I understand that you were\n> checking for strict predicates as part of the nullness checking criteria,\n> and we definitely must have that. Zheng tells me that he has combined your\n> patch with ours, but before we put out a new patch, we're trying to figure\n> out how to preserve the existing NOT IN execution plan in the case where the\n> materialized subplan fits in memory. This (good) plan is effectively an\n> in-memory hash anti-join.\n>\n> This is tricky to do because the NOT IN Subplan to anti-join transformation\n> currently happens early in the planning process, whereas the decision to\n> materialize is made much later, when the best path is being converted into a\n> Plan.\n\nI guess you're still going with the OR ... IS NULL in your patch then?\notherwise, you'd likely find that the transformation (when NULLs are\nnot possible) is always a win since it'll allow hash anti-joins. (see\n#2 in the original email on this thread) FWIW I mentioned in [1] and\nTom confirmed in [2] that we both think hacking the join condition to\nadd an OR .. IS NULL is a bad idea. I guess you're not deterred by\nthat?\n\nI'd say your next best move is, over on the other thread, to put up\nyour argument against what Tom and I mentioned, then detail out what\nexactly you're planning. 
Likely this will save time. I personally\ndon't think that ignoring this part is going to allow you to progress\nyour patch too much further in PostgreSQL. Consensus about how $thing\nworks is something that's needed before the $thing can ever be\ncommitted. Sometimes lack of objection can count, but an unaddressed\nobjection is not consensus. Not trying to make this hard, just trying\nto explain the process.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8q4S%2B5Z7WSRDWJd__SwqMr12JdWKXTDo35ptzneRvZnw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/5420.1551487529%40sss.pgh.pa.us\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 17:11:26 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Wed, 6 Mar 2019 at 12:54, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> The latest patch is attached.\n\nRebased version after pgindent run.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Fri, 24 May 2019 23:46:20 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> wrote:\n\n> On Wed, 6 Mar 2019 at 12:54, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> > The latest patch is attached.\n> \n> Rebased version after pgindent run.\n\nI've spent some time looking into this.\n\nOne problem I see is that SubLink can be in the JOIN/ON clause and thus it's\nnot necessarily at the top of the join tree. 
Consider this example:\n\nCREATE TABLE a(i int);\nCREATE TABLE b(j int);\nCREATE TABLE c(k int NOT NULL);\nCREATE TABLE d(l int);\n\n SELECT *\n FROM\n a\n JOIN b ON b.j NOT IN\n ( SELECT\n c.k\n FROM\n c)\n JOIN d ON b.j = d.l;\n\nHere the b.j=d.l condition makes the planner think that the \"b.j NOT IN\n(SELECT c.k FROM c)\" sublink cannot receive NULL values of b.j, but that's not\ntrue: it's possible that ((a JOIN b) ANTI JOIN c) is evaluated before \"d\" is\njoined to the other tables, so the NULL values of b.j are not filtered out\nearly enough.\n\nI thought it would help if find_innerjoined_rels(), when called from\nexpressions_are_not_nullable(), only collected rels (and quals) from the\nsubtree below the sublink, but that does not seem to help:\n\nCREATE TABLE e(m int);\n\n SELECT *\n FROM\n a\n JOIN e ON a.i = e.m\n JOIN b ON a.i NOT IN\n ( SELECT\n c.k\n FROM\n c)\n JOIN d ON COALESCE(a.i, 0) = COALESCE(d.l, 0);\n\nHere it might seem that the a.i=e.m condition eliminates NULL values from the\nANTI JOIN input, but it's probably hard to prove at query preparation time\nthat\n\n (((a JOIN e) JOIN b) ANTI JOIN c) JOIN d\n\nwon't eventually be optimized to\n\n (((a JOIN d) JOIN b) ANTI JOIN c) JOIN e\n\nSince the join condition between \"a\" and \"d\" is not strict in this case, the\nANTI JOIN will receive the NULL values of a.i.\n\nIt seems tricky, I've got no idea of an alternative approach right now.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 27 May 2019 10:43:40 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> One problem I see is that SubLink can be in the JOIN/ON clause and thus it's\n> not necessarily at the top of the join tree. 
Consider this example:\n> \n> CREATE TABLE a(i int);\n> CREATE TABLE b(j int);\n> CREATE TABLE c(k int NOT NULL);\n> CREATE TABLE d(l int);\n> \n> SELECT *\n> FROM\n> a\n> JOIN b ON b.j NOT IN\n> ( SELECT\n> c.k\n> FROM\n> c)\n> JOIN d ON b.j = d.l;\n> \n> Here the b.j=d.l condition makes the planner think that the \"b.j NOT IN\n> (SELECT c.k FROM c)\" sublink cannot receive NULL values of b.j, but that's not\n> true: it's possible that ((a JOIN b) ANTI JOIN c) is evaluated before \"d\" is\n> joined to the other tables, so the NULL values of b.j are not filtered out\n> early enough.\n> \n> I thought it would help if find_innerjoined_rels(), when called from\n> expressions_are_not_nullable(), only collected rels (and quals) from the\n> subtree below the sublink, but that does not seem to help:\n> \n> CREATE TABLE e(m int);\n> \n> SELECT *\n> FROM\n> a\n> JOIN e ON a.i = e.m\n> JOIN b ON a.i NOT IN\n> ( SELECT\n> c.k\n> FROM\n> c)\n> JOIN d ON COALESCE(a.i, 0) = COALESCE(d.l, 0);\n> \n> Here it might seem that the a.i=e.m condition eliminates NULL values from the\n> ANTI JOIN input, but it's probably hard to prove at query preparation time\n> that\n> \n> (((a JOIN e) JOIN b) ANTI JOIN c) JOIN d\n> \n> won't eventually be optimized to\n> \n> (((a JOIN d) JOIN b) ANTI JOIN c) JOIN e\n> \n> Since the join condition between \"a\" and \"d\" is not strict in this case, the\n> ANTI JOIN will receive the NULL values of a.i.\n> \n> It seems tricky, I've got no idea of an alternative approach right now.\n\nJust one idea: perhaps we could use something like PlaceHolderVar to enforce\nevaluation of the inner join expression (\"a.i=e.m\" in the example above) at\ncertain level of the join tree (in particular, below the ANTI JOIN) -\nsomething like make_outerjoininfo() does here:\n\n\t/* Else, prevent join from being formed before we eval the PHV */\n\tmin_righthand = bms_add_members(min_righthand, phinfo->ph_eval_at);\n\nUnlike the typical use of PHV, we would not have to check 
whether the\nexpression is not evaluated too low in the tree because the quals collected by\nfind_innerjoined_rels() should not reference nullable side of any outer join.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 27 May 2019 16:22:29 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Wed, 6 Mar 2019 at 04:11, David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> Hi Jim,\n>\n> Thanks for replying here.\n>\n> On Wed, 6 Mar 2019 at 16:37, Jim Finnerty <jfinnert@amazon.com> wrote:\n> >\n> > Actually, we're working hard to integrate the two approaches. I haven't\n> had\n> > time since I returned to review your patch, but I understand that you\n> were\n> > checking for strict predicates as part of the nullness checking criteria,\n> > and we definitely must have that. Zheng tells me that he has combined\n> your\n> > patch with ours, but before we put out a new patch, we're trying to\n> figure\n> > out how to preserve the existing NOT IN execution plan in the case where\n> the\n> > materialized subplan fits in memory. This (good) plan is effectively an\n> > in-memory hash anti-join.\n> >\n> > This is tricky to do because the NOT IN Subplan to anti-join\n> transformation\n> > currently happens early in the planning process, whereas the decision to\n> > materialize is made much later, when the best path is being converted\n> into a\n> > Plan.\n>\n> I guess you're still going with the OR ... IS NULL in your patch then?\n> otherwise, you'd likely find that the transformation (when NULLs are\n> not possible) is always a win since it'll allow hash anti-joins. (see\n> #2 in the original email on this thread) FWIW I mentioned in [1] and\n> Tom confirmed in [2] that we both think hacking the join condition to\n> add an OR .. IS NULL is a bad idea. 
I guess you're not deterred by\n> that?\n>\n\nSurely we want both?\n\n1. Transform when we can\n2. Else apply some other approach if the cost can be reduced by doing it\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 14 Jun 2019 09:40:53 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Fri, 14 Jun 2019 at 20:41, Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Wed, 6 Mar 2019 at 04:11, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>>\n>> Hi Jim,\n>>\n>> Thanks for replying here.\n>>\n>> On Wed, 6 Mar 2019 at 16:37, Jim Finnerty <jfinnert@amazon.com> wrote:\n>> >\n>> > Actually, we're working hard to integrate the two approaches. I haven't had\n>> > time since I returned to review your patch, but I understand that you were\n>> > checking for strict predicates as part of the nullness checking criteria,\n>> > and we definitely must have that. Zheng tells me that he has combined your\n>> > patch with ours, but before we put out a new patch, we're trying to figure\n>> > out how to preserve the existing NOT IN execution plan in the case where the\n>> > materialized subplan fits in memory. This (good) plan is effectively an\n>> > in-memory hash anti-join.\n>> >\n>> > This is tricky to do because the NOT IN Subplan to anti-join transformation\n>> > currently happens early in the planning process, whereas the decision to\n>> > materialize is made much later, when the best path is being converted into a\n>> > Plan.\n>>\n>> I guess you're still going with the OR ... IS NULL in your patch then?\n>> otherwise, you'd likely find that the transformation (when NULLs are\n>> not possible) is always a win since it'll allow hash anti-joins. (see\n>> #2 in the original email on this thread) FWIW I mentioned in [1] and\n>> Tom confirmed in [2] that we both think hacking the join condition to\n>> add an OR .. IS NULL is a bad idea. 
I guess you're not deterred by\n>> that?\n>\n>\n> Surely we want both?\n>\n> 1. Transform when we can\n> 2. Else apply some other approach if the cost can be reduced by doing it\n\nMaybe. If the scope for the conversion is reduced to only add the OR\n.. IS NULL join clause when the subplan could not be hashed then it's\nmaybe less likely to cause performance regressions. Remember that this\nforces the planner to use a nested loop join since no other join\nalgorithms support OR clauses. I think Jim and Zheng have now changed\ntheir patch to do that. If we can perform a parameterised nested loop\njoin then that has a good chance of being better than scanning the\nsubquery multiple times, however, if there's no index to do a\nparameterized nested loop, then we need to do a normal nested loop\nwhich will perform poorly, but so will the non-hashed subplan...\n\n# create table t1 (a int);\nCREATE TABLE\n# create table t2 (a int);\nCREATE TABLE\n# set work_mem = '64kB';\nSET\n# insert into t1 select generate_Series(1,10000);\nINSERT 0 10000\n# insert into t2 select generate_Series(1,10000);\nINSERT 0 10000\n# explain analyze select count(*) from t1 where a not in(select a from t2);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1668739.50..1668739.51 rows=1 width=8) (actual\ntime=7079.077..7079.077 rows=1 loops=1)\n -> Seq Scan on t1 (cost=0.00..1668725.16 rows=5738 width=0)\n(actual time=7079.072..7079.072 rows=0 loops=1)\n Filter: (NOT (SubPlan 1))\n Rows Removed by Filter: 10000\n SubPlan 1\n -> Materialize (cost=0.00..262.12 rows=11475 width=4)\n(actual time=0.004..0.397 rows=5000 loops=10000)\n -> Seq Scan on t2 (cost=0.00..159.75 rows=11475\nwidth=4) (actual time=0.019..4.921 rows=10000 loops=1)\n Planning Time: 0.348 ms\n Execution Time: 7079.309 ms\n(9 rows)\n\n# explain analyze select count(*) from t1 where not exists(select 1\nfrom t2 where t1.a = t2.a or 
t2.a is null);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1250873.25..1250873.26 rows=1 width=8) (actual\ntime=7263.980..7263.980 rows=1 loops=1)\n -> Nested Loop Anti Join (cost=0.00..1250858.97 rows=5709\nwidth=0) (actual time=7263.976..7263.976 rows=0 loops=1)\n Join Filter: ((t1.a = t2.a) OR (t2.a IS NULL))\n Rows Removed by Join Filter: 49995000\n -> Seq Scan on t1 (cost=0.00..159.75 rows=11475 width=4)\n(actual time=0.013..2.350 rows=10000 loops=1)\n -> Materialize (cost=0.00..262.12 rows=11475 width=4)\n(actual time=0.004..0.396 rows=5000 loops=10000)\n -> Seq Scan on t2 (cost=0.00..159.75 rows=11475\nwidth=4) (actual time=0.007..4.075 rows=10000 loops=1)\n Planning Time: 0.086 ms\n Execution Time: 7264.141 ms\n(9 rows)\n\nWhen the index exists the transformation is certainly much better.\n\n# create index on t2(a);\nCREATE INDEX\n# explain analyze select count(*) from t1 where not exists(select 1\nfrom t2 where t1.a = t2.a or t2.a is null);\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=111342.50..111342.51 rows=1 width=8) (actual\ntime=29.580..29.581 rows=1 loops=1)\n -> Nested Loop Anti Join (cost=7.10..111342.50 rows=1 width=0)\n(actual time=29.574..29.622 rows=0 loops=1)\n -> Seq Scan on t1 (cost=0.00..145.00 rows=10000 width=4)\n(actual time=0.010..0.883 rows=10000 loops=1)\n -> Bitmap Heap Scan on t2 (cost=7.10..11.11 rows=1 width=4)\n(actual time=0.002..0.002 rows=1 loops=10000)\n Recheck Cond: ((t1.a = a) OR (a IS NULL))\n Heap Blocks: exact=10000\n -> BitmapOr (cost=7.10..7.10 rows=1 width=0) (actual\ntime=0.002..0.002 rows=0 loops=10000)\n -> Bitmap Index Scan on t2_a_idx\n(cost=0.00..0.30 rows=1 width=0) (actual time=0.001..0.001 rows=1\nloops=10000)\n Index Cond: (a = t1.a)\n -> Bitmap Index Scan on 
t2_a_idx\n(cost=0.00..4.29 rows=1 width=0) (actual time=0.001..0.001 rows=0\nloops=10000)\n Index Cond: (a IS NULL)\n Planning Time: 0.311 ms\n Execution Time: 29.670 ms\n(13 rows)\n\nThe big \"IF\" here is if we can calculate the size of the subplan to\nknow if it'll be hashed or not at the point in planning where this\nconversion is done. I personally can't quite see how that'll work\nreliably without actually planning the subquery, which I really doubt\nis something we'd consider doing just for a cost estimate. Remember\nthe subquery may not just be a single relation scan, it could be a\ncomplex query containing many joins and UNION / GROUP BY / DISTINCT /\nHAVING clauses etc.\n\nHowever, if there turns out to be some good way to do that that I\ncan't see then I think that each patch should be separate so that they\ncan be accepted or rejected on their own merits. The problem, for now,\nis that the patches conflict with each other. I don't really want to\nbase mine on Jim and Zheng's patch, perhaps they feel the same about\nbasing theirs on mine.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 14 Jun 2019 21:50:40 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "----- \r\n The big \"IF\" here is if we can calculate the size of the subplan to\r\n know if it'll be hashed or not at the point in planning where this\r\n conversion is done. I personally can't quite see how that'll work\r\n reliably without actually planning the subquery, which I really doubt\r\n is something we'd consider doing just for a cost estimate. 
Remember\r\n the subquery may not just be a single relation scan, it could be a\r\n complex query containing many joins and UNION / GROUP BY / DISTINCT /\r\n HAVING clauses etc.\r\n-----\r\n\r\nIn our latest patch, we plan the subquery right before conversion, we only\r\nproceed with the ANTI JOIN conversion if subplan_is_hashable(subplan) is\r\nfalse. To avoid re-planning the subquery again in a later phase, I think we can\r\nkeep a pointer to the subplan in SubLink.\r\n\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL\r\n \r\n\r\nOn 6/14/19, 5:51 AM, \"David Rowley\" <david.rowley@2ndquadrant.com> wrote:\r\n\r\n On Fri, 14 Jun 2019 at 20:41, Simon Riggs <simon@2ndquadrant.com> wrote:\r\n >\r\n > On Wed, 6 Mar 2019 at 04:11, David Rowley <david.rowley@2ndquadrant.com> wrote:\r\n >>\r\n >> Hi Jim,\r\n >>\r\n >> Thanks for replying here.\r\n >>\r\n >> On Wed, 6 Mar 2019 at 16:37, Jim Finnerty <jfinnert@amazon.com> wrote:\r\n >> >\r\n >> > Actually, we're working hard to integrate the two approaches. I haven't had\r\n >> > time since I returned to review your patch, but I understand that you were\r\n >> > checking for strict predicates as part of the nullness checking criteria,\r\n >> > and we definitely must have that. Zheng tells me that he has combined your\r\n >> > patch with ours, but before we put out a new patch, we're trying to figure\r\n >> > out how to preserve the existing NOT IN execution plan in the case where the\r\n >> > materialized subplan fits in memory. This (good) plan is effectively an\r\n >> > in-memory hash anti-join.\r\n >> >\r\n >> > This is tricky to do because the NOT IN Subplan to anti-join transformation\r\n >> > currently happens early in the planning process, whereas the decision to\r\n >> > materialize is made much later, when the best path is being converted into a\r\n >> > Plan.\r\n >>\r\n >> I guess you're still going with the OR ... 
IS NULL in your patch then?\r\n >> otherwise, you'd likely find that the transformation (when NULLs are\r\n >> not possible) is always a win since it'll allow hash anti-joins. (see\r\n >> #2 in the original email on this thread) FWIW I mentioned in [1] and\r\n >> Tom confirmed in [2] that we both think hacking the join condition to\r\n >> add an OR .. IS NULL is a bad idea. I guess you're not deterred by\r\n >> that?\r\n >\r\n >\r\n > Surely we want both?\r\n >\r\n > 1. Transform when we can\r\n > 2. Else apply some other approach if the cost can be reduced by doing it\r\n \r\n Maybe. If the scope for the conversion is reduced to only add the OR\r\n .. IS NULL join clause when the subplan could not be hashed then it's\r\n maybe less likely to cause performance regressions. Remember that this\r\n forces the planner to use a nested loop join since no other join\r\n algorithms support OR clauses. I think Jim and Zheng have now changed\r\n their patch to do that. If we can perform a parameterised nested loop\r\n join then that has a good chance of being better than scanning the\r\n subquery multiple times, however, if there's no index to do a\r\n parameterized nested loop, then we need to do a normal nested loop\r\n which will perform poorly, but so will the non-hashed subplan...\r\n \r\n # create table t1 (a int);\r\n CREATE TABLE\r\n # create table t2 (a int);\r\n CREATE TABLE\r\n # set work_mem = '64kB';\r\n SET\r\n # insert into t1 select generate_Series(1,10000);\r\n INSERT 0 10000\r\n # insert into t2 select generate_Series(1,10000);\r\n INSERT 0 10000\r\n # explain analyze select count(*) from t1 where a not in(select a from t2);\r\n QUERY PLAN\r\n --------------------------------------------------------------------------------------------------------------------------\r\n Aggregate (cost=1668739.50..1668739.51 rows=1 width=8) (actual\r\n time=7079.077..7079.077 rows=1 loops=1)\r\n -> Seq Scan on t1 (cost=0.00..1668725.16 rows=5738 width=0)\r\n (actual 
time=7079.072..7079.072 rows=0 loops=1)\r\n Filter: (NOT (SubPlan 1))\r\n Rows Removed by Filter: 10000\r\n SubPlan 1\r\n -> Materialize (cost=0.00..262.12 rows=11475 width=4)\r\n (actual time=0.004..0.397 rows=5000 loops=10000)\r\n -> Seq Scan on t2 (cost=0.00..159.75 rows=11475\r\n width=4) (actual time=0.019..4.921 rows=10000 loops=1)\r\n Planning Time: 0.348 ms\r\n Execution Time: 7079.309 ms\r\n (9 rows)\r\n \r\n # explain analyze select count(*) from t1 where not exists(select 1\r\n from t2 where t1.a = t2.a or t2.a is null);\r\n QUERY PLAN\r\n ------------------------------------------------------------------------------------------------------------------------\r\n Aggregate (cost=1250873.25..1250873.26 rows=1 width=8) (actual\r\n time=7263.980..7263.980 rows=1 loops=1)\r\n -> Nested Loop Anti Join (cost=0.00..1250858.97 rows=5709\r\n width=0) (actual time=7263.976..7263.976 rows=0 loops=1)\r\n Join Filter: ((t1.a = t2.a) OR (t2.a IS NULL))\r\n Rows Removed by Join Filter: 49995000\r\n -> Seq Scan on t1 (cost=0.00..159.75 rows=11475 width=4)\r\n (actual time=0.013..2.350 rows=10000 loops=1)\r\n -> Materialize (cost=0.00..262.12 rows=11475 width=4)\r\n (actual time=0.004..0.396 rows=5000 loops=10000)\r\n -> Seq Scan on t2 (cost=0.00..159.75 rows=11475\r\n width=4) (actual time=0.007..4.075 rows=10000 loops=1)\r\n Planning Time: 0.086 ms\r\n Execution Time: 7264.141 ms\r\n (9 rows)\r\n \r\n When the index exists the transformation is certainly much better.\r\n \r\n # create index on t2(a);\r\n CREATE INDEX\r\n # explain analyze select count(*) from t1 where not exists(select 1\r\n from t2 where t1.a = t2.a or t2.a is null);\r\n QUERY\r\n PLAN\r\n ---------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregate (cost=111342.50..111342.51 rows=1 width=8) (actual\r\n time=29.580..29.581 rows=1 loops=1)\r\n -> Nested Loop Anti Join (cost=7.10..111342.50 rows=1 width=0)\r\n 
(actual time=29.574..29.622 rows=0 loops=1)\r\n -> Seq Scan on t1 (cost=0.00..145.00 rows=10000 width=4)\r\n (actual time=0.010..0.883 rows=10000 loops=1)\r\n -> Bitmap Heap Scan on t2 (cost=7.10..11.11 rows=1 width=4)\r\n (actual time=0.002..0.002 rows=1 loops=10000)\r\n Recheck Cond: ((t1.a = a) OR (a IS NULL))\r\n Heap Blocks: exact=10000\r\n -> BitmapOr (cost=7.10..7.10 rows=1 width=0) (actual\r\n time=0.002..0.002 rows=0 loops=10000)\r\n -> Bitmap Index Scan on t2_a_idx\r\n (cost=0.00..0.30 rows=1 width=0) (actual time=0.001..0.001 rows=1\r\n loops=10000)\r\n Index Cond: (a = t1.a)\r\n -> Bitmap Index Scan on t2_a_idx\r\n (cost=0.00..4.29 rows=1 width=0) (actual time=0.001..0.001 rows=0\r\n loops=10000)\r\n Index Cond: (a IS NULL)\r\n Planning Time: 0.311 ms\r\n Execution Time: 29.670 ms\r\n (13 rows)\r\n \r\n The big \"IF\" here is if we can calculate the size of the subplan to\r\n know if it'll be hashed or not at the point in planning where this\r\n conversion is done. I personally can't quite see how that'll work\r\n reliably without actually planning the subquery, which I really doubt\r\n is something we'd consider doing just for a cost estimate. Remember\r\n the subquery may not just be a single relation scan, it could be a\r\n complex query containing many joins and UNION / GROUP BY / DISTINCT /\r\n HAVING clauses etc.\r\n \r\n However, if there turns out to be some good way to do that that I\r\n can't see then I think that each patch should be separate so that they\r\n can be accepted or rejected on their own merits. The problem, for now,\r\n is that the patches conflict with each other. 
I don't really want to\r\n base mine on Jim and Zheng's patch, perhaps they feel the same about\r\n basing theirs on mine.\r\n \r\n -- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n \r\n \r\n \r\n\r\n", "msg_date": "Fri, 14 Jun 2019 16:35:21 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Mon, 27 May 2019 at 20:43, Antonin Houska <ah@cybertec.at> wrote:\n> I've spent some time looking into this.\n\nThank you for having a look at this.\n\n> One problem I see is that SubLink can be in the JOIN/ON clause and thus it's\n> not necessarily at the top of the join tree. Consider this example:\n>\n> CREATE TABLE a(i int);\n> CREATE TABLE b(j int);\n> CREATE TABLE c(k int NOT NULL);\n> CREATE TABLE d(l int);\n>\n> SELECT *\n> FROM\n> a\n> JOIN b ON b.j NOT IN\n> ( SELECT\n> c.k\n> FROM\n> c)\n> JOIN d ON b.j = d.l;\n\nhmm yeah. Since the proofs that are being used in\nexpressions_are_not_nullable assume the join has already taken place,\nthen we'll either need to not use the join conditions are proofs in\nthat case, or just disable the optimisation instead. I think it's\nfine to just disable the optimisation since it seem rather unlikely\nthat someone would write a join condition like that. 
Plus it seems\nquite a bit more complex to validate that the optimisation would even\nbe correct if NULLs were not possible.\n\nI've attached a patch which restricts the pullups to FromExpr quals.\nAnything below a JoinExpr disables the optimisation now.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Mon, 1 Jul 2019 23:23:34 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> wrote:\n\n> On Mon, 27 May 2019 at 20:43, Antonin Houska <ah@cybertec.at> wrote:\n> > I've spent some time looking into this.\n> \n> Thank you for having a look at this.\n> \n> > One problem I see is that SubLink can be in the JOIN/ON clause and thus it's\n> > not necessarily at the top of the join tree. Consider this example:\n> >\n> > CREATE TABLE a(i int);\n> > CREATE TABLE b(j int);\n> > CREATE TABLE c(k int NOT NULL);\n> > CREATE TABLE d(l int);\n> >\n> > SELECT *\n> > FROM\n> > a\n> > JOIN b ON b.j NOT IN\n> > ( SELECT\n> > c.k\n> > FROM\n> > c)\n> > JOIN d ON b.j = d.l;\n> \n> hmm yeah. Since the proofs that are being used in\n> expressions_are_not_nullable assume the join has already taken place,\n> then we'll either need to not use the join conditions are proofs in\n> that case, or just disable the optimisation instead. I think it's\n> fine to just disable the optimisation since it seem rather unlikely\n> that someone would write a join condition like that. Plus it seems\n> quite a bit more complex to validate that the optimisation would even\n> be correct if NULLs were not possible.\n> \n> I've attached a patch which restricts the pullups to FromExpr quals.\n> Anything below a JoinExpr disables the optimisation now.\n\nok. 
The planner pulls-up other sublinks located in the ON clause, but it'd be\nquite tricky to do the same for the NOT IN case.\n\nNow that we only consider the WHERE clause, I wonder if the code can be\nsimplified a bit more. In particular, pull_up_sublinks_jointree_recurse()\npasses valid pointer for notnull_proofs to pull_up_sublinks_qual_recurse(),\nwhile it also passes under_joinexpr=true. The latter should imply that NOT IN\nwon't be converted to ANTI JOIN anyway, so no notnull_proofs should be needed.\n\nBTW, I'm not sure if notnull_proofs=j->quals is correct in cases like this:\n\n\tcase JOIN_LEFT:\n\t\tj->quals = pull_up_sublinks_qual_recurse(root, j->quals,\n\t\t\t &j->rarg,\n\t\t\t rightrelids,\n\t\t\t NULL, NULL, j->quals,\n\t\t\t true);\n\nEven if j->quals evaluates to NULL or FALSE (due to NULL value on its input),\nit does not remove any rows (possibly containing NULL values) from the input\nof the SubLink's expression.\n\nI'm not even sure that expressions_are_not_nullable() needs the notnull_proofs\nargument. Now that we only consider SubLinks in the WHERE clause, it seems to\nme that nonnullable_vars is always a subset of nonnullable_inner_vars, isn't\nit?\n\nA few more minor findings:\n\n@@ -225,10 +227,13 @@ pull_up_sublinks(PlannerInfo *root)\n *\n * In addition to returning the possibly-modified jointree node, we return\n * a relids set of the contained rels into *relids.\n+ *\n+ * under_joinexpr must be passed as true if 'jtnode' is or is under a\n+ * JoinExpr.\n */\n static Node *\n pull_up_sublinks_jointree_recurse(PlannerInfo *root, Node *jtnode,\n- Relids *relids)\n+ Relids *relids, bool under_joinexpr)\n {\n if (jtnode == NULL)\n {\n\n\nThe comment \"if 'jtnode' is or is under ...\" is unclear.\n\n\n* is_NOTANY_compatible_with_antijoin()\n\n ** \"'outerquery' is the parse of the query\" -> \"'outerquery' is the parse tree of the query\"\n\n ** \"2. We assume that each join qual is an OpExpr\" -> \"2. 
We assume that\n each sublink expression is an OpExpr\" ?\n\n ** (OpExpr *) lfirst(lc) -> lfirst_node(OpExpr, lc)\n\n ** The kind of bool expression (AND_EXPR) might need to be tested here too:\n\n+ /* Extract exprs from multiple expressions ANDed together */\n+ else if (IsA(testexpr, BoolExpr))\n\n\n* find_innerjoined_rels()\n\n \"we locate all WHERE and JOIN/ON quals that constrain these rels add them to\"\n ->\n \" ... and add them ...\"\n\n\n* get_attnotnull()\n\n The comment says that FALSE is returned if the attribute is dropped, however\n the function it does not test att_tup->attisdropped. (This patch should not\n call the function for a dropped attribute, so I'm only saying that the\n function code is not consistent with the comment.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 13 Aug 2019 11:49:24 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "David, will we hear from you on this patch during this month?\nIt sounds from Antonin's review that it needs a few changes.\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Sep 2019 17:21:23 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Tue, 3 Sep 2019 at 09:21, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> David, will we hear from you on this patch during this month?\n> It sounds from Antonin's review that it needs a few changes.\n\nThanks for checking. I'm currently quite committed with things away\nfrom the community and it's unlikely I'll get to this in September.\nI'll kick it out to the next 'fest now.\n\n Antonin, Thank you for the review. 
I will respond to it when I get time again.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 3 Sep 2019 13:13:43 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Tue, Sep 03, 2019 at 01:13:43PM +1200, David Rowley wrote:\n> Antonin, Thank you for the review. I will respond to it when I get\n> time again.\n\nIt has been close to three months since this last update, so marked\nthe patch as returned with feedback.\n--\nMichael", "msg_date": "Thu, 28 Nov 2019 11:30:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" }, { "msg_contents": "On Mon, May 27, 2019 at 4:44 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> David Rowley <david.rowley@2ndquadrant.com> wrote:\n>\n> > On Wed, 6 Mar 2019 at 12:54, David Rowley <david.rowley@2ndquadrant.com>\n> wrote:\n> > > The latest patch is attached.\n> >\n> > Rebased version after pgindent run.\n>\n> I've spent some time looking into this.\n>\n> One problem I see is that SubLink can be in the JOIN/ON clause and thus\n> it's\n> not necessarily at the top of the join tree. Consider this example:\n>\n> CREATE TABLE a(i int);\n> CREATE TABLE b(j int);\n> CREATE TABLE c(k int NOT NULL);\n> CREATE TABLE d(l int);\n>\n> SELECT *\n> FROM\n> a\n> JOIN b ON b.j NOT IN\n> ( SELECT\n> c.k\n> FROM\n> c)\n> JOIN d ON b.j = d.l;\n>\n> Here the b.j=d.l condition makes the planner think that the \"b.j NOT IN\n> (SELECT c.k FROM c)\" sublink cannot receive NULL values of b.j, but that's\n> not\n> true: it's possible that ((a JOIN b) ANTI JOIN c) is evaluated before \"d\"\n> is\n> joined to the other tables, so the NULL values of b.j are not filtered out\n> early enough.\n>\n>\nWould this be an issue? 
Suppose the b.j is NULL when ((a JOIN b) ANTI JOIN\nc)\nis evaluated, after the evaluation, the NULL is still there. and can be\nfiltered\nout later with b.j = d.l; Am I missing something?\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 23 Apr 2021 01:46:08 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Converting NOT IN to anti-joins during planning" } ]
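The three-valued NULL semantics that block the NOT IN to anti-join conversion in the thread above can be sketched outside SQL. This is an illustrative Python model (not code from the patch): it shows why a single NULL, on either side, makes `x NOT IN (...)` return unknown rather than true, which is exactly the case where a naive anti-join would keep rows that NOT IN filters out.

```python
def sql_eq(a, b):
    """SQL `a = b`: returns None (modelling SQL NULL / unknown) if either side is NULL."""
    return None if a is None or b is None else a == b

def sql_not_in(x, subquery_values):
    """Three-valued result of SQL `x NOT IN (SELECT ...)`.

    A WHERE clause keeps a row only when the result is True, so a single
    NULL in the subquery output means no row can pass the filter.
    """
    eqs = [sql_eq(x, v) for v in subquery_values]
    if any(e is True for e in eqs):
        return False   # x matched a value: definitely IN
    if any(e is None for e in eqs):
        return None    # unknown: the row is filtered, unlike an anti-join's "no match"
    return True        # definitely NOT IN: the only case an anti-join also keeps

# The anti-join conversion is only valid when neither side can produce NULL:
assert sql_not_in(1, [2, 3]) is True      # anti-join would also keep the row
assert sql_not_in(2, [2, None]) is False  # matched: both forms drop the row
assert sql_not_in(1, [2, None]) is None   # NOT IN drops the row; a naive anti-join keeps it
assert sql_not_in(None, [2, 3]) is None   # NULL outer value: NOT IN drops the row too
```

This is why the patch only performs the transformation when NOT NULL constraints or strict quals prove neither the outer expression nor the subquery target can be NULL.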
[ { "msg_contents": "Hi,\n\nI realized that the tab completions for SKIP_LOCKED option of both\nVACUUM and ANALYZE are missing. Attached patch adds them.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center", "msg_date": "Wed, 6 Mar 2019 11:45:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Tab completion for SKIP_LOCKED option" }, { "msg_contents": "On Wed, Mar 06, 2019 at 11:45:01AM +0900, Masahiko Sawada wrote:\n> I realized that the tab completions for SKIP_LOCKED option of both\n> VACUUM and ANALYZE are missing. Attached patch adds them.\n\nThanks Sawada-san, committed.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 14:46:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tab completion for SKIP_LOCKED option" }, { "msg_contents": "On Wed, Mar 6, 2019 at 2:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 06, 2019 at 11:45:01AM +0900, Masahiko Sawada wrote:\n> > I realized that the tab completions for SKIP_LOCKED option of both\n> > VACUUM and ANALYZE are missing. Attached patch adds them.\n>\n> Thanks Sawada-san, committed.\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n", "msg_date": "Wed, 6 Mar 2019 14:58:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Tab completion for SKIP_LOCKED option" } ]
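The effect of the completion patch above can be modelled generically. This is an illustrative Python sketch, not psql's actual tab-complete.c logic, and the candidate option list is an assumption for the example: it just shows how adding SKIP_LOCKED to the keyword list makes it reachable by prefix completion.

```python
def complete(prefix, candidates):
    """Return the candidate keywords matching a case-insensitive prefix,
    the way a tab-completion list is narrowed as the user types."""
    p = prefix.upper()
    return sorted(c for c in candidates if c.startswith(p))

# Hypothetical option list for VACUUM ( ... ) once SKIP_LOCKED is added.
VACUUM_OPTIONS = ["FULL", "FREEZE", "ANALYZE", "VERBOSE",
                  "DISABLE_PAGE_SKIPPING", "SKIP_LOCKED"]

print(complete("sk", VACUUM_OPTIONS))  # ['SKIP_LOCKED']
print(complete("F", VACUUM_OPTIONS))   # ['FREEZE', 'FULL']
```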
[ { "msg_contents": "Over on [1] Andres pointed out that the pg_dump support for the new to\nPG12 tablespace inheritance feature is broken. This is the feature\nadded in ca4103025dfe26 to allow a partitioned table to have a\ntablespace that acts as the default tablespace for newly attached\npartitions. The idea being that you can periodically change the\ndefault tablespace for new partitions to a tablespace that sits on a\ndisk partition with more free space without affecting the default\ntablespace for normal non-partitioned tables. Anyway...\n\npg_dump is broken with this. Consider:\n\ncreate tablespace newts location '/tmp/newts';\ncreate table listp (a int) partition by list(a) tablespace newts;\ncreate table listp1 partition of listp for values in(1) tablespace pg_default;\ncreate table listp2 partition of listp for values in(2);\n\nselect relname,relkind,reltablespace from pg_class where relname like\n'listp%' and relkind in('r','p') order by relname;\nproduces:\n\n relname | relkind | reltablespace\n---------+---------+---------------\n listp | p | 16384\n listp1 | r | 0\n listp2 | r | 16384\n(3 rows)\n\nafter dump/restore:\n\n relname | relkind | reltablespace\n---------+---------+---------------\n listp | p | 16384\n listp1 | r | 16384\n listp2 | r | 16384\n(3 rows)\n\nHere the tablespace for listp1 was inherited from listp, but we really\nshould have restored this to use pg_default like was specified.\n\nThe reason this occurs is that in pg_dump we do:\n\nSET default_tablespace = '';\n\nCREATE TABLE public.listp1 PARTITION OF public.listp\nFOR VALUES IN (1);\n\nso, since we're creating the table initially as a partition the logic\nthat applies the default partition from the parent kicks in.\n\nIf we instead did:\n\nCREATE TABLE public.listp1 (a integer\n);\n\nALTER TABLE public.list1 ATTACH PARTITION public.listp FOR VALUES IN (1);\n\nthen we'd have no issue, as tablespace will be set to whatever\ndefault_tablespace is set to.\n\nPartitioned indexes have 
this similar inherit tablespace from parent\nfeature, so ca4103025dfe26 was intended to align the behaviour of the\ntwo. Partitioned indexes happen not to suffer from the same issue as\nthe indexes are attached after their creation similar to what I\npropose above.\n\nCan anyone see any fundamental reason that we should not create a\npartitioned table by doing CREATE TABLE followed by ATTACH PARTITION?\n If not, I'll write a patch that fixes it that way.\n\nAs far as I can see, the biggest fundamental difference with doing\nthings this way will be that the column order of partitions will be\npreserved, where before it would inherit the order of the partitioned\ntable. I'm a little unsure if doing this column reordering was an\nintended side-effect or not.\n\n[1] https://www.postgresql.org/message-id/20190305060804.jv5mz4slrnelh3jy@alap3.anarazel.de\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 19:45:06 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Wed, Mar 06, 2019 at 07:45:06PM +1300, David Rowley wrote:\n> Partitioned indexes have this similar inherit tablespace from parent\n> feature, so ca4103025dfe26 was intended to align the behaviour of the\n> two. 
Partitioned indexes happen not to suffer from the same issue as\n> the indexes are attached after their creation similar to what I\n> propose above.\n> \n> Can anyone see any fundamental reason that we should not create a\n> partitioned table by doing CREATE TABLE followed by ATTACH PARTITION?\n> If not, I'll write a patch that fixes it that way.\n\nThe part for partitioned indexes is already battle-proven, so if the\npart for partitioned tables can be consolidated the same way that\nwould be really nice.\n\n> As far as I can see, the biggest fundamental difference with doing\n> things this way will be that the column order of partitions will be\n> preserved, where before it would inherit the order of the partitioned\n> table. I'm a little unsure if doing this column reordering was an\n> intended side-effect or not.\n\nI don't see any direct issues with that to be honest thinking about\nit..\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 16:19:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Wed, 6 Mar 2019 at 20:19, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 06, 2019 at 07:45:06PM +1300, David Rowley wrote:\n> > Can anyone see any fundamental reason that we should not create a\n> > partitioned table by doing CREATE TABLE followed by ATTACH PARTITION?\n> > If not, I'll write a patch that fixes it that way.\n>\n> The part for partitioned indexes is already battle-proven, so if the\n> part for partitioned tables can be consolidated the same way that\n> would be really nice.\n\nI think Andres is also going to need it to work this way for the\npluggable storage patch too.\n\nLooking closer at this, I discovered that when pg_dump is in binary\nupgrade mode that it crafts the pg_dump output in this way anyway...\nObviously, the column orders can't go changing magically in that case\nsince we're about to plug the old heap into the 
new table. Due to the\nremoval of the special case, it means this patch turned out to remove\nmore code than it adds.\n\nPatch attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 7 Mar 2019 00:26:21 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> As far as I can see, the biggest fundamental difference with doing\n> things this way will be that the column order of partitions will be\n> preserved, where before it would inherit the order of the partitioned\n> table. I'm a little unsure if doing this column reordering was an\n> intended side-effect or not.\n\nWell, if the normal behavior results in changing the column order,\nit'd be necessary to do things differently in --binary-upgrade mode\nanyway, because there we *must* preserve column order. I don't know\nif what you're describing represents a separate bug for pg_upgrade runs,\nbut it might. Is there any test case for the situation left behind by\nthe core regression tests?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 06 Mar 2019 09:36:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Hi,\n\nOn 2019-03-06 19:45:06 +1300, David Rowley wrote:\n> Over on [1] Andres pointed out that the pg_dump support for the new to\n> PG12 tablespace inheritance feature is broken. This is the feature\n> added in ca4103025dfe26 to allow a partitioned table to have a\n> tablespace that acts as the default tablespace for newly attached\n> partitions. 
The idea being that you can periodically change the\n> default tablespace for new partitions to a tablespace that sits on a\n> disk partition with more free space without affecting the default\n> tablespace for normal non-partitioned tables. Anyway...\n\nI'm also concerned that the current catalog representation isn't\nright. As I said:\n\n> I also find it far from clear that:\n> <listitem>\n> <para>\n> The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n> of the tablespace in which the new table is to be created.\n> If not specified,\n> <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n> <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. For\n> partitioned tables, since no storage is required for the table itself,\n> the tablespace specified here only serves to mark the default tablespace\n> for any newly created partitions when no other tablespace is explicitly\n> specified.\n> </para>\n> </listitem>\n> is handled correctly. The above says that the *specified* tablespace -\n> which seems to exclude the default tablespace - is what's going to\n> determine what partitions use as their default tablespace. But in fact\n> that's not true, the partitioned table's pg_class.reltablespace is set to\n> what default_tablespace was at the time of the creation.\n\nI still think the feature as is doesn't seem to have very well defined\nbehaviour.\n\n\n\n> If we instead did:\n> \n> CREATE TABLE public.listp1 (a integer\n> );\n> \n> ALTER TABLE public.listp ATTACH PARTITION public.listp1 FOR VALUES IN (1);\n\nIsn't that a bit more expensive, because now the table needs to be\nscanned for matching the value?
That's probably negligible though, given\nit'd probably always be empty.\n\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 6 Mar 2019 08:17:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Thu, 7 Mar 2019 at 03:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > As far as I can see, the biggest fundamental difference with doing\n> > things this way will be that the column order of partitions will be\n> > preserved, where before it would inherit the order of the partitioned\n> > table. I'm a little unsure if doing this column reordering was an\n> > intended side-effect or not.\n>\n> Well, if the normal behavior results in changing the column order,\n> it'd be necessary to do things differently in --binary-upgrade mode\n> anyway, because there we *must* preserve column order. I don't know\n> if what you're describing represents a separate bug for pg_upgrade runs,\n> but it might. Is there any test case for the situation left behind by\n> the core regression tests?\n\nAfter having written the patch, I noticed that binary upgrade mode\ndoes the CREATE TABLE then ATTACH PARTITION in order to preserve the\norder.\n\nAfter changing it nothing failed in make check-world with tap tests enabled.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 11:07:31 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Thu, 7 Mar 2019 at 05:17, Andres Freund <andres@anarazel.de> wrote:\n> I'm also concerned that the current catalog representation isn't\n> right.
As I said:\n>\n> > I also find it far from clear that:\n> > <listitem>\n> > <para>\n> > The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n> > of the tablespace in which the new table is to be created.\n> > If not specified,\n> > <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n> > <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. For\n> > partitioned tables, since no storage is required for the table itself,\n> > the tablespace specified here only serves to mark the default tablespace\n> > for any newly created partitions when no other tablespace is explicitly\n> > specified.\n> > </para>\n> > </listitem>\n> > is handled correctly. The above says that the *specified* tablespaces -\n> > which seems to exclude the default tablespace - is what's going to\n> > determine what partitions use as their default tablespace. But in fact\n> > that's not true, the partitioned table's pg_class.retablespace is set to\n> > what default_tablespaces was at the time of the creation.\n\nDo you think it's fine to reword the docs to make this point more\nclear, or do you see this as a fundamental problem with the patch?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 11:31:15 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Hi,\n\nOn 2019-03-07 11:31:15 +1300, David Rowley wrote:\n> On Thu, 7 Mar 2019 at 05:17, Andres Freund <andres@anarazel.de> wrote:\n> > I'm also concerned that the the current catalog representation isn't\n> > right. 
As I said:\n> >\n> > > I also find it far from clear that:\n> > > <listitem>\n> > > <para>\n> > > The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n> > > of the tablespace in which the new table is to be created.\n> > > If not specified,\n> > > <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n> > > <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. For\n> > > partitioned tables, since no storage is required for the table itself,\n> > > the tablespace specified here only serves to mark the default tablespace\n> > > for any newly created partitions when no other tablespace is explicitly\n> > > specified.\n> > > </para>\n> > > </listitem>\n> > > is handled correctly. The above says that the *specified* tablespaces -\n> > > which seems to exclude the default tablespace - is what's going to\n> > > determine what partitions use as their default tablespace. But in fact\n> > > that's not true, the partitioned table's pg_class.retablespace is set to\n> > > what default_tablespaces was at the time of the creation.\n> \n> Do you think it's fine to reword the docs to make this point more\n> clear, or do you see this as a fundamental problem with the patch?\n\nHm, both? I mean I wouldn't necessarily characterize it as \"fundamental\"\nproblem, but ...\n\nI don't think the argument that the user intended to explicitly set a\ntablespace holds much water if it was just set via default_tablespace,\nrather than an explicit TABLESPACE. I think iff you really want\nsomething like this feature, you'd have to mark a partition's\nreltablespace as 0 unless an *explicit* assignment of the tablespace\nhappened. 
In which case you also would need to explicitly emit a\nTABLESPACE for the partitioned table in pg_dump, to restore that.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 6 Mar 2019 14:37:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Thu, 7 Mar 2019 at 11:37, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-03-07 11:31:15 +1300, David Rowley wrote:\n> > Do you think it's fine to reword the docs to make this point more\n> > clear, or do you see this as a fundamental problem with the patch?\n>\n> Hm, both? I mean I wouldn't necessarily characterize it as \"fundamental\"\n> problem, but ...\n\nOkay, so if I understand you correctly, you're complaining about the\nfact that if the user does:\n\nCREATE TABLE p (a int) PARTITION BY LIST(a) TABLESPACE pg_default;\n\nthat the user intended that all future partitions go to pg_default and\nnot whatever default_tablespace is set to at the time?\n\nIf so, that seems like a genuine concern.\n\nI see in heap_create() we do;\n\n/*\n* Never allow a pg_class entry to explicitly specify the database's\n* default tablespace in reltablespace; force it to zero instead. This\n* ensures that if the database is cloned with a different default\n* tablespace, the pg_class entry will still match where CREATE DATABASE\n* will put the physically copied relation.\n*\n* Yes, this is a bit of a hack.\n*/\nif (reltablespace == MyDatabaseTableSpace)\nreltablespace = InvalidOid;\n\nwhich will zero pg_class.reltablespace if the specified tablespace\nhappens to match pg_database.dattablespace. 
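To make that concrete, here's a quick sketch (table name invented for illustration; assumes the database's own tablespace is pg_default):\n\nCREATE TABLE zp (a int) PARTITION BY LIST (a) TABLESPACE pg_default;\n\n-- The explicit TABLESPACE clause matched the database tablespace,\n-- so heap_create() stored 0 rather than the tablespace's OID:\nSELECT reltablespace FROM pg_class WHERE relname = 'zp';\n-- reltablespace\n-- ---------------\n--              0\n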
This causes future\npartitions to think that no tablespace was specified and therefore\nDefineRelation() consults the default_tablespace.\n\nI see that this same problem exists for partitioned indexes too:\n\ncreate table listp (a int) partition by list(a);\ncreate index on listp (a) tablespace pg_default;\nset default_Tablespace = n;\ncreate table listp1 partition of listp for values in(1);\n\\d listp1\n Table \"public.listp1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nPartition of: listp FOR VALUES IN (1)\nIndexes:\n \"listp1_a_idx\" btree (a), tablespace \"n\"\nTablespace: \"n\"\n\nIf I understand what you're saying correctly, then the listp1_a_idx\nshould have been created in pg_default since that's what the default\npartitioned index tablespace was set to.\n\n> I don't think the argument that the user intended to explicitly set a\n> tablespace holds much water if it was just set via default_tablespace,\n> rather than an explicit TABLESPACE. I think iff you really want\n> something like this feature, you'd have to mark a partition's\n> reltablespace as 0 unless an *explicit* assignment of the tablespace\n> happened. 
In which case you also would need to explicitly emit a\n> TABLESPACE for the partitioned table in pg_dump, to restore that.\n\nI think emitting an explicit tablespace in pg_dump for partitioned\ntables (when non-zero) might have issues for pg_restore's\n--no-tablespaces option.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 15:25:34 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-03-07 15:25:34 +1300, David Rowley wrote:\n> On Thu, 7 Mar 2019 at 11:37, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-03-07 11:31:15 +1300, David Rowley wrote:\n> > > Do you think it's fine to reword the docs to make this point more\n> > > clear, or do you see this as a fundamental problem with the patch?\n> >\n> > Hm, both? I mean I wouldn't necessarily characterize it as \"fundamental\"\n> > problem, but ...\n> \n> Okay, so if I understand you correctly, you're complaining about the\n> fact that if the user does:\n> \n> CREATE TABLE p (a int) PARTITION BY LIST(a) TABLESPACE pg_default;\n> \n> that the user intended that all future partitions go to pg_default and\n> not whatever default_tablespace is set to at the time?\n\nCorrect. 
And also, conversely, if default_tablespace was set to frakbar\nat the time of CREATE TABLE, but no explicit TABLESPACE was provided,\nthat that should *not* be the default for new partitions; rather\ndefault_tablespace should be consulted again.\n\n\n> If I understand what you're saying correctly, then the listp1_a_idx\n> should have been created in pg_default since that's what the default\n> partitioned index tablespace was set to.\n\nThat's how I understand the intent of the user yes.\n\n\n> > I don't think the argument that the user intended to explicitly set a\n> > tablespace holds much water if it was just set via default_tablespace,\n> > rather than an explicit TABLESPACE. I think iff you really want\n> > something like this feature, you'd have to mark a partition's\n> > reltablespace as 0 unless an *explicit* assignment of the tablespace\n> > happened. In which case you also would need to explicitly emit a\n> > TABLESPACE for the partitioned table in pg_dump, to restore that.\n> \n> I think emitting an explicit tablespace in pg_dump for partitioned\n> tables (when non-zero) might have issues for pg_restore's\n> --no-tablespaces option.\n\nYea, there'd probably need to be handling in pg_backup_archiver.c or\nsuch, not sure how to do that best.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 6 Mar 2019 18:38:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Mar-06, Andres Freund wrote:\n\n\n> > I also find it far from clear that:\n> > <listitem>\n> > <para>\n> > The <replaceable class=\"parameter\">tablespace_name</replaceable> is the name\n> > of the tablespace in which the new table is to be created.\n> > If not specified,\n> > <xref linkend=\"guc-default-tablespace\"/> is consulted, or\n> > <xref linkend=\"guc-temp-tablespaces\"/> if the table is temporary. 
For\n> > partitioned tables, since no storage is required for the table itself,\n> > the tablespace specified here only serves to mark the default tablespace\n> > for any newly created partitions when no other tablespace is explicitly\n> > specified.\n> > </para>\n> > </listitem>\n> > is handled correctly. The above says that the *specified* tablespaces -\n> > which seems to exclude the default tablespace - is what's going to\n> > determine what partitions use as their default tablespace. But in fact\n> > that's not true, the partitioned table's pg_class.retablespace is set to\n> > what default_tablespaces was at the time of the creation.\n> \n> I still think the feature as is doesn't seem to have very well defined\n> behaviour.\n\nI have just started looking into this issue. I'm not sure yet what's\nthe best fix; maybe for the specific case of partitioned tables and\nindexes we should deviate from the ages-old behavior of storing zero\ntablespace, if the tablespace is specified as the default one. But I\nhaven't written the code yet.\n\nIn the meantime, here's David's patch, rebased to current master and\nwith the pg_upgrade and pg_dump tests fixed to match the new partition\ncreation behavior.\n\n> > If we instead did:\n> > \n> > CREATE TABLE public.listp1 (a integer\n> > );\n> > \n> > ALTER TABLE public.list1 ATTACH PARTITION public.listp FOR VALUES IN (1);\n> \n> Isn't that a bit more expensive, because now the table needs to be\n> scanned for maching the value? That's probably neglegible though, given\n> it'd probably always empty.\n\nI think it's always empty. 
In the standard case, there are two\ntransactions rather than one, so yeah it's a little bit more expensive.\nMaybe we should make this conditional on there actually being an\nimportant tablespace distinction to preserve.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 9 Apr 2019 09:30:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Mar-07, David Rowley wrote:\n\n> On Thu, 7 Mar 2019 at 11:37, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-03-07 11:31:15 +1300, David Rowley wrote:\n> > > Do you think it's fine to reword the docs to make this point more\n> > > clear, or do you see this as a fundamental problem with the patch?\n> >\n> > Hm, both? I mean I wouldn't necessarily characterize it as \"fundamental\"\n> > problem, but ...\n> \n> Okay, so if I understand you correctly, you're complaining about the\n> fact that if the user does:\n> \n> CREATE TABLE p (a int) PARTITION BY LIST(a) TABLESPACE pg_default;\n> \n> that the user intended that all future partitions go to pg_default and\n> not whatever default_tablespace is set to at the time?\n> \n> If so, that seems like a genuine concern.\n\nSo, you don't need a partition in order to see a problem here: pg_dump\ndoesn't do the right thing for the partitioned table. It does this:\n\nSET default_tablespace = pg_default;\nCREATE TABLE public.p (\n a integer\n)\nPARTITION BY LIST (a);\n\nwhich naturally has a different effect on partitions.\n\nNow, I think the problem for partitions can be solved in a simple\nmanner: we can have this code David quoted ignore the\nMyDatabaseTableSpace exception for partitioned rels:\n\n> /*\n> * Never allow a pg_class entry to explicitly specify the database's\n> * default tablespace in reltablespace; force it to zero instead. 
This\n> * ensures that if the database is cloned with a different default\n> * tablespace, the pg_class entry will still match where CREATE DATABASE\n> * will put the physically copied relation.\n> *\n> * Yes, this is a bit of a hack.\n> */\n> if (reltablespace == MyDatabaseTableSpace)\n> reltablespace = InvalidOid;\n\nThis gives the right effect AFAICS, namely to store the specified\ntablespace regardless of what it is; and there is no problem with the\n\"physically copied relation\" as the comment says, because there *isn't*\na physical relation in the first place.\n\nHowever, in order to fix the pg_dump behavior for the partitioned rel,\nwe would need to emit the tablespace differently, i.e. not use SET\ndefault_tablespace, but instead attach the tablespace clause to the\nCREATE TABLE line.\n\nI'll go have a look at what problems are there with doing that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Apr 2019 16:35:07 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-09, Alvaro Herrera wrote:\n\n> However, in order to fix the pg_dump behavior for the partitioned rel,\n> we would need to emit the tablespace differently, i.e. not use SET\n> default_tablespace, but instead attach the tablespace clause to the\n> CREATE TABLE line.\n> \n> I'll go have a look at what problems are there with doing that.\n\nI tried with the attached tbspc-partitioned.patch (on top of the\nprevious patch). It makes several additional cases work correctly, but\ncauses much more pain than it fixes:\n\n1. if a partitioned table is created without a tablespace spec, then\npg_dump dumps it with a clause like TABLESPACE \"\".\n\n2. 
If you fix that to make pg_dump omit the tablespace clause if the\ntablespace is default, then there's no SET nor TABLESPACE clauses, so\nthe table is created in the tablespace of the previously dumped table.\nUseless.\n\n3. You can probably make the above work anyway with enough hacks, but\nby the time you're finished, the code stinks like months-old fish.\n\n4. Even if you ignore the odor, all the resulting CREATE TABLE commands\nwill fail if you run them in a database that doesn't contain the\ntablespaces in question. That is, it negates one of the main points of\nusing \"SET default_tablespace\" in the first place.\n\nI therefore decree that this approach is not a solution and never will\nbe.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 9 Apr 2019 18:34:35 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Mar-06, Andres Freund wrote:\n\n> I don't think the argument that the user intended to explicitly set a\n> tablespace holds much water if it was just set via default_tablespace,\n> rather than an explicit TABLESPACE. I think iff you really want\n> something like this feature, you'd have to mark a partition's\n> reltablespace as 0 unless an *explicit* assignment of the tablespace\n> happened. In which case you also would need to explicitly emit a\n> TABLESPACE for the partitioned table in pg_dump, to restore that.\n\nThinking more about this, I think you're wrong about the behavior under\nnonempty default_tablespace.
Quoth the fine manual:\n\ndefault_tablespace:\n\t[...]\n\tThis variable specifies the default tablespace in which to create\n\tobjects (tables and indexes) when a CREATE command does not explicitly specify\n\ta tablespace.\n\tThe value is either the name of a tablespace, or an empty string to\n\tspecify using the default tablespace of the current database. [...]\n\thttps://www.postgresql.org/docs/11/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT\n\nWhat this says to me, if default_tablespace is set, and there is no\nTABLESPACE clause, we should regard the default_tablespace just as if it\nwere an explicitly named tablespace. Note that the default setting of\ndefault_tablespace is empty, meaning that tables are created in the\ndatabase tablespace.\n\nEmerging behavior: default_tablespace is set to A, then partitioned\ntable T is created, then default_tablespace is changed to B. Any\npartitions of T created afterwards still appear in tablespace A.\n\nIf you really intended for new partitions to be created in\ndefault_tablespace (following future changes to that option), then you\nshould just leave default_tablespace as empty when creating T.\n\nThere is one deficiency that needs to be solved in order for this to\nwork fully: currently there is no way to reset \"reltablespace\" to 0.\n\nDoes that make sense?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Apr 2019 18:58:42 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-09, Alvaro Herrera wrote:\n\n> There is one deficiency that needs to be solved in order for this to\n> work fully: currently there is no way to reset \"reltablespace\" to 0.\n\nTherefore I propose to add\nALTER TABLE tb ... 
RESET TABLESPACE;\nwhich sets reltablespace to 0, and it would work only for partitioned\ntables and indexes.\n\nThat, together with the initial proposal by David, seems to me to solve\nthe issue at hand.\n\nIf no objections, I'll try to come up with a patch tomorrow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Apr 2019 19:05:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Wed, 10 Apr 2019 at 11:05, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Apr-09, Alvaro Herrera wrote:\n>\n> > There is one deficiency that needs to be solved in order for this to\n> > work fully: currently there is no way to reset \"reltablespace\" to 0.\n>\n> Therefore I propose to add\n> ALTER TABLE tb ... RESET TABLESPACE;\n> which sets reltablespace to 0, and it would work only for partitioned\n> tables and indexes.\n>\n> That, together with the initial proposal by David, seems to me to solve\n> the issue at hand.\n>\n> If no objections, I'll try to come up with a patch tomorrow.\n\nI'm starting to wonder if maintaining two separate behaviours here\nisn't just to complex.\n\nFor example, if I do:\n\nCREATE TABLE a (a INT PRIMARY KEY) TABLESPACE mytablespace;\n\nthen a_pkey goes into the default_tablespace, not mytablespace.\n\nAlso, is it weird that CLUSTER can move a table into another\ntablespace if the database's tablespace has changed?\n\npostgres=# CREATE TABLE a (a INT PRIMARY KEY) TABLESPACE pg_default;\nCREATE TABLE\npostgres=# SELECT pg_relation_filepath('a'::regclass);\n pg_relation_filepath\n----------------------\n base/12702/16444\n(1 row)\n\npostgres=# \\c n\nn=# ALTER DATABASE postgres TABLESPACE mytablespace;\nALTER DATABASE\nn=# \\c postgres\npostgres=# CLUSTER a USING a_pkey;\nCLUSTER\npostgres=# SELECT 
pg_relation_filepath('a'::regclass);\n pg_relation_filepath\n---------------------------------------------\n pg_tblspc/16415/PG_12_201904072/12702/16449\n(1 row)\n\nThis one seems very strange to me.\n\nI think to make it work we'd need to modify heap_create() and\nheap_create_with_catalog() to add a new bool argument that controls if\nthe TABLESPACE was defined the calling command then only set the\nreltablespace to InvalidOid if the tablespace was not defined and it\nmatches the database's tablespace. If we want to treat table\npartitions and index partitions in a special way then we'll need to\nadd a condition to not set InvalidOid if the relkind is one of those.\nThat feels a bit dirty, but if the above two cases were also deemed\nwrong then we wouldn't need the special case.\n\nAnother option would be instead of adding a new bool flag, just pass\nInvalidOid for the tablespace to heap_create() when TABLESPACE was not\nspecified then have it lookup GetDefaultTablespace() but keep\npg_class.reltablespace set to InvalidOId. Neither of these would be a\nback-patchable fix for index partitions in PG11. Not sure what to do\nabout that...\n\nMaking constraints follow the tablespace specified during CREATE TABLE\nwould require a bit more work.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 10 Apr 2019 16:34:35 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-09, Alvaro Herrera wrote:\n\n> There is one deficiency that needs to be solved in order for this to\n> work fully: currently there is no way to reset \"reltablespace\" to 0.\n\nActually, there is one -- just need to\n ALTER TABLE .. 
SET TABLESPACE <the database default tablespace>\nthis took me a bit by surprise, but actually it's quite expected.\n\nSo I think that apart from David's patch, we should just document all\nthese things carefully.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Apr 2019 09:28:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Hi,\n\nOn 2019-04-10 09:28:21 -0400, Alvaro Herrera wrote:\n> So I think that apart from David's patch, we should just document all\n> these things carefully.\n\nYea, I think that's the most important part.\n\nI'm not convinced that we should have any inheriting behaviour btw - it\nseems like there's a lot of different ways to think this should behave,\nwith different good reason each.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2019 09:10:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-10, Andres Freund wrote:\n\n> Hi,\n> \n> On 2019-04-10 09:28:21 -0400, Alvaro Herrera wrote:\n> > So I think that apart from David's patch, we should just document all\n> > these things carefully.\n> \n> Yea, I think that's the most important part.\n> \n> I'm not convinced that we should have any inheriting behaviour btw - it\n> seems like there's a lot of different ways to think this should behave,\n> with different good reason each.\n\nSo, I ended up with the attached patch. I think it works pretty well,\nand it passes all my POLA tests.\n\nBut it doesn't pass pg_upgrade tests! And investigating closer, it\nseems closely related to what David was complaining elsewhere about the\ntablespace being improperly set by some rewrite operations. 
Here's the\nsetup as created by regress' create_table.sql:\n\ncreate table at_partitioned (a int, b text) partition by range (a);\ncreate table at_part_1 partition of at_partitioned for values from (0) to (1000);\ninsert into at_partitioned values (512, '0.123');\ncreate table at_part_2 (b text, a int);\ninsert into at_part_2 values ('1.234', 1024);\ncreate index on at_partitioned (b);\ncreate index on at_partitioned (a);\n\nIf you examine state at this point, it's all good:\nalvherre=# select relname, reltablespace from pg_class where relname like 'at_partitioned%';\n relname | reltablespace \n----------------------+---------------\n at_partitioned | 0\n at_partitioned_a_idx | 0\n at_partitioned_b_idx | 0\n\nbut the test immediately does this:\n\nalter table at_partitioned alter column b type numeric using b::numeric;\n\nand watch what happens! (1663 is pg_default)\n\nalvherre=# select relname, reltablespace from pg_class where relname like 'at_partitioned%';\n relname | reltablespace \n----------------------+---------------\n at_partitioned | 0\n at_partitioned_a_idx | 0\n at_partitioned_b_idx | 1663\n(3 filas)\n\nOutrageous!\n\nI'm going to have a look at this behavior now. IMO it's a separate bug,\nbut with that obviously we cannot fix the other one.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Apr 2019 16:21:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-10, Alvaro Herrera wrote:\n\n> but the test immediately does this:\n> \n> alter table at_partitioned alter column b type numeric using b::numeric;\n> \n> and watch what happens! 
(1663 is pg_default)\n> \n> alvherre=# select relname, reltablespace from pg_class where relname like 'at_partitioned%';\n> relname | reltablespace \n> ----------------------+---------------\n> at_partitioned | 0\n> at_partitioned_a_idx | 0\n> at_partitioned_b_idx | 1663\n> (3 filas)\n> \n> Outrageous!\n\nThis is because ruleutils.c attaches a TABLESPACE clause when asked to\ndump an index definition; and tablecmds.c uses ruleutils to deparse the\nindex definition into something that can be replayed via CREATE INDEX\ncommands (or ALTER TABLE ADD CONSTRAINT UNIQUE/PRIMARY KEY, if that's\nthe case.)\n\nThis patch (PoC quality) fixes that behavior, but I'm looking to see\nwhat else it breaks.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 10 Apr 2019 18:11:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-10, Alvaro Herrera wrote:\n\n> This is because ruleutils.c attaches a TABLESPACE clause when asked to\n> dump an index definition; and tablecmds.c uses ruleutils to deparse the\n> index definition into something that can be replayed via CREATE INDEX\n> commands (or ALTER TABLE ADD CONSTRAINT UNIQUE/PRIMARY KEY, if that's\n> the case.)\n\nFound a \"solution\" to this -- namely, to set the GUC default_tablespace\nto the empty string temporarily, and teach ruleutils.c to attach\nTABLESPACE clauses on index/constraint definitions only if they\nare not in the database tablespace. That makes everything works\ncorrectly. (I did have to patch psql to show tablespace for partitioned\nindexes.)\n\nHowever, because the tablespace to use for an index is determined at\nphase 3 execution time (i.e. 
inside DefineIndex), look what happens in\ncertain weird cases:\n\ncreate tablespace foo location '/tmp/foo';\nset default_tablespace to foo;\nalter table t add unique (b) ;\ncreate index on t (a);\n\nat this point, the indexes for \"a\" and \"b\" are in tablespace foo, which\nis correct because that's the default tablespace.\n\nHowever, if we do a type change *and add an index in the same command*,\nthen that index ends up in the wrong tablespace (namely the database\ntablespace instead of default_tablespace):\n\nalter table t alter a type bigint, add unique (c);\n\nI'm not seeing any good way to fix this; I need the default tablespace\nreset to only affect the index creations caused by the rewrite, but not\nthe ones used by different commands. I suppose forbidding ADD\nCONSTRAINT subcommands together with ALTER COLUMN SET DATA TYPE would\nnot fly very far. (I'm not sure that's complete either: if you change\ndatatype so that a toast table is created, perhaps this action will\naffect the location of said new toast table, also.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Apr 2019 11:36:16 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Mar-06, David Rowley wrote:\n\n> Over on [1] Andres pointed out that the pg_dump support for the new to\n> PG12 tablespace inheritance feature is broken. This is the feature\n> added in ca4103025dfe26 to allow a partitioned table to have a\n> tablespace that acts as the default tablespace for newly attached\n> partitions. The idea being that you can periodically change the\n> default tablespace for new partitions to a tablespace that sits on a\n> disk partition with more free space without affecting the default\n> tablespace for normal non-partitioned tables. 
Anyway...\n> \n> pg_dump is broken with this.\n\nHere's a patch to fix the reported problems. It's somewhat invasive,\nand I've spent a long time staring at it, so I very much appreciate eyes\non it.\n\nGuiding principles:\n\n* Partitioned tables can have the database tablespace in their\n reltablespace column, in contrast with non-partitioned relations.\n This is okay and doesn't cause permission checks, since nothing\n is put in the tablespace just yet.\n\n* When creating a partition, the parent's tablespace takes precedence\n over default_tablespace. This sounds a bit weird at first, but I\n think it's correct. If you really want your partition to override the\n parent's tablespace specification, you need to add a tablespace\n clause. (I was initially opposed to this, but on further reflection\n I think it's the right thing to do.)\n\nWith these things in mind, I have introduced some behavior changes. In\nparticular, both bugs mentioned in this thread have been fixed.\n\n* pg_dump now correctly reproduces the state, still without using any\n TABLESPACE clauses. (I suppose this is important for portability of\n dumps, as well as the --no-tablespaces option).\n\n* When a partitioned table is created specifying the database tablespace\n in the TABLESPACE clause, and later a partition is created under\n a different non-empty default_tablespace setting, the partition honors\n the parent's tablespace. (Fixing this bug became one of the\n principles here.)\n\n* When an index is rewritten because of an ALTER TABLE, it no longer\n moves magically to another tablespace. This worked fine for\n non-partitioned indexes (commit bd673e8e864a fixed it), but it was\n broken for partitioned ones.\n\n* pg_dump was patched to emit straight CREATE TABLE plus ALTER TABLE\n ATTACH for partitions (just like the binary upgrade code does) instead\n of CREATE TABLE PARTITION OF. 
This is what lets the partition use a\n straight default_tablespace setting instead of having to conditionally\n attach TABLESPACE clauses, which would create quite a mess.\n\nMaking this work required somewhat unusual hacks:\n\n* Nodes IndexStmt and Constraint have gained a new member\n \"reset_default_tblspc\". This is not set in the normal code paths, but\n when ALTER TABLE wants to recreate an index, it sets this flag; the\n flag tells index creation to reset default_tablespace to empty. This\n is only necessary because ALTER TABLE execution resolves (at execution\n time) the tablespace of the artificially-generated SQL command to\n recreate the index. If default_tablespace is left there, it\n interferes with that.\n\n* GetDefaultTablespace now behaves differently for partitioned tables,\n precisely because we want it not to return InvalidOid when the\n tablespace is specifically selected.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 12 Apr 2019 19:36:37 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Sat, 13 Apr 2019 at 11:36, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Here's a patch to fix the reported problems. 
It's somewhat invasive,\n> and I've spent a long time staring at it, so I very much appreciate eyes\n> on it.\n\nI think it's a bit strange that we don't store the pg_default's oid in\nreltablespace for objects other than partitioned tables and indexes.\nThe documents [1] say:\n\n\"When default_tablespace is set to anything but an empty string, it\nsupplies an implicit TABLESPACE clause for CREATE TABLE and CREATE\nINDEX commands that do not have an explicit one.\"\n\nI'd say the fact that we populate reltablespace with 0 is a bug as\nit's not going to do what they want after a dump/restore.\n\nIf that's ok to change then maybe the attached is an okay fix. Rather\nnicely it gets rid of the code that's commented with \"Yes, this is a\nbit of a hack.\" and also changes the contract with heap_create() so\nthat we just pass InvalidOid to mean use MyDatabaseTableSpace. I've\nnot really documented that in the patch yet. It also does not need\nthe pg_dump change to have it use ATTACH PARTITION instead of\nPARTITION OF, although perhaps that's an okay change to make\nregardless of this bug.\n\nOn Wed, 10 Apr 2019 at 10:58, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> There is one deficiency that needs to be solved in order for this to\n> work fully: currently there is no way to reset \"reltablespace\" to 0.\n\nYeah, I noticed that too. My patch makes that a consistent problem\nwith all object types that allow tablespaces. Perhaps we can allow\nALTER ... 
<name> SET TABLESPACE DEFAULT; since \"DEFAULT\" is fully\nreserved it can't be the name of a tablespace.\n\n[1] https://www.postgresql.org/docs/devel/manage-ag-tablespaces.html\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Sun, 14 Apr 2019 23:21:15 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I'd say the fact that we populate reltablespace with 0 is a bug as\n> it's not going to do what they want after a dump/restore.\n\nWell, it's not really nice perhaps, but you cannot just put in some\nother concrete tablespace OID instead. What a zero there means is\n\"use the database's default tablespace\", and the point of it is that\nit still means that after the DB has been cloned with a different\ndefault tablespace. If we don't store 0 then we break\n\"CREATE DATABASE ... TABLESPACE = foo\".\n\nYou could imagine using some special tablespace OID that has these\nsemantics (*not* pg_default, but some new row in pg_tablespace).\nI'm not sure that that'd provide any functional improvement over\nusing zero, but we could certainly entertain such a change if\npartitioned tables seem to need it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2019 10:16:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Mon, 15 Apr 2019 at 02:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I'd say the fact that we populate reltablespace with 0 is a bug as\n> > it's not going to do what they want after a dump/restore.\n>\n> Well, it's not really nice perhaps, but you cannot just put in some\n> other concrete tablespace OID instead. 
What a zero there means is\n> \"use the database's default tablespace\", and the point of it is that\n> it still means that after the DB has been cloned with a different\n> default tablespace. If we don't store 0 then we break\n> \"CREATE DATABASE ... TABLESPACE = foo\".\n>\n> You could imagine using some special tablespace OID that has these\n> semantics (*not* pg_default, but some new row in pg_tablespace).\n> I'm not sure that that'd provide any functional improvement over\n> using zero, but we could certainly entertain such a change if\n> partitioned tables seem to need it.\n\nThe patch only changes that behaviour when the user does something like:\n\nset default_tablespace = 'pg_default';\ncreate table ... (...);\n\nor:\n\ncreate table ... (...) tablespace pg_default;\n\nThe 0 value is still maintained when the tablespace is not specified\nor default_tablespace is an empty string.\n\nThe CREATE TABLE docs mention:\n\n\"The tablespace_name is the name of the tablespace in which the new\ntable is to be created. If not specified, default_tablespace is\nconsulted, or temp_tablespaces if the table is temporary. For\npartitioned tables, since no storage is required for the table itself,\nthe tablespace specified here only serves to mark the default\ntablespace for any newly created partitions when no other tablespace\nis explicitly specified.\"\n\nand the default_tablespace docs say:\n\n\"When default_tablespace is set to anything but an empty string, it\nsupplies an implicit TABLESPACE clause for CREATE TABLE and CREATE\nINDEX commands that do not have an explicit one.\"\n\nso the change just seems to be altering the code to follow the documents.\n\nAlvaro is proposing to change this behaviour for partitioned tables\nand indexes. 
I'm proposing not having that special case and just\nchanging it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 15 Apr 2019 02:25:06 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 15 Apr 2019 at 02:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, it's not really nice perhaps, but you cannot just put in some\n>> other concrete tablespace OID instead. What a zero there means is\n>> \"use the database's default tablespace\", and the point of it is that\n>> it still means that after the DB has been cloned with a different\n>> default tablespace. If we don't store 0 then we break\n>> \"CREATE DATABASE ... TABLESPACE = foo\".\n\n> [ quotes documents ]\n\nI think those documentation statements are probably wrong in detail,\nor at least you're misreading them if you think they are justification\nfor this patch. *This change will break CREATE DATABASE*.\n\n(And, apparently, the comment you tried to remove isn't sufficiently\nclear about that.)\n\n> Alvaro is proposing to change this behaviour for partitioned tables\n> and indexes. I'm proposing not having that special case and just\n> changing it.\n\nIt's possible that Alvaro's patch is also broken, but I haven't had time\nto review it. The immediate question is what happens when somebody makes\na partitioned table in template1 and then does CREATE DATABASE with a\ntablespace option. Does the partitioned table end up in the same\ntablespace as ordinary tables do?\n\nIt's entirely possible BTW that this whole business of inheriting\ntablespace from the partitioned table is broken and should be thrown\nout. 
I certainly don't see any compelling reason for partitions to\nact differently from regular tables in this respect, and the more\nproblems we find with the idea, the less attractive it seems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2019 10:38:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Hi,\n\nOn 2019-04-14 10:38:05 -0400, Tom Lane wrote:\n> It's entirely possible BTW that this whole business of inheriting\n> tablespace from the partitioned table is broken and should be thrown\n> out. I certainly don't see any compelling reason for partitions to\n> act differently from regular tables in this respect, and the more\n> problems we find with the idea, the less attractive it seems.\n\nIndeed. After discovering during the tableam work, and trying to write\ntests for the equivalent feature for tableam, I decided that just not\nallowing AM specifications for partitioned tables is the right call - at\nleast until the desired behaviour is clearer. The discussion of the last\nfew days makes me think so even more.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 14 Apr 2019 09:10:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-14, Andres Freund wrote:\n\n> On 2019-04-14 10:38:05 -0400, Tom Lane wrote:\n> > It's entirely possible BTW that this whole business of inheriting\n> > tablespace from the partitioned table is broken and should be thrown\n> > out. I certainly don't see any compelling reason for partitions to\n> > act differently from regular tables in this respect, and the more\n> > problems we find with the idea, the less attractive it seems.\n> \n> Indeed. 
After discovering during the tableam work, and trying to write\n> tests for the equivalent feature for tableam, I decided that just not\n> allowing AM specifications for partitioned tables is the right call - at\n> least until the desired behaviour is clearer. The discussion of the last\n> few days makes me think so even more.\n\nTo be honest, when doing that feature I neglected to pay attention to\n(read: forgot about) default_tablespace, and it's mostly the\ninteractions with that feature that makes this partitioned table stuff\nso complicated. I'm not 100% convinced yet that we need to throw it out\ncompletely, but I'm less sure now about it than I was before.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 14 Apr 2019 13:32:14 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Mon, 15 Apr 2019 at 05:32, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Apr-14, Andres Freund wrote:\n>\n> > On 2019-04-14 10:38:05 -0400, Tom Lane wrote:\n> > > It's entirely possible BTW that this whole business of inheriting\n> > > tablespace from the partitioned table is broken and should be thrown\n> > > out. I certainly don't see any compelling reason for partitions to\n> > > act differently from regular tables in this respect, and the more\n> > > problems we find with the idea, the less attractive it seems.\n> >\n> > Indeed. After discovering during the tableam work, and trying to write\n> > tests for the equivalent feature for tableam, I decided that just not\n> > allowing AM specifications for partitioned tables is the right call - at\n> > least until the desired behaviour is clearer. 
The discussion of the last\n> > few days makes me think so even more.\n>\n> To be honest, when doing that feature I neglected to pay attention to\n> (read: forgot about) default_tablespace, and it's mostly the\n> interactions with that feature that makes this partitioned table stuff\n> so complicated. I'm not 100% convinced yet that we need to throw it out\n> completely, but I'm less sure now about it than I was before.\n\nFWIW, I was trying to hint in [1] that this all might be more trouble\nthan its worth.\n\nTo be honest, if I'd done a better job of thinking through the\nimplications of this tablespace inheritance in ca4103025d, then I'd\nprobably have not bothered submitting a patch for it. We could easily\nrevert that, but we'd still be left with the same behaviour in\npartitioned indexes, which is in PG11.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f-52x3o16fsd4=tBPKct9_E0uEg0LmzOgxBqLiuZsj-SA@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 15 Apr 2019 11:32:27 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-15, David Rowley wrote:\n\n> To be honest, if I'd done a better job of thinking through the\n> implications of this tablespace inheritance in ca4103025d, then I'd\n> probably have not bothered submitting a patch for it. We could easily\n> revert that, but we'd still be left with the same behaviour in\n> partitioned indexes, which is in PG11.\n\nWell, I suppose if we do decide to revert it for tables, we should do it\nfor both tables and indexes. 
But as I said, I'm not yet convinced that\nthis is the best way forward.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 14 Apr 2019 23:26:44 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Mon, 15 Apr 2019 at 15:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Apr-15, David Rowley wrote:\n>\n> > To be honest, if I'd done a better job of thinking through the\n> > implications of this tablespace inheritance in ca4103025d, then I'd\n> > probably have not bothered submitting a patch for it. We could easily\n> > revert that, but we'd still be left with the same behaviour in\n> > partitioned indexes, which is in PG11.\n>\n> Well, I suppose if we do decide to revert it for tables, we should do it\n> for both tables and indexes. But as I said, I'm not yet convinced that\n> this is the best way forward.\n\nOk. Any ideas or suggestions on how we move on from here? It seems\nlike a bit of a stalemate.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 17 Apr 2019 00:15:12 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-17, David Rowley wrote:\n\n> On Mon, 15 Apr 2019 at 15:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Apr-15, David Rowley wrote:\n> >\n> > > To be honest, if I'd done a better job of thinking through the\n> > > implications of this tablespace inheritance in ca4103025d, then I'd\n> > > probably have not bothered submitting a patch for it. 
We could easily\n> > > revert that, but we'd still be left with the same behaviour in\n> > > partitioned indexes, which is in PG11.\n> >\n> > Well, I suppose if we do decide to revert it for tables, we should do it\n> > for both tables and indexes. But as I said, I'm not yet convinced that\n> > this is the best way forward.\n> \n> Ok. Any ideas or suggestions on how we move on from here? It seems\n> like a bit of a stalemate.\n\nWell, here's my proposed patch. I'm now fairly happy with how this\nlooks now, concerning partitioned tables.\n\nThis is mostly what was already discussed:\n\n1. pg_dump now uses regular CREATE TABLE followed by ALTER TABLE / ATTACH\n PARTITION when creating partitions, rather than CREATE TABLE\n PARTITION OF. pg_dump --binary-upgrade was already doing that, so\n this part mostly removes some code. In order to make the partitions\n reach the correct tablespace, the \"default_tablespace\" GUC is used.\n No TABLESPACE clause is added to the dump. This is David's patch\n near the start of the thread.\n\n2. When creating a partition using the CREATE TABLE PARTITION OF syntax,\n the TABLESPACE clause has highest precedence; if that is not given,\n the partitioned table's tablespace is used; if that is set to 0 (the\n default), default_tablespace is used; if that's set to empty or a\n nonexistant tablespace, the database's default tablespace is used.\n This is (I think) what Andres proposed in\n https://postgr.es/m/20190306223741.lolaaimhkkp4kict@alap3.anarazel.de\n\n3. Partitioned relations can have the database tablespace in\n pg_class.reltablespace, as opposed to storage-bearing relations which\n cannot. This is useful to be able to put partitions in the database\n tablespace even if the default_tablespace is set to something else.\n\n4. For partitioned tables, ALTER TABLE .. SET TABLESPACE DEFAULT is\n available as suggested by David, which makes future partition\n creations target default_tablespace or the database's tablespace. \n\n5. 
Recreating indexes during table-rewriting ALTER TABLE resulted in\n   broken indexes. We already had some adhesive tape in place to make\n   that work for regular indexes (commit bd673e8e864a); my approach to\n   fix it for partitioned indexes is to temporarily reset\n   default_tablespace to empty.\n\n\nAs for Tom's question in https://postgr.es/m/12678.1555252685@sss.pgh.pa.us :\n\n> It's possible that Alvaro's patch is also broken, but I haven't had time\n> to review it. The immediate question is what happens when somebody makes\n> a partitioned table in template1 and then does CREATE DATABASE with a\n> tablespace option. Does the partitioned table end up in the same\n> tablespace as ordinary tables do?\n\nNote that partitioned tables don't have any files, so they don't end up\nanywhere; when a partition is created, the target tablespace is\ndetermined using four rules instead of three (see #2 above) so yes, they\ndo end up in the same places as ordinary tables. Note that even if you\ndo put a partitioned table in some tablespace, you will not later run\nafoul of this check:\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot assign new default tablespace \\\"%s\\\"\",\n tablespacename),\n errdetail(\"There is a conflict because database \\\"%s\\\" already has some tables in this tablespace.\",\n dbtemplate)));\nsrc/backend/commands/dbcommands.c:435\n\nbecause that check uses ReadDir() and raises an error if any entry is\nfound; but partitioned tables don't have files in the directory, so nothing\nhappens. 
(Of course, it will hit if you have a partition in that\ntablespace, but that's also the case for regular tables.)\n\n(I propose to commit both 0002 and 0003 as a single unit that fixes the\nwhole problem, rather than attacking backend and pg_dump separately.\n0001 appears logically separate and I would push it on its own.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 17 Apr 2019 17:39:01 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> 1. pg_dump now uses regular CREATE TABLE followed by ALTER TABLE / ATTACH\n>    PARTITION when creating partitions, rather than CREATE TABLE\n>    PARTITION OF. pg_dump --binary-upgrade was already doing that, so\n>    this part mostly removes some code. In order to make the partitions\n>    reach the correct tablespace, the \"default_tablespace\" GUC is used.\n>    No TABLESPACE clause is added to the dump. This is David's patch\n>    near the start of the thread.\n\nThis idea seems reasonable independently of all else, simply on the grounds\nof reducing code duplication. It also has the advantage that if you try\nto do a selective restore of just a partition, and the parent partitioned\ntable isn't around, you can still do it (with an ignorable error).\n\n> 2. When creating a partition using the CREATE TABLE PARTITION OF syntax,\n>    the TABLESPACE clause has highest precedence; if that is not given,\n>    the partitioned table's tablespace is used; if that is set to 0 (the\n>    default), default_tablespace is used; if that's set to empty or a\n>    nonexistant tablespace, the database's default tablespace is used.\n>    This is (I think) what Andres proposed in\n>    https://postgr.es/m/20190306223741.lolaaimhkkp4kict@alap3.anarazel.de\n\nHmm. 
The precedence order between the second and third options seems\npretty arbitrary and hence unrememberable. I don't say this choice is\nwrong, but it's not clear that it's right, either.\n\n> 3. Partitioned relations can have the database tablespace in\n> pg_class.reltablespace, as opposed to storage-bearing relations which\n> cannot. This is useful to be able to put partitions in the database\n> tablespace even if the default_tablespace is set to something else.\n\nI still feel that this is a darn bad idea. It goes against the rule\nthat's existed for pg_class.reltablespace since its beginning, and\nI think it's inevitable that that's going to break something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Apr 2019 17:51:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-17, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > 1. pg_dump now uses regular CREATE TABLE followed by ALTER TABLE / ATTACH\n> > PARTITION when creating partitions, rather than CREATE TABLE\n> > PARTITION OF. pg_dump --binary-upgrade was already doing that, so\n> > this part mostly removes some code. In order to make the partitions\n> > reach the correct tablespace, the \"default_tablespace\" GUC is used.\n> > No TABLESPACE clause is added to the dump. This is David's patch\n> > near the start of the thread.\n> \n> This idea seems reasonable independently of all else, simply on the grounds\n> of reducing code duplication. It also has the advantage that if you try\n> to do a selective restore of just a partition, and the parent partitioned\n> table isn't around, you can still do it (with an ignorable error).\n\nI'll get this part pushed, then.\n\n> > 2. 
When creating a partition using the CREATE TABLE PARTITION OF syntax,\n> > the TABLESPACE clause has highest precedence; if that is not given,\n> > the partitioned table's tablespace is used; if that is set to 0 (the\n> > default), default_tablespace is used; if that's set to empty or a\n> > nonexistant tablespace, the database's default tablespace is used.\n> > This is (I think) what Andres proposed in\n> > https://postgr.es/m/20190306223741.lolaaimhkkp4kict@alap3.anarazel.de\n> \n> Hmm. The precedence order between the second and third options seems\n> pretty arbitrary and hence unrememberable. I don't say this choice is\n> wrong, but it's not clear that it's right, either.\n\nWell, I see it as the default_tablespace being a global setting whereas\nthe parent is \"closer\" to the partition definition, which is why it has\nhigher priority. I don't have a strong opinion however (and I think the\npatch would be shorter if default_tablespace had higher precedence.)\n\nMaybe others care to comment?\n\n> > 3. Partitioned relations can have the database tablespace in\n> > pg_class.reltablespace, as opposed to storage-bearing relations which\n> > cannot. This is useful to be able to put partitions in the database\n> > tablespace even if the default_tablespace is set to something else.\n> \n> I still feel that this is a darn bad idea. 
It goes against the rule\n> that's existed for pg_class.reltablespace since its beginning, and\n> I think it's inevitable that that's going to break something.\n\nYes, this deviates from current practice, and while I tested this in as\nmany ways as I could think of, I cannot deny that it might break\nsomething unexpectedly.\n\n\nHere's a v4 btw, which is just some adjustments to the regress test\nscript and expected file.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 17 Apr 2019 18:06:00 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-17, Alvaro Herrera wrote:\n\n> On 2019-Apr-17, Tom Lane wrote:\n> \n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > 1. pg_dump now uses regular CREATE TABLE followed by ALTER TABLE / ATTACH\n> > >    PARTITION when creating partitions, rather than CREATE TABLE\n> > >    PARTITION OF. pg_dump --binary-upgrade was already doing that, so\n> > >    this part mostly removes some code. In order to make the partitions\n> > >    reach the correct tablespace, the \"default_tablespace\" GUC is used.\n> > >    No TABLESPACE clause is added to the dump. This is David's patch\n> > >    near the start of the thread.\n> > \n> > This idea seems reasonable independently of all else, simply on the grounds\n> > of reducing code duplication. It also has the advantage that if you try\n> > to do a selective restore of just a partition, and the parent partitioned\n> > table isn't around, you can still do it (with an ignorable error).\n> \n> I'll get this part pushed, then.\n\nAfter looking at it again, I found that there's no significant\nduplication reduction -- the patch simply duplicates one block in a\ndifferent location, putting half of the original code in each. 
And if we\nreject the idea of separating tablespaces, there's no reason to do\nthings that way. So ISTM if we don't want the tablespace thing, we\nshould not apply this part. FWIW, we got quite a few positive votes for\nhandling tablespaces this way for partitioned tables [1] [2], so I\nresist the idea that we have to revert the initial commit, as some seem\nto be proposing.\n\n\nAfter re-reading the thread one more time, I found one more pretty\nreasonable point that Andres was complaining about, and I made things\nwork the way he described. Namely, if you do this:\n\nSET default_tablespace TO 'foo';\nCREATE TABLE part (a int) PARTITION BY LIST (a);\nSET default_tablespace TO 'bar';\nCREATE TABLE part1 PARTITION OF part FOR VALUES IN (1);\n\nthen the partition must end up in tablespace bar, not in tablespace foo:\nthe reason is that the default_tablespace is not \"strong enough\" to\nstick with the partitioned table. The partition would only end up in\ntablespace foo in this case:\n\nCREATE TABLE part (a int) PARTITION BY LIST (a) TABLESPACE foo;\nCREATE TABLE part1 PARTITION OF part FOR VALUES IN (1);\n\ni.e. when the tablespace is explicitly indicated in the CREATE TABLE\ncommand for the partitioned table. 
Of course, you can still add a\nTABLESPACE clause to the partition to override it (and you can change\nthe parent table's tablespace later.)\n\nSo here's a proposed v5.\n\nI would appreciate others' eyes on this patch.\n\n[1] https://postgr.es/m/CAKJS1f9SxVzqDrGD1teosFd6jBMM0UEaa14_8mRvcWE19Tu0hA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAKJS1f9PXYcT%2Bj%3DoyL-Lquz%3DScNwpRtmD7u9svLASUygBdbN8w%40mail.gmail.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 18 Apr 2019 17:50:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Wed, Apr 17, 2019 at 6:06 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > 3. Partitioned relations can have the database tablespace in\n> > > pg_class.reltablespace, as opposed to storage-bearing relations which\n> > > cannot. This is useful to be able to put partitions in the database\n> > > tablespace even if the default_tablespace is set to something else.\n> >\n> > I still feel that this is a darn bad idea. It goes against the rule\n> > that's existed for pg_class.reltablespace since its beginning, and\n> > I think it's inevitable that that's going to break something.\n>\n> Yes, this deviates from current practice, and while I tested this in as\n> many ways as I could think of, I cannot deny that it might break\n> something unexpectedly.\n\nLike Tom, I think this has got to be broken.\n\nSuppose that you have a partitioned table which has reltablespace =\ndattablespace. It has a bunch of children with reltablespace = 0.\nSo far so good: new children of the partitioned table go into the\ndatabase tablespace regardless of default_tablespace.\n\nNow somebody changes the default tablespace using ALTER DATABASE ..\nSET TABLESPACE. 
All the existing children end up in the new default\ntablespace, but new children of the partitioned table end up going\ninto the tablespace that used to be the default but is no longer.\nThat's pretty odd, because the whole point of setting a tablespace on\nthe children was to get all of the children into the same tablespace.\n\nThe same thing happens if you clone the database using CREATE DATABASE\n.. TEMPLATE .. TABLESPACE, as Tom mentioned.\n\nPostgreSQL has historically and very deliberately *not made a\ndistinction* between \"this object is in the default tablespace\" and\n\"this object is in tablespace X which happens to be the default.\" I\nthink that it's too late to invent such a distinction for reasons of\nbackward compatibility -- and if we were going to do it, surely it\nwould need to exist for both partitioned tables and the partitions\nthemselves. Otherwise it just produces more strange inconsistencies.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Apr 2019 10:54:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-22, Robert Haas wrote:\n\n> PostgreSQL has historically and very deliberately *not made a\n> distinction* between \"this object is in the default tablespace\" and\n> \"this object is in tablespace X which happens to be the default.\" I\n> think that it's too late to invent such a distinction for reasons of\n> backward compatibility -- and if we were going to do it, surely it\n> would need to exist for both partitioned tables and the partitions\n> themselves. Otherwise it just produces more strange inconsistencies.\n\nYeah, this is probably right. 
(I don't think it's the same thing that\nTom was saying, though, or at least I didn't understand his argument\nthis way.)\n\nI think we can get out of this whole class of problems by forbidding the\nTABLESPACE clause for partitioned rels from mentioning the database\ntablespace -- that is, users either mention some *other* tablespace, or\npartitions follow default_tablespace like everybody else. AFAICS with\nthat restriction this whole problem does not arise, and the patch may\nbecome simpler. I'll give it a spin.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Apr 2019 14:16:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Hi,\n\nOn 2019-04-22 14:16:28 -0400, Alvaro Herrera wrote:\n> On 2019-Apr-22, Robert Haas wrote:\n> \n> > PostgreSQL has historically and very deliberately *not made a\n> > distinction* between \"this object is in the default tablespace\" and\n> > \"this object is in tablespace X which happens to be the default.\" I\n> > think that it's too late to invent such a distinction for reasons of\n> > backward compatibility -- and if we were going to do it, surely it\n> > would need to exist for both partitioned tables and the partitions\n> > themselves. Otherwise it just produces more strange inconsistencies.\n> \n> Yeah, this is probably right. (I don't think it's the same thing that\n> Tom was saying, though, or at least I didn't understand his argument\n> this way.)\n> \n> I think we can get out of this whole class of problems by forbidding the\n> TABLESPACE clause for partitioned rels from mentioning the database\n> tablespace -- that is, users either mention some *other* tablespace, or\n> partitions follow default_tablespace like everybody else. 
AFAICS with\n> that restriction this whole problem does not arise, and the patch may\n> become simpler. I'll give it a spin.\n\nWhy is the obvious answer is to not just remove the whole tablespace\ninheritance behaviour? It's obviously ambiguous and hard to get right.\nI still don't see any usecase that even comes close to making the\ninheritance useful enough to justify the amount of work (code, tests,\nbugfixes) and docs that are required.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Apr 2019 11:20:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-22 14:16:28 -0400, Alvaro Herrera wrote:\n>> I think we can get out of this whole class of problems by forbidding the\n>> TABLESPACE clause for partitioned rels from mentioning the database\n>> tablespace -- that is, users either mention some *other* tablespace, or\n>> partitions follow default_tablespace like everybody else. AFAICS with\n>> that restriction this whole problem does not arise, and the patch may\n>> become simpler. I'll give it a spin.\n\n> Why is the obvious answer is to not just remove the whole tablespace\n> inheritance behaviour? It's obviously ambiguous and hard to get right.\n> I still don't see any usecase that even comes close to making the\n> inheritance useful enough to justify the amount of work (code, tests,\n> bugfixes) and docs that are required.\n\nYeah, that's where I'm at as well. Alvaro's proposal could be made\nto work perhaps, but I think it would still end up with some odd\ncorner-case behaviors. One example is that \"TABLESPACE X\" would\nbe allowed if the database's default tablespace is Y, but if you\ntry to dump and restore into a database whose default is X, it'd be\nrejected (?). The results after ALTER DATABASE ... 
SET TABLESPACE X\nare unclear too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2019 14:30:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-22, Andres Freund wrote:\n\n> Why is the obvious answer is to not just remove the whole tablespace\n> inheritance behaviour?\n\nBecause it was requested by many, and there were plenty of people\nsurprised that things didn't work that way.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Apr 2019 15:08:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-22, Tom Lane wrote:\n\n> Yeah, that's where I'm at as well. Alvaro's proposal could be made\n> to work perhaps, but I think it would still end up with some odd\n> corner-case behaviors. One example is that \"TABLESPACE X\" would\n> be allowed if the database's default tablespace is Y, but if you\n> try to dump and restore into a database whose default is X, it'd be\n> rejected (?).\n\nHmm, I don't think so, because dump uses default_tablespace on a plain\ntable instead of TABLESPACE clauses, and the table is attached\nafterwards.\n\n> The results after ALTER DATABASE ... SET TABLESPACE X\n> are unclear too.\n\nCurrently we disallow SET TABLESPACE X if you have any table in that\ntablespace, and we do that by searching for files. A partitioned table\nwould not have a file that would cause it to fail, so this is something\nto study.\n\n\n(BTW I think these tablespace behaviors are not tested very much. 
The\ntests we have are intra-database operations only, and there's only a\nsingle non-default tablespace.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Apr 2019 15:13:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-22, Andres Freund wrote:\n>> Why is the obvious answer is to not just remove the whole tablespace\n>> inheritance behaviour?\n\n> Because it was requested by many, and there were plenty of people\n> surprised that things didn't work that way.\n\nThere are lots of things in SQL that people find surprising.\nIn this particular case, \"we can't do it because it conflicts with\nancient decisions about how PG tablespaces work\" seems like a\ndefensible answer, even without getting into the question of\nwhether \"partitions inherit their tablespace from the parent\"\nis really any less surprising than \"partitions work exactly like\nnormal tables as far as tablespace selection goes\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2019 15:24:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Mon, Apr 22, 2019 at 3:08 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Apr-22, Andres Freund wrote:\n> > Why is the obvious answer is to not just remove the whole tablespace\n> > inheritance behaviour?\n>\n> Because it was requested by many, and there were plenty of people\n> surprised that things didn't work that way.\n\nOn the other hand, that behavior worked correctly on its own terms,\nand this behavior seems to be broken, and you've been through a whole\nseries of possible designs trying to figure out how to fix it, and\nit's 
still not clear that you've got a working solution. I don't know\nwhether that shows that it is impossible to make this idea work\nsensibly, but at the very least it proves that the whole area needed a\nlot more thought than it got before this code was committed (a\ncomplaint that I also made at the time, if you recall). \"Surprising\"\nis not great, but it is clearly superior to \"broken.\"\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Apr 2019 16:21:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-22, Robert Haas wrote:\n\n> On Mon, Apr 22, 2019 at 3:08 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Apr-22, Andres Freund wrote:\n> > > Why is the obvious answer is to not just remove the whole tablespace\n> > > inheritance behaviour?\n> >\n> > Because it was requested by many, and there were plenty of people\n> > surprised that things didn't work that way.\n> \n> On the other hand, that behavior worked correctly on its own terms,\n> and this behavior seems to be broken, and you've been through a whole\n> series of possible designs trying to figure out how to fix it, and\n> it's still not clear that you've got a working solution.\n\nWell, frequently when people discuss ideas on this list, others discuss\nand provide further ideas to try help to find a working solution, rather\nthan throw every roadblock they can think of (though roadblocks are\nindeed thrown now and then). 
If I've taken a long time to find a\nworking solution, maybe it's because I have no shoulders of giants to\nstand on, and I'm a pretty short guy, so I need to build me a ladder.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Apr 2019 16:43:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Mon, Apr 22, 2019 at 4:43 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Well, frequently when people discuss ideas on this list, others discuss\n> and provide further ideas to try help to find a working solution, rather\n> than throw every roadblock they can think of (though roadblocks are\n> indeed thrown now and then). If I've taken a long time to find a\n> working solution, maybe it's because I have no shoulders of giants to\n> stand on, and I'm a pretty short guy, so I need to build me a ladder.\n\nWhat exactly do you mean by throwing up roadblocks? I don't have a\nbasket full of great ideas for how to solve this problem that I'm\nfailing to suggest out of some sort of perverse desire to see you\nfail. I'm not quite as convinced as Tom and Andres that this whole\nidea is fatally flawed and can't ever be made to work correctly, but I\nthink it's quite possible that they are right, both because their\nobjections sound to me like they are on target and because they are\npretty smart people. 
But that's also because I haven't spent a lot of\ntime on this issue, which I think is pretty fair, because it seems\nlike it would be unfair to complain that I am not spending enough time\nhelping fix code that I advised against committing in the first place.\nHow much time should I spend giving you advice if my previous advice\nwas ignored?\n\nBut FWIW, it seems to me that a good place to start solving this\nproblem would be to think hard about what Andres said here:\n\nhttp://postgr.es/m/20190306161744.22jdkg37fyi2zyke@alap3.anarazel.de\n\nSpecifically, this complaint: \"I still think the feature as is doesn't\nseem to have very well defined behaviour.\"\n\nIf we know what the feature is supposed to do, then it should be\npossible to look at each relevant piece of code and decide whether it\nimplements the specification correctly or not. But if we don't have a\nspecification -- that is, we don't know precisely what the feature is\nsupposed to do -- then we'll just end up whacking the behavior around\ntrying to get some combination that makes sense, and the whole effort\nis probably doomed.\n\nI think that the large quote block from David Rowley in the middle of\nthe above-linked email gets at the definitional problem pretty\nclearly: the documentation seems to be intending to say -- although it\nis not 100% clear -- that if TABLESPACE is specified it has effect on\nall future children, and if not specified then those children get the\ntablespace they would have gotten anyway. But that behavior is\nimpossible to implement correctly unless there is a way to distinguish\nbetween a partitioned table for which TABLESPACE was not specified and\nwhere it was specified to be the same as the default tablespace for\nthe database. And we know, per previous conversation, that the\ncatalog representation admits of no way to make that distinction.\nUsing reltablespace = dattablespace is clearly the wrong answer,\nbecause that breaks stuff. 
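Concretely, the ambiguity looks like this (a sketch with invented table names, assuming the database's default tablespace is pg_default):

```sql
-- Historically, "ends up in the database tablespace" is stored as
-- reltablespace = 0, so these two commands leave identical catalog state:
CREATE TABLE t1 (a int) PARTITION BY LIST (a);                        -- no TABLESPACE clause
CREATE TABLE t2 (a int) PARTITION BY LIST (a) TABLESPACE pg_default;  -- database default, named explicitly

SELECT relname, reltablespace FROM pg_class WHERE relname IN ('t1', 't2');
-- both rows show reltablespace = 0; the two intents cannot be told apart
```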
Tom earlier suggested, I believe, that\nsome fixed OID could be used, e.g. reltablespace = 1 means \"tablespace\nnot specified\" and reltablespace = 0 means dattablespace. That should\nbe safe enough, since I don't think OID 1 can ever be the OID of a\nreal tablespace. I think this is probably the only way forward if\nthis definition captures the desired behavior.\n\nThe other option is to decide that we want some other behavior. In\nthat case, the way forward depends on what we want the behavior to be.\nYour proposal upthread is to disallow the case where the user provides\nan explicit TABLESPACE setting whose value matches the default\ntablespace for the database. But that definition seems to have an\nobvious problem: just because that's not true when the partitioned\ntable is defined doesn't mean it won't become true later, because the\ndefault tablespace for the database can be changed, or the database\ncan be copied and the copy be assigned a different default tablespace.\nEven if there were no issue, that definition doesn't sound very clean,\nbecause it means that reltablespace = 0 for a regular relation means\ndattablespace and for a partitioned relation it means none.\n\nNow that's not the only way we could go either, I suppose. There must\nbe a variety of other possible behaviors. But the one Tom and Andres\nare proposing -- go back to the way it worked in v10 -- is definitely\nnot crazy. 
You are right that a lot of people didn't like that, but\nthe definition was absolutely clear, because it was the exact same\ndefinition we use for normal tables, with the straightforward\nextension that for relations with no storage it meant nothing.\n\nGoing back to the proposal of making OID = 0 mean TABLESPACE\ndattablespace, OID = 1 meaning no TABLESPACE clause specified for this\npartitioned table, and OID = whatever meaning that particular\nnon-default tablespace, I do see a couple of problems with that idea\ntoo:\n\n- I don't have any idea how to salvage things in v11, where the\ncatalog representation is irredeemably ambiguous. We'd probably have\nto be satisfied with fairly goofy behavior in v11.\n\n- It's not enough to have a good catalog representation. You also\nhave to be able to recreate those catalog states. I think you\nprobably need a way to set the reltablespace to whatever value you\nneed it to have without changing the tables under it. Like ALTER\nTABLE ONLY blah {TABLESPACE some_tablespace_name | NO TABLESPACE} or\nsome such thing. 
That exact idea may be wrong, but I think we are not\ngoing to have much luck getting to a happy place without something of\nthis sort.\n\nIf we have a clear definition of what the feature does, a catalog\nrepresentation to match, and syntax that can recreate any given\ncatalog representation, then this should be an SMOP.\n\nBut again, rip it all out and try it again someday is not a crazy\nproposal, IMHO.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Apr 2019 18:36:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Mar-06, Tom Lane wrote:\n\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > As far as I can see, the biggest fundamental difference with doing\n> > things this way will be that the column order of partitions will be\n> > preserved, where before it would inherit the order of the partitioned\n> > table. I'm a little unsure if doing this column reordering was an\n> > intended side-effect or not.\n> \n> Well, if the normal behavior results in changing the column order,\n> it'd be necessary to do things differently in --binary-upgrade mode\n> anyway, because there we *must* preserve column order. I don't know\n> if what you're describing represents a separate bug for pg_upgrade runs,\n> but it might. Is there any test case for the situation left behind by\n> the core regression tests?\n\nNow that I re-read this complaint once again, I wonder if a mismatching\ncolumn order in partitions isn't a thing we ought to preserve anyhow.\nRobert, Amit -- is it by design that pg_dump loses the original column\norder for partitions, when not in binary-upgrade mode? To me, it sounds\nunintuitive to accept partitions that don't exactly match the order of\nthe parent table; but it's been supported all along. 
In the statu quo,\nif users dump and restore such a database, the restored partition ends\nup with the column order of the parent instead of its own column order\n(by virtue of being created as CREATE TABLE PARTITION OF). Isn't that\nwrong? It'll cause an INSERT/COPY direct to the partition that worked\nprior to the restore to fail after the restore, if the column list isn't\nspecified.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Apr 2019 18:51:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Now that I re-read this complaint once again, I wonder if a mismatching\n> column order in partitions isn't a thing we ought to preserve anyhow.\n> Robert, Amit -- is it by design that pg_dump loses the original column\n> order for partitions, when not in binary-upgrade mode?\n\nI haven't looked at the partitioning code, but I am quite sure that that's\nalways happened for old-style inheritance children, and I imagine pg_dump\nis just duplicating that old behavior.\n\nWasn't there already a patch on the table to change this, though,\nby removing the code path that uses inheritance rather than the\nbinary-upgrade-like solution?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2019 19:11:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019/04/23 7:51, Alvaro Herrera wrote:\n> On 2019-Mar-06, Tom Lane wrote:\n>> David Rowley <david.rowley@2ndquadrant.com> writes:\n>>> As far as I can see, the biggest fundamental difference with doing\n>>> things this way will be that the column order of partitions will be\n>>> preserved, where before it would inherit the order of the 
partitioned\n>>> table. I'm a little unsure if doing this column reordering was an\n>>> intended side-effect or not.\n>>\n>> Well, if the normal behavior results in changing the column order,\n>> it'd be necessary to do things differently in --binary-upgrade mode\n>> anyway, because there we *must* preserve column order. I don't know\n>> if what you're describing represents a separate bug for pg_upgrade runs,\n>> but it might. Is there any test case for the situation left behind by\n>> the core regression tests?\n> \n> Now that I re-read this complaint once again, I wonder if a mismatching\n> column order in partitions isn't a thing we ought to preserve anyhow.\n> Robert, Amit -- is it by design that pg_dump loses the original column\n> order for partitions, when not in binary-upgrade mode?\n\nI do remember being too wary initially about letting partitions devolve\ninto a state of needing tuple conversion during DML execution, which very\nwell may have been a reason to write the pg_dump support code the way it\nis now. pg_dump chooses to emit partitions with the CREATE TABLE\nPARTITION OF syntax because, as it seems has been correctly interpreted on\nthis thread, it allows partitions to end up with same TupleDesc as the\nparent and hence not require tuple conversion in DML execution, unless of\ncourse it's run with --binary-upgrade mode.\n\nNeeding tuple conversion is still an overhead but maybe there aren't that\nmany cases where TupleDescs differ among tables in partition trees, so the\nconsiderations for emitting PARTITION OF syntax may not be all that\nrelevant. 
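To make the two dump styles concrete (a sketch; imagine a parent with columns (a, b) and a partition whose own definition had them reversed):

```sql
-- Style pg_dump currently uses outside --binary-upgrade: the partition is
-- created from the parent, so it silently adopts the parent's column order.
CREATE TABLE part1 PARTITION OF parent FOR VALUES IN (1);

-- Style used in --binary-upgrade mode (and proposed for all modes): the
-- partition keeps its own column definition and is attached afterwards.
CREATE TABLE part1 (b text, a int);
ALTER TABLE parent ATTACH PARTITION part1 FOR VALUES IN (1);
```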
Also, we've made DML involving partitions pretty efficient\nthese days by reducing most other overheads, even though nothing has been\ndone to prevent tuple conversion in the cases it is needed anyway.\n\n> To me, it sounds\n> unintuitive to accept partitions that don't exactly match the order of\n> the parent table; but it's been supported all along.\n\nYou might know it already, but even though column sets of two tables may\nappear identical, their TupleDescs still may not match due to dropped\ncolumns being different in the two tables.\n\n> In the statu quo,\n> if users dump and restore such a database, the restored partition ends\n> up with the column order of the parent instead of its own column order\n> (by virtue of being created as CREATE TABLE PARTITION OF). Isn't that\n> wrong? It'll cause an INSERT/COPY direct to the partition that worked\n> prior to the restore to fail after the restore, if the column list isn't\n> specified.\n\nThat's true, although there is a workaround as you mentioned -- specify\ncolumn names to match the input data. 
pg_dump itself specifies them, so\nthe dumped output can be loaded unchanged.\n\nAnyway, I don't see a problem with changing pg_dump to *always* emit\nCREATE TABLE followed by ATTACH PARTITION, not just in --binary-upgrade\nmode, if it lets us deal with the tablespace-related issues smoothly.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Tue, 23 Apr 2019 10:49:04 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Tue, 23 Apr 2019 at 13:49, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/04/23 7:51, Alvaro Herrera wrote:\n> > To me, it sounds\n> > unintuitive to accept partitions that don't exactly match the order of\n> > the parent table; but it's been supported all along.\n>\n> You might know it already, but even though column sets of two tables may\n> appear identical, their TupleDescs still may not match due to dropped\n> columns being different in the two tables.\n\nI think that's the most likely reason that the TupleDescs would differ\nat all. For RANGE partitions on time series data, it's quite likely\nthat new partitions are periodically created to store new data. If\nthe partitioned table those belong to evolved over time, gaining new\ncolumns and dropping columns that are no longer needed then some\ntranslation work will end up being required. 
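A sketch of that kind of evolution (names invented):

```sql
CREATE TABLE events (ts timestamptz, payload text) PARTITION BY RANGE (ts);
CREATE TABLE events_2018 PARTITION OF events
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

-- The schema evolves over time:
ALTER TABLE events ADD COLUMN source text;
ALTER TABLE events DROP COLUMN payload;

CREATE TABLE events_2019 PARTITION OF events
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
-- The dropped column leaves a hole in the parent's TupleDesc, so routing
-- tuples into partitions whose descriptors no longer line up with the
-- parent's requires a conversion map.
```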
From my work on\n42f70cd9c, I know tuple conversion is not free, so it's pretty good\nthat pg_dump will remove the need for maps in this case even with the\nproposed change.\n\nI imagine users randomly specifying columns for partitions in various\nrandom orders is a less likely scenario, although, it's entirely\npossible.\n\nTom's point about being able to pg_dump a single partition and restore\nit somewhere without the parent (apart from an error in ATTACH\nPARTITION) seems like it could be useful too, so I'd say that the\npg_dump change is a good one regardless.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 23 Apr 2019 17:45:57 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019/04/23 14:45, David Rowley wrote:\n> On Tue, 23 Apr 2019 at 13:49, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>\n>> On 2019/04/23 7:51, Alvaro Herrera wrote:\n>>> To me, it sounds\n>>> unintuitive to accept partitions that don't exactly match the order of\n>>> the parent table; but it's been supported all along.\n>>\n>> You might know it already, but even though column sets of two tables may\n>> appear identical, their TupleDescs still may not match due to dropped\n>> columns being different in the two tables.\n> \n> I think that's the most likely reason that the TupleDescs would differ\n> at all. For RANGE partitions on time series data, it's quite likely\n> that new partitions are periodically created to store new data. If\n> the partitioned table those belong to evolved over time, gaining new\n> columns and dropping columns that are no longer needed then some\n> translation work will end up being required. 
From my work on\n> 42f70cd9c, I know tuple conversion is not free, so it's pretty good\n> that pg_dump will remove the need for maps in this case even with the\n> proposed change.\n\nMaybe I'm missing something, but if you're talking about pg_dump changes\nproposed in the latest patch that Alvaro posted on April 18, which is to\nemit partitions as two steps, then I don't see how that will always\nimprove things in terms of whether maps are needed or not (regardless of\nwhether that's something to optimize for or not.) If partitions needed a\nmap in the old database, this patch means that they will *continue* to\nneed it in the new database. With HEAD, they won't, because partitions\ncreated with CREATE TABLE PARTITION OF will have the same descriptor as\nparent, provided the parent is also created afresh in the new database,\nwhich is true in the non-binary-upgrade mode. The current arrangement, as\nI mentioned in my previous email, is partly inspired from the fact that\ncreating the parent and partition afresh in the new database will lead\nthem to have the same TupleDesc and hence won't need maps.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Tue, 23 Apr 2019 15:18:15 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Tue, 23 Apr 2019 at 18:18, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> If partitions needed a\n> map in the old database, this patch means that they will *continue* to\n> need it in the new database.\n\nThat's incorrect. My point was about dropped columns being removed\nafter a dump / reload. Only binary upgrade mode preserves\npg_attribute entries for dropped columns. Normal mode does not, so the\nmaps won't be needed after the reload if they were previously only\nneeded due to dropped columns. This is the case both with and without\nthe pg_dump changes I proposed. 
The case the patch does change is if\nthe columns were actually out of order, which I saw as an unlikely\nthing to happen in the real world.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 23 Apr 2019 23:03:16 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "Thanks for taking the time to think through this.\n\nOn 2019-Apr-22, Robert Haas wrote:\n\n> If we know what the feature is supposed to do, then it should be\n> possible to look at each relevant piece of code and decides whether it\n> implements the specification correctly or not. But if we don't have a\n> specification -- that is, we don't know precisely what the feature is\n> supposed to do -- then we'll just end up whacking the behavior around\n> trying to get some combination that makes sense, and the whole effort\n> is probably doomed.\n\nClearly this is now the crux of the issue ... the specification was\nunclear, as I cobbled it together without thinking about the overall\nimplications, taking pieces of advice from various posts and my own\nideas of how it should work.\n\nI am trying to split things in a way that makes the most sense to offer\nthe best possible combination of functionality. I think there are three\nsteps to getting this done:\n\n1. for pg10 and pg11, make pg_dump work correctly. I think having\npg_dump use CREATE TABLE PARTITION OF is not correct when the partition\nhas a mismatching column definition. So my proposal is to back-patch\nDavid's pg_dump patch to pg10 and pg11. Thread:\nhttps://postgr.es/m/20190423185007.GA27954@alvherre.pgsql\nNote that this changes pg_dump behavior in back branches.\n\n2. for pg12, try to keep as much of the tablespace inheritance\nfunctionality as possible (more below on how this works), without\nrunning into weird cases. 
I think if we just forbid the case of\nthe tablespace being defined to the database tablespace, all the\nweird cases disappear from the code.\n\n3. For pg13, we can try to put back the functionality of the database\ntablespace as default for a partition. You suggest using the value 1,\nand Tom suggests adding a new predefined row in pg_tablespace that has\nthe meaning of \"whatever the default tablespace is for the current\ndatabase\". I have a slight preference towards having the additional\nmagical row(s).\n\nMaybe people would like to see #3 put it pg12. I don't oppose that, but\nit seems to be new functionality development; RMT would have to exempt\nthis from feature freeze.\n\nRobert wrote:\n\n> - I don't have any idea how to salvage things in v11, where the\n> catalog representation is irredeemably ambiguous. We'd probably have\n> to be satisfied with fairly goofy behavior in v11.\n\nThis behavior is in use only for indexes in pg11, and does not allow\nsetting the database tablespace anyway (it just gets set back to 0 if\nyou do that), so I don't think there's anything *too* goofy about it.\n(But whatever we do, pg11 behavior for tables would differ from pg12;\nand it would differ for indexes too, depending on how we handle the\ndefault_tablespace thing.)\n\n> - It's not enough to have a good catalog representation. You also\n> have to be able to recreate those catalog states. I think you\n> probably need a way to set the reltablespace to whatever value you\n> need it to have without changing the tables under it. Like ALTER\n> TABLE ONLY blah {TABLESPACE some_tablespace_name | NO TABLESPACE} or\n> some such thing. That exact idea may be wrong, but I think we are not\n> going to have much luck getting to a happy place without something of\n> this sort.\n\nDavid proposed ALTER TABLE .. TABLESPACE DEFAULT as a way to put back\nthe database tablespace behavior (and that's implemented in my previous\npatch), which seems good to me. 
I'm not sure I like \"NO TABLESPACE\"\nbetter than DEFAULT, but if others do, I'm not really wedded to it.\n\n\nNow I discuss how tablespace inheritance works. To define the\ntablespace of a regular table, there's a simple algorithm:\n\n a1. if there's a TABLESPACE clause, use that.\n b1. otherwise, if there's a default_tablespace, use that.\n c1. otherwise, use the database tablespace.\n d1. if we end up with the database tablespace, overwrite with 0.\n\nIf creating a partition, there is the additional rule that parent's\ntablespace overrides default_tablespace:\n\n a2. if there's a TABLESPACE clause, use that.\n b2. otherwise, if the parent has a tablespace, use that.\n c2. otherwise, if there's a default_tablespace, use that.\n d2. otherwise, use the database tablespace.\n e2. if we end up with the database tablespace, overwrite with 0.\n\nFinally, there's the question of what defines the parent's tablespace.\nI think the best algorithm to determine the tablespace when creating a\npartitioned relation is the same as for partitions:\n\n a3. if there's a TABLESPACE clause, use that.\n b3. otherwise, if the parent has a tablespace, use that. (We have this\n because partitions might be partitioned.)\n c3. otherwise, if there's default_tablespace, use that.\n d3. otherwise, use the database tablespace.\n e3. if we end up with the database tablespace, overwrite with 0.\n\nTracing through the sets of rules, they are all alike -- the only\ndifference is that regular tables lack the rule about parent's\ntablespace.\n\nSome notes on this:\n\n1. We have rule c3, that is, we still honor default_tablespace when\n creating partitioned tables. 
Andres said, upthread, that we\n shouldn't do that -- if the tablespace isn't *explicitly* mentioned,\n he said, the partitions have no business landing on that tablespace.\n https://postgr.es/m/20190306223741.lolaaimhkkp4kict@alap3.anarazel.de\n However I think it would be more inconsistent to not have that than\n to have it, for the case when you create a partitioned partition.\n I tried to implement things the way he suggests and ended up with\n more warts.\n Would it be surprising for users that all subsequent partitions are\n created in that partition? Perhaps, but in Tom's words, \"there are\n many things [in this area] that are surprising\"; we should be sure to\n display the situation clearly, and if they don't like it, they can\n simply do ALTER TABLE SET TABLESPACE and things are good again.\n\n default_tablespace does not seem to be a particularly popular\n feature; I suspect pg_dump is by far its biggest user, and we make it\n not fall afoul of this problem with patch #1, so it never creates any\n partitions anyway.\n\n2. we have rule b2 ahead of c2; however Tom said users are unlikely to\n remember which rule comes first. I don't care much for that\n argument; surely they can just read the docs. This is not the first\n not-completely-intuitive thing for a DBA, so.\n\n3. If you really want a partitioned rel to end up in the database\n tablespace overriding default_tablespace, you have two options:\n i) reset default_tablespace ahead of time (SET LOCAL), or\n ii) use ALTER TABLE ... TABLESPACE afterwards.\n Maybe we should have a iii) \"CREATE TABLE ... 
TABLESPACE DEFAULT\"\n clause that does it, but I'm not sure it's necessary.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Apr 2019 18:26:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On Wed, 24 Apr 2019 at 10:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> If creating a partition, there is the additional rule that parent's\n> tablespace overrides default_tablespace:\n>\n> a2. if there's a TABLESPACE clause, use that.\n> b2. otherwise, if the parent has a tablespace, use that.\n> c2. otherwise, if there's a default_tablespace, use that.\n> d2. otherwise, use the database tablespace.\n> e2. if we end up with the database tablespace, overwrite with 0.\n\nWouldn't it just take the proposed pg_dump change to get that? rule\ne2 says we'll store 0 in reltablespace, even if the user does\nTABLESPACE pg_default, so there's no requirement to adjust the hack in\nheap_create to put any additional conditions on when we set\nreltablespace to 0, so it looks like none of the patching work you did\nwould be required to implement this. Right?\n\nThe good part about that is that its consistent with what happens if\nthe user does TABLESPACE pg_default for any other object type that\nsupports tablespaces. 
i.e we always store 0.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 24 Apr 2019 11:02:41 +1200", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-24, David Rowley wrote:\n\n> On Wed, 24 Apr 2019 at 10:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > If creating a partition, there is the additional rule that parent's\n> > tablespace overrides default_tablespace:\n> >\n> > a2. if there's a TABLESPACE clause, use that.\n> > b2. otherwise, if the parent has a tablespace, use that.\n> > c2. otherwise, if there's a default_tablespace, use that.\n> > d2. otherwise, use the database tablespace.\n> > e2. if we end up with the database tablespace, overwrite with 0.\n> \n> Wouldn't it just take the proposed pg_dump change to get that? rule\n> e2 says we'll store 0 in reltablespace, even if the user does\n> TABLESPACE pg_default, so there's no requirement to adjust the hack in\n> heap_create to put any additional conditions on when we set\n> reltablespace to 0, so it looks like none of the patching work you did\n> would be required to implement this. Right?\n\nI'm not sure yet that 100% of the patch is gone, but yes much of it\nwould go away thankfully. I do suggest we should raise an error if rule\na3 hits and it mentions the database tablespace (I stupidly forgot\nthis critical point in the previous email). I think astonishment is\nlesser that way.\n\n> The good part about that is that its consistent with what happens if\n> the user does TABLESPACE pg_default for any other object type that\n> supports tablespaces. i.e we always store 0.\n\nYeah, it makes the whole thing a lot simpler. 
Note my note for further\ndevelopment of a feature (modelled after Robert's proposal) to allow the\ndatabase tablespace to be specified, using either a separate pg_tablespace\nentry that means \"use the database tablespace whatever that is\" (Tom's\nsuggestion), or a magic not-a-real-tablespace-OID number known to the\ncode, such as 1 (Robert's).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Apr 2019 20:25:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019/04/23 20:03, David Rowley wrote:\n> On Tue, 23 Apr 2019 at 18:18, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>\n>> If partitions needed a\n>> map in the old database, this patch means that they will *continue* to\n>> need it in the new database.\n> \n> That's incorrect.\n\nNot completely though, because...\n\n> My point was about dropped columns being removed\n> after a dump / reload. Only binary upgrade mode preserves\n> pg_attribute entries for dropped columns. Normal mode does not, so the\n> maps won't be needed after the reload if they were previously only\n> needed due to dropped columns. This is the case both with and without\n> the pg_dump changes I proposed. The case the patch does change is if\n> the columns were actually out of order, which I saw as an unlikely\n> thing to happen in the real world.\n\nThis is the case I was talking about, which I agree is very rare. Sorry\nfor being unclear.\n\nI think your proposed patch is fine and I don't want to argue that the way\nthings are now has some very sound basis.\n\nAlso, as you and Alvaro have found, the existing arrangement makes pg_dump\nemit partitions in a way that's not super helpful (insert/copy failing\nunintuitively), but it's not totally broken either. 
That said, I don't\nmean to oppose back-patching any fix you think is appropriate.\n\nThank you for working on this.\n\nRegards,\nAmit\n\n\n\n", "msg_date": "Wed, 24 Apr 2019 10:48:49 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "On 2019-Apr-23, Alvaro Herrera wrote:\n\n> I'm not sure yet that 100% of the patch is gone, but yes much of it\n> would go away thankfully.\n\nOf course, the part that fixes the bug that indexes move tablespace when\nrecreated by a rewriting ALTER TABLE is still necessary. Included in\nthe attached patch.\n\n(I think it would be good to have the relation being complained about in\nthe error message, though that requires passing the name to\nGetDefaultTablespace.)\n\n> I do suggest we should raise an error if rule a3 hits and it mentions\n> the database tablespace (I stupidly forgot this critical point in the\n> previous email). I think astonishment is lesser that way.\n\nAs in the attached. 
When pg_default is the database tablespace, these\ncases fail with the patch, as expected:\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY) PARTITION BY LIST (a) TABLESPACE pg_default;\npsql: ERROR: cannot specify default tablespace for partitioned relations\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY USING INDEX TABLESPACE pg_default) PARTITION BY LIST (a);\npsql: ERROR: cannot specify default tablespace for partitioned relations\n\n\nalvherre=# SET default_tablespace TO 'pg_default';\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY) PARTITION BY LIST (a) ;\npsql: ERROR: cannot specify default tablespace for partitioned relations\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY) PARTITION BY LIST (a) TABLESPACE foo;\npsql: ERROR: cannot specify default tablespace for partitioned relations\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY USING INDEX TABLESPACE foo) PARTITION BY LIST (a);\npsql: ERROR: cannot specify default tablespace for partitioned relations\n\n\nThese cases work:\n\nalvherre=# CREATE TABLE q (a int PRIMARY KEY USING INDEX TABLESPACE foo) PARTITION BY LIST (a) TABLESPACE foo;\n\nalvherre=# SET default_tablespace TO '';\t-- the default\nalvherre=# CREATE TABLE q (a int PRIMARY KEY) PARTITION BY LIST (a);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 24 Apr 2019 18:40:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" }, { "msg_contents": "I have pushed this now, after putting back a few of the tests I had\nproposed earlier, as well as a couple of sentences in the docs to\nhopefully make it clearer how it works.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Apr 2019 10:35:46 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", 
"msg_from_op": false, "msg_subject": "Re: pg_dump is broken for partition tablespaces" } ]
[ { "msg_contents": "Hi,\n\nCommit bd7c95f0c1a38becffceb3ea7234d57167f6d4bf add DECLARE\nSTATEMENT support to ECPG. This introduced the new rule\nfor EXEC SQL CLOSE cur and with that it gets transformed into\nECPGclose().\n\nNow prior to the above commit, someone can declare the\ncursor in the SQL statement and \"CLOSE cur_name\" can be\nalso, execute as a normal statement.\n\nExample:\n\nEXEC SQL PREPARE cur_query FROM \"DECLARE cur1 CURSOR WITH HOLD FOR SELECT\ncount(*) FROM pg_class\";\nEXEC SQL PREPARE fetch_stmt FROM \"FETCH next FROM cur1\";\nEXEC SQL EXECUTE cur_query;\nEXEC SQL EXECUTE fetch_stmt INTO :c;\nEXEC SQL CLOSE cur1;\n\nWith commit bd7c95f0c1, \"EXEC SQL CLOSE cur1\" will fail\nand throw an error \"sqlcode -245 The cursor is invalid\".\n\nI think the problem here is ECPGclose(), tries to find the\ncursor into \"connection->cursor_stmts\" and if it doesn't\nfind it there, just throws an error. Maybe require fix\ninto ECPGclose() - rather than throwing an error continue\nexecuting statement \"CLOSE cur_name\" with ecpg_do().\n\nAttaching the ECPG program for reference.\n\nThanks,\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Wed, 6 Mar 2019 12:43:17 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "Hi,\n\n> Commit bd7c95f0c1a38becffceb3ea7234d57167f6d4bf add DECLARE\n> STATEMENT support to ECPG. 
This introduced the new rule\n> for EXEC SQL CLOSE cur and with that it gets transformed into\n> ECPGclose().\n> \n> Now prior to the above commit, someone can declare the\n> cursor in the SQL statement and \"CLOSE cur_name\" can be\n> also, execute as a normal statement.\n\nThat still works, the difference in your test case is that the DECLARE\nstatement is prepared.\n\n> Example:\n> \n> EXEC SQL PREPARE cur_query FROM \"DECLARE cur1 CURSOR WITH HOLD FOR\n> SELECT count(*) FROM pg_class\";\n> EXEC SQL PREPARE fetch_stmt FROM \"FETCH next FROM cur1\";\n> EXEC SQL EXECUTE cur_query;\n> EXEC SQL EXECUTE fetch_stmt INTO :c;\n> EXEC SQL CLOSE cur1;\n> \n> With commit bd7c95f0c1, \"EXEC SQL CLOSE cur1\" will fail\n> and throw an error \"sqlcode -245 The cursor is invalid\".\n> \n> I think the problem here is ECPGclose(), tries to find the\n> cursor into \"connection->cursor_stmts\" and if it doesn't\n> find it there, just throws an error. Maybe require fix\n> into ECPGclose() - rather than throwing an error continue\n> executing statement \"CLOSE cur_name\" with ecpg_do().\n\nThe problem as I see it is that the cursor is known to the backend but\nnot the library. Takaheshi-san, Hayato-san, any idea how to improve the\nsituation to not error out on statements that used to work?\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n", "msg_date": "Wed, 06 Mar 2019 12:41:01 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "On Thu, Mar 7, 2019 at 9:56 AM Michael Meskes <meskes@postgresql.org> wrote:\n\n> Hi,\n>\n> > Commit bd7c95f0c1a38becffceb3ea7234d57167f6d4bf add DECLARE\n> > STATEMENT support to ECPG. 
This introduced the new rule\n> > for EXEC SQL CLOSE cur and with that it gets transformed into\n> > ECPGclose().\n> >\n> > Now prior to the above commit, someone can declare the\n> > cursor in the SQL statement and \"CLOSE cur_name\" can be\n> > also, execute as a normal statement.\n>\n> That still works, the difference in your test case is that the DECLARE\n> statement is prepared.\n>\n> > Example:\n> >\n> > EXEC SQL PREPARE cur_query FROM \"DECLARE cur1 CURSOR WITH HOLD FOR\n> > SELECT count(*) FROM pg_class\";\n> > EXEC SQL PREPARE fetch_stmt FROM \"FETCH next FROM cur1\";\n> > EXEC SQL EXECUTE cur_query;\n> > EXEC SQL EXECUTE fetch_stmt INTO :c;\n> > EXEC SQL CLOSE cur1;\n> >\n> > With commit bd7c95f0c1, \"EXEC SQL CLOSE cur1\" will fail\n> > and throw an error \"sqlcode -245 The cursor is invalid\".\n> >\n> > I think the problem here is ECPGclose(), tries to find the\n> > cursor into \"connection->cursor_stmts\" and if it doesn't\n> > find it there, just throws an error. Maybe require fix\n> > into ECPGclose() - rather than throwing an error continue\n> > executing statement \"CLOSE cur_name\" with ecpg_do().\n>\n> The problem as I see it is that the cursor is known to the backend but\n> not the library.\n\n\nExactly. So maybe we should add logic into ECPGclose() if\nit doesn't find a cursor in the library - just send a query to the\nbackend rather than throwing an error.\n\n\n> Takaheshi-san, Hayato-san, any idea how to improve the\n> situation to not error out on statements that used to work?\n>\n> Michael\n> --\n> Michael Meskes\n> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\n> Meskes at (Debian|Postgresql) dot Org\n> Jabber: michael at xmpp dot meskes dot org\n> VfL Borussia! Força Barça! SF 49ers! 
Use Debian GNU/Linux, PostgreSQL\n>\n>\n>\n\n-- \nRushabh Lathia", "msg_date": "Thu, 7 Mar 2019 12:47:50 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "Dear Rushabh, Michael,\r\n\r\nThank you for reporting.\r\nI understood bugs had be embed in ecpg by me.\r\nI start checking codes and investigating solutions and other errors.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED", "msg_date": "Fri, 8 Mar 2019 01:54:28 +0000", "msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ECPG regression with DECLARE STATEMENT support" }, 
{ "msg_contents": "Dear Rushabh, Michael,\r\n\r\nI attached a simple bug-fixing patch.\r\nCould you review it?\r\n\r\nAn added logic is:\r\n\r\n1. Send a close statement to a backend process regardless of the existence of a cursor.\r\n\r\n2. If ecpg_do function returns false, raise “cursor is invalid” error.\r\n\r\n3. Remove cursor from application.\r\n\r\nI already checked this patch passes regression tests and Rushabh’s test code.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED", "msg_date": "Mon, 11 Mar 2019 02:03:29 +0000", "msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "> I attached a simple bug-fixing patch.\n\nI'm not happy with the situation, but don't see a better solution\neither. 
Therefore I committed the change to get rid of the regression.\n\nThanks.\n\nMichael \n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n", "msg_date": "Mon, 11 Mar 2019 16:19:39 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "Hi Kuroda-san\r\n\r\nI think that the 2nd argument of following ecpg_init() must be real_connection_name.\r\nIs it right?\r\n\r\nECPGdeallocate(int lineno, int c, const char *connection_name, const char *name)\r\n:\r\n con = ecpg_get_connection(real_connection_name);\r\n if (!ecpg_init(con, connection_name, lineno))\r\n ^^^^^^^^^^^^^^^\r\n\r\nRegards\r\nRyo Matsumura\r\n", "msg_date": "Wed, 13 Mar 2019 01:56:04 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "Dear Matsumura-san,\r\n\r\n> I think that the 2nd argument of following ecpg_init() must be real_connection_name.\r\n> Is it right?\r\n\r\nYes, I think it should be real_connection_name for raising correct error message. \r\nThis is also an leak of my code and I attached a patch.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED", "msg_date": "Wed, 13 Mar 2019 02:52:19 +0000", "msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "Hi Kurokawa-san\r\n\r\nI reviewd it. 
It's ok.\r\nI also confirm there is no same bug.\r\n\r\nRegards\r\nRyo Matsumura\r\n", "msg_date": "Wed, 13 Mar 2019 04:35:48 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ECPG regression with DECLARE STATEMENT support" }, { "msg_contents": "On Wed, Mar 13, 2019 at 04:35:48AM +0000, Matsumura, Ryo wrote:\n> Hi Kurokawa-san\n> \n> I reviewd it. It's ok.\n> I also confirm there is no same bug.\n\nFYI, this was applied a few weeks ago:\n\n\tAuthor: Michael Meskes <meskes@postgresql.org>\n\tDate: Fri Mar 15 22:35:24 2019 +0100\n\t\n\t Use correct connection name variable in ecpglib.\n\t\n\t Fixed-by: Kuroda-san <kuroda.hayato@jp.fujitsu.com>\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Apr 2019 13:57:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ECPG regression with DECLARE STATEMENT support" } ]
[ { "msg_contents": "Hi,\n\nApologies if this has been discussed before: When I run pg_basebackup in git\nhead against v11 server, it treats v11 as v12: Does not create recovery.conf, \nadds recovery parameters to postgresql.auto.conf, and also creates\nstandby.signal file. Is this expected, or a bug?\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Wed, 06 Mar 2019 11:55:12 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "pg_basebackup against older server versions" }, { "msg_contents": "On Wed, Mar 06, 2019 at 11:55:12AM +0300, Devrim Gündüz wrote:\n> Apologies if this has been discussed before: When I run pg_basebackup in git\n> head against v11 server, it treats v11 as v12: Does not create recovery.conf, \n> adds recovery parameters to postgresql.auto.conf, and also creates\n> standby.signal file. Is this expected, or a bug?\n\nYou are right, this is a bug. Compatibility with past server versions\nshould be preserved, and we made an effort to do so in the past (see\nthe switch to pg_wal/ for example). Fortunately, maintaining the\ncompatibility is simple enough as the connection information is close\nby so that we just need to change postgresql.auto.conf to\nrecovery.conf, and avoid the creation of standby.signal.\n--\nMichael", "msg_date": "Wed, 6 Mar 2019 18:09:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" }, { "msg_contents": "Hello\n\nMy fault. 
I thought pg_basebackup works only with same major version, sorry.\nHow about attached patch?\n\nregards, Sergei", "msg_date": "Wed, 06 Mar 2019 13:42:16 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" }, { "msg_contents": "On Wed, Mar 06, 2019 at 01:42:16PM +0300, Sergei Kornilov wrote:\n> My fault. I thought pg_basebackup works only with same major version, sorry.\n> How about attached patch?\n\nNo problem. Thanks for the patch, the logic looks good and I made\nsome adjustments as attached. Does that look fine to you?\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 16:27:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" }, { "msg_contents": "Hi\n\n> No problem. Thanks for the patch, the logic looks good and I made\n> some adjustments as attached. Does that look fine to you?\n\nLooks fine, thanks. I tested against HEAD and v11.2 with and without -R in both plain and tar formats.\n\nregards, Sergei\n\n", "msg_date": "Thu, 07 Mar 2019 10:57:46 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" }, { "msg_contents": "On Thu, Mar 07, 2019 at 10:57:46AM +0300, Sergei Kornilov wrote:\n> Looks fine, thanks. I tested against HEAD and v11.2 with and without\n> -R in both plain and tar formats. \n\nSame here, so I have committed the patch.\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 10:18:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" }, { "msg_contents": "Hi\n\nGreat, thank you!\n\nregards, Sergei\n", "msg_date": "Fri, 08 Mar 2019 11:46:03 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup against older server versions" } ]
[ { "msg_contents": "Hi,\n\n(added Fujita-san)\n\nI noticed a bug with how UPDATE tuple routing initializes ResultRelInfos\nto use for partition routing targets. Specifically, the bug occurs when\nUPDATE targets include a foreign partition that is locally modified (as\nopposed to being modified directly on the remove server) and its\nResultRelInfo (called subplan result rel in the source code) is reused for\ntuple routing:\n\n-- setup\ncreate extension postgres_fdw ;\ncreate server loopback foreign data wrapper postgres_fdw;\ncreate user mapping for current_user server loopback;\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1);\ncreate table p2base (a int check (a = 2));\ncreate foreign table p2 partition of p for values in (2) server loopback\noptions (table_name 'p2base');\ninsert into p values (1), (2);\n\n-- see in the plan that foreign partition p2 is locally modified\nexplain verbose update p set a = 2 from (values (1), (2)) s(x) where a =\ns.x returning *;\n QUERY PLAN\n\n─────────────────────────────────────────────────────────────────────────────────\n Update on public.p (cost=0.05..236.97 rows=50 width=42)\n Output: p1.a, \"*VALUES*\".column1\n Update on public.p1\n Foreign Update on public.p2\n Remote SQL: UPDATE public.p2base SET a = $2 WHERE ctid = $1 RETURNING a\n -> Hash Join (cost=0.05..45.37 rows=26 width=42)\n Output: 2, p1.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n Hash Cond: (p1.a = \"*VALUES*\".column1)\n -> Seq Scan on public.p1 (cost=0.00..35.50 rows=2550 width=10)\n Output: p1.ctid, p1.a\n -> Hash (cost=0.03..0.03 rows=2 width=32)\n Output: \"*VALUES*\".*, \"*VALUES*\".column1\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2\nwidth=32)\n Output: \"*VALUES*\".*, \"*VALUES*\".column1\n -> Hash Join (cost=100.05..191.59 rows=24 width=42)\n Output: 2, p2.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n Hash Cond: (p2.a = \"*VALUES*\".column1)\n -> Foreign Scan on public.p2 
(cost=100.00..182.27 rows=2409\nwidth=10)\n Output: p2.ctid, p2.a\n Remote SQL: SELECT a, ctid FROM public.p2base FOR UPDATE\n -> Hash (cost=0.03..0.03 rows=2 width=32)\n Output: \"*VALUES*\".*, \"*VALUES*\".column1\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2\nwidth=32)\n Output: \"*VALUES*\".*, \"*VALUES*\".column1\n\n\n-- as opposed to directly on remote side (because there's no local join)\nexplain verbose update p set a = 2 returning *;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────\n Update on public.p (cost=0.00..227.40 rows=5280 width=10)\n Output: p1.a\n Update on public.p1\n Foreign Update on public.p2\n -> Seq Scan on public.p1 (cost=0.00..35.50 rows=2550 width=10)\n Output: 2, p1.ctid\n -> Foreign Update on public.p2 (cost=100.00..191.90 rows=2730 width=10)\n Remote SQL: UPDATE public.p2base SET a = 2 RETURNING a\n(8 rows)\n\nRunning the first update query crashes:\n\nupdate p set a = 2 from (values (1), (2)) s(x) where a = s.x returning\ntableoid::regclass, *;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nThe problem seems to occur because ExecSetupPartitionTupleRouting thinks\nit can reuse p2's ResultRelInfo for tuple routing. In this case, it can't\nbe used, because its ri_FdwState contains information that will be needed\nfor postgresExecForeignUpdate to work, but it's still used today. Because\nit's assigned to be used for tuple routing, its ri_FdwState will be\noverwritten by postgresBeginForeignInsert that's invoked by the tuple\nrouting code using the aforementioned ResultRelInfo. Crash occurs when\npostgresExecForeignUpdate () can't find the information it's expecting in\nthe ri_FdwState.\n\nThe solution is to teach ExecSetupPartitionTupleRouting to avoid using a\nsubplan result rel if its ri_FdwState is already set. 
I was wondering if\nit should also check ri_usesFdwDirectModify is true, but in that case,\nri_FdwState is unused, so it's perhaps safe for tuple routing code to\nscribble on it.\n\nI have attached 2 patches, one for PG 11 where the problem first appeared\nand another for HEAD. The patch for PG 11 is significantly bigger due to\nhaving to handle the complexities of mapping of subplan result rel indexes\nto leaf partition indexes. Most of that complexity is gone in HEAD due to\n3f2393edefa5, so the patch for HEAD is much simpler. I've also added a\ntest in postgres_fdw.sql to exercise this test case.\n\nThanks,\nAmit", "msg_date": "Wed, 6 Mar 2019 18:33:02 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "bug in update tuple routing with foreign partitions" }, { "msg_contents": "Hi Amit,\n\n(2019/03/06 18:33), Amit Langote wrote:\n> I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos\n> to use for partition routing targets. Specifically, the bug occurs when\n> UPDATE targets include a foreign partition that is locally modified (as\n> opposed to being modified directly on the remove server) and its\n> ResultRelInfo (called subplan result rel in the source code) is reused for\n> tuple routing:\n\nWill look into this.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 06 Mar 2019 22:06:27 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/03/06 18:33), Amit Langote wrote:\n> I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos\n> to use for partition routing targets. 
Specifically, the bug occurs when\n> UPDATE targets include a foreign partition that is locally modified (as\n> opposed to being modified directly on the remove server) and its\n> ResultRelInfo (called subplan result rel in the source code) is reused for\n> tuple routing:\n>\n> -- setup\n> create extension postgres_fdw ;\n> create server loopback foreign data wrapper postgres_fdw;\n> create user mapping for current_user server loopback;\n> create table p (a int) partition by list (a);\n> create table p1 partition of p for values in (1);\n> create table p2base (a int check (a = 2));\n> create foreign table p2 partition of p for values in (2) server loopback\n> options (table_name 'p2base');\n> insert into p values (1), (2);\n>\n> -- see in the plan that foreign partition p2 is locally modified\n> explain verbose update p set a = 2 from (values (1), (2)) s(x) where a =\n> s.x returning *;\n> QUERY PLAN\n>\n> ─────────────────────────────────────────────────────────────────────────────────\n> Update on public.p (cost=0.05..236.97 rows=50 width=42)\n> Output: p1.a, \"*VALUES*\".column1\n> Update on public.p1\n> Foreign Update on public.p2\n> Remote SQL: UPDATE public.p2base SET a = $2 WHERE ctid = $1 RETURNING a\n> -> Hash Join (cost=0.05..45.37 rows=26 width=42)\n> Output: 2, p1.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n> Hash Cond: (p1.a = \"*VALUES*\".column1)\n> -> Seq Scan on public.p1 (cost=0.00..35.50 rows=2550 width=10)\n> Output: p1.ctid, p1.a\n> -> Hash (cost=0.03..0.03 rows=2 width=32)\n> Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2\n> width=32)\n> Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> -> Hash Join (cost=100.05..191.59 rows=24 width=42)\n> Output: 2, p2.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n> Hash Cond: (p2.a = \"*VALUES*\".column1)\n> -> Foreign Scan on public.p2 (cost=100.00..182.27 rows=2409\n> width=10)\n> Output: p2.ctid, p2.a\n> Remote SQL: SELECT a, ctid FROM public.p2base FOR 
UPDATE\n> -> Hash (cost=0.03..0.03 rows=2 width=32)\n> Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2\n> width=32)\n> Output: \"*VALUES*\".*, \"*VALUES*\".column1\n>\n>\n> -- as opposed to directly on remote side (because there's no local join)\n> explain verbose update p set a = 2 returning *;\n> QUERY PLAN\n> ─────────────────────────────────────────────────────────────────────────────\n> Update on public.p (cost=0.00..227.40 rows=5280 width=10)\n> Output: p1.a\n> Update on public.p1\n> Foreign Update on public.p2\n> -> Seq Scan on public.p1 (cost=0.00..35.50 rows=2550 width=10)\n> Output: 2, p1.ctid\n> -> Foreign Update on public.p2 (cost=100.00..191.90 rows=2730 width=10)\n> Remote SQL: UPDATE public.p2base SET a = 2 RETURNING a\n> (8 rows)\n>\n> Running the first update query crashes:\n>\n> update p set a = 2 from (values (1), (2)) s(x) where a = s.x returning\n> tableoid::regclass, *;\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> The problem seems to occur because ExecSetupPartitionTupleRouting thinks\n> it can reuse p2's ResultRelInfo for tuple routing. In this case, it can't\n> be used, because its ri_FdwState contains information that will be needed\n> for postgresExecForeignUpdate to work, but it's still used today. Because\n> it's assigned to be used for tuple routing, its ri_FdwState will be\n> overwritten by postgresBeginForeignInsert that's invoked by the tuple\n> routing code using the aforementioned ResultRelInfo. 
Crash occurs when\n> postgresExecForeignUpdate () can't find the information it's expecting in\n> the ri_FdwState.\n\nAgreed, as I said before in another thread.\n\n> The solution is to teach ExecSetupPartitionTupleRouting to avoid using a\n> subplan result rel if its ri_FdwState is already set.\n\nI'm not sure that is a good idea, because that requires to create a new \nResultRelInfo, which is not free; I think it requires a lot of work. \nAnother solution to avoid that would be to store the fmstate created by \npostgresBeginForeignInsert() into the ri_FdwState, not overwrite the \nri_FdwState, like the attached. This would not need any changes to the \ncore, and this would not cause any overheads either, IIUC. What do you \nthink about that?\n\n> I have attached 2 patches, one for PG 11 where the problem first appeared\n> and another for HEAD. The patch for PG 11 is significantly bigger due to\n> having to handle the complexities of mapping of subplan result rel indexes\n> to leaf partition indexes. 
Most of that complexity is gone in HEAD due to\n> 3f2393edefa5, so the patch for HEAD is much simpler.\n\nThanks for the patches!\n\n> I've also added a\n> test in postgres_fdw.sql to exercise this test case.\n\nThanks again, but the test case you added works well without any fix:\n\n+-- Check case where the foreign partition is a subplan target rel and\n+-- foreign parttion is locally modified (target table being joined\n+-- locally prevents a direct/remote modification plan).\n+explain (verbose, costs off)\n+update utrtest set a = 1 from (values (1), (2)) s(x) where a = s.x \nreturning *;\n+ QUERY PLAN \n\n+------------------------------------------------------------------------------\n+ Update on public.utrtest\n+ Output: remp.a, remp.b, \"*VALUES*\".column1\n+ Foreign Update on public.remp\n+ Remote SQL: UPDATE public.loct SET a = $2 WHERE ctid = $1 \nRETURNING a, b\n+ Update on public.locp\n+ -> Hash Join\n+ Output: 1, remp.b, remp.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n+ Hash Cond: (remp.a = \"*VALUES*\".column1)\n+ -> Foreign Scan on public.remp\n+ Output: remp.b, remp.ctid, remp.a\n+ Remote SQL: SELECT a, b, ctid FROM public.loct FOR UPDATE\n+ -> Hash\n+ Output: \"*VALUES*\".*, \"*VALUES*\".column1\n+ -> Values Scan on \"*VALUES*\"\n+ Output: \"*VALUES*\".*, \"*VALUES*\".column1\n+ -> Hash Join\n+ Output: 1, locp.b, locp.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n+ Hash Cond: (locp.a = \"*VALUES*\".column1)\n+ -> Seq Scan on public.locp\n+ Output: locp.b, locp.ctid, locp.a\n+ -> Hash\n+ Output: \"*VALUES*\".*, \"*VALUES*\".column1\n+ -> Values Scan on \"*VALUES*\"\n+ Output: \"*VALUES*\".*, \"*VALUES*\".column1\n+(24 rows)\n\nIn this test case, the foreign partition is updated before ri_FdwState \nis overwritten by postgresBeginForeignInsert() that's invoked by the \ntuple routing code, so it would work without any fix. 
So I modified \nthis test case as such.\n\nSorry for the long delay, again.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 10 Apr 2019 17:38:14 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Fujita-san,\n\nThanks for the review.\n\nOn 2019/04/10 17:38, Etsuro Fujita wrote:\n> (2019/03/06 18:33), Amit Langote wrote:\n>> I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos\n>> to use for partition routing targets.  Specifically, the bug occurs when\n>> UPDATE targets include a foreign partition that is locally modified (as\n>> opposed to being modified directly on the remove server) and its\n>> ResultRelInfo (called subplan result rel in the source code) is reused for\n>> tuple routing:\n> \n>> The solution is to teach ExecSetupPartitionTupleRouting to avoid using a\n>> subplan result rel if its ri_FdwState is already set.\n> \n> I'm not sure that is a good idea, because that requires to create a new\n> ResultRelInfo, which is not free; I think it requires a lot of work.\n\nAfter considering this a bit more and studying your proposed solution, I\ntend to agree. Beside the performance considerations you point out that I\ntoo think are valid, I realize that going with my approach would mean\nembedding some assumptions in the core code about ri_FdwState; in this\ncase, assuming that a subplan result rel's ri_FdwState is not to be\nre-purposed for tuple routing insert, which is only based on looking at\npostgres_fdw's implementation. 
Not to mention the ensuing complexity in\nthe core tuple routing code -- it now needs to account for the fact that\nnot all subplan result rels may have been re-purposed for tuple-routing,\nsomething that's too complicated to handle in PG 11.\n\nIOW, let's call this a postgres_fdw bug and fix it there as you propose.\n\n> Another solution to avoid that would be to store the fmstate created by\n> postgresBeginForeignInsert() into the ri_FdwState, not overwrite the\n> ri_FdwState, like the attached.  This would not need any changes to the\n> core, and this would not cause any overheads either, IIUC.  What do you\n> think about that?\n\n+1. Just one comment:\n\n+ /*\n+ * If the given resultRelInfo already has PgFdwModifyState set, it means\n+ * the foreign table is an UPDATE subplan resultrel; in which case, store\n+ * the resulting state into the PgFdwModifyState.\n+ */\n\nThe last \"PgFdwModifyState\" should be something else? Maybe,\nsub-PgModifyState or sub_fmstate?\n\n\n>> I've also added a\n>> test in postgres_fdw.sql to exercise this test case.\n> \n> Thanks again, but the test case you added works well without any fix:\n\nOops, I should really adopt a habit of adding a test case before code.\n\n> +-- Check case where the foreign partition is a subplan target rel and\n> +-- foreign parttion is locally modified (target table being joined\n> +-- locally prevents a direct/remote modification plan).\n> +explain (verbose, costs off)\n> +update utrtest set a = 1 from (values (1), (2)) s(x) where a = s.x\n> returning *;\n> +                                  QUERY PLAN\n> +------------------------------------------------------------------------------\n> \n> + Update on public.utrtest\n> +   Output: remp.a, remp.b, \"*VALUES*\".column1\n> +   Foreign Update on public.remp\n> +     Remote SQL: UPDATE public.loct SET a = $2 WHERE ctid = $1 RETURNING\n> a, b\n> +   Update on public.locp\n> +   ->  Hash Join\n> +         Output: 1, remp.b, remp.ctid, \"*VALUES*\".*, 
\"*VALUES*\".column1\n> +         Hash Cond: (remp.a = \"*VALUES*\".column1)\n> +         ->  Foreign Scan on public.remp\n> +               Output: remp.b, remp.ctid, remp.a\n> +               Remote SQL: SELECT a, b, ctid FROM public.loct FOR UPDATE\n> +         ->  Hash\n> +               Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> +               ->  Values Scan on \"*VALUES*\"\n> +                     Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> +   ->  Hash Join\n> +         Output: 1, locp.b, locp.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n> +         Hash Cond: (locp.a = \"*VALUES*\".column1)\n> +         ->  Seq Scan on public.locp\n> +               Output: locp.b, locp.ctid, locp.a\n> +         ->  Hash\n> +               Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> +               ->  Values Scan on \"*VALUES*\"\n> +                     Output: \"*VALUES*\".*, \"*VALUES*\".column1\n> +(24 rows)\n> \n> In this test case, the foreign partition is updated before ri_FdwState is\n> overwritten by postgresBeginForeignInsert() that's invoked by the tuple\n> routing code, so it would work without any fix.  So I modified this test\n> case as such.\n\nThanks for fixing that. I hadn't noticed that foreign partition remp is\nupdated before local partition locp; should've added a locp0 for values\n(0) so that remp appears second in the ModifyTable's subplans. Or, well,\nmake the foreign table remp appear later by making it a partition for\nvalues in (3) as your patch does.\n\nBTW, have you noticed that the RETURNING clause returns the same row twice?\n\n+update utrtest set a = 3 from (values (2), (3)) s(x) where a = s.x\nreturning *;\n+ a | b | x\n+---+-------------------+---\n+ 3 | qux triggered ! | 2\n+ 3 | xyzzy triggered ! | 3\n+ 3 | qux triggered ! 
| 3\n+(3 rows)\n\nYou can see that the row that's moved into remp is returned twice (one\nwith \"qux triggered!\" in b column), whereas it should've been only once?\nI checked the behavior with moving into a local partition just to be sure\nand the changed row is returned only once.\n\ncreate table p (a int, b text) partition by list (a);\ncreate table p1 partition of p for values in (1);\ncreate table p2 partition of p for values in (2);\ninsert into p values (1, 'p1'), (2, 'p2');\n\nselect tableoid::regclass, * from p;\n tableoid │ a │ b\n──────────┼───┼────\n p1 │ 1 │ p1\n p2 │ 2 │ p2\n(2 rows)\n\nupdate p set a = 2 returning tableoid::regclass, *;\n tableoid │ a │ b\n──────────┼───┼────\n p2 │ 2 │ p1\n p2 │ 2 │ p2\n(2 rows)\n\nAdd a foreign table partition in the mix and that's no longer true.\n\ncreate extension postgres_fdw ;\ncreate server loopback foreign data wrapper postgres_fdw;\ncreate user mapping for current_user server loopback;\ncreate table p3base (a int check (a = 3), b text);\ncreate foreign table p3 partition of p for values in (3) server loopback\noptions (table_name 'p3base');\ndelete from p;\ninsert into p values (1, 'p1'), (2, 'p2'), (3, 'p3');\n\n\nselect tableoid::regclass, * from p;\n tableoid │ a │ b\n──────────┼───┼────\n p1 │ 1 │ p1\n p2 │ 2 │ p2\n p3 │ 3 │ p3\n(3 rows)\n\nupdate p set a = 3 returning tableoid::regclass, *;\n tableoid │ a │ b\n──────────┼───┼────\n p3 │ 3 │ p1\n p3 │ 3 │ p2\n p3 │ 3 │ p3\n p3 │ 3 │ p1\n p3 │ 3 │ p2\n(5 rows)\n\nselect tableoid::regclass, * from p;\n tableoid │ a │ b\n──────────┼───┼────\n p3 │ 3 │ p3\n p3 │ 3 │ p1\n p3 │ 3 │ p2\n(3 rows)\n\nI'm not sure if it's a bug that ought to be fixed, but thought I should\nhighlight it just in case.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Thu, 11 Apr 2019 14:33:57 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/11 
14:33), Amit Langote wrote:\n> On 2019/04/10 17:38, Etsuro Fujita wrote:\n>> (2019/03/06 18:33), Amit Langote wrote:\n>>> I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos\n>>> to use for partition routing targets. Specifically, the bug occurs when\n>>> UPDATE targets include a foreign partition that is locally modified (as\n>>> opposed to being modified directly on the remove server) and its\n>>> ResultRelInfo (called subplan result rel in the source code) is reused for\n>>> tuple routing:\n>>\n>>> The solution is to teach ExecSetupPartitionTupleRouting to avoid using a\n>>> subplan result rel if its ri_FdwState is already set.\n>>\n>> I'm not sure that is a good idea, because that requires to create a new\n>> ResultRelInfo, which is not free; I think it requires a lot of work.\n>\n> After considering this a bit more and studying your proposed solution, I\n> tend to agree. Beside the performance considerations you point out that I\n> too think are valid, I realize that going with my approach would mean\n> embedding some assumptions in the core code about ri_FdwState; in this\n> case, assuming that a subplan result rel's ri_FdwState is not to be\n> re-purposed for tuple routing insert, which is only based on looking at\n> postgres_fdw's implementation.\n\nYeah, that assumption might not apply to some other FDWs.\n\n> Not to mention the ensuing complexity in\n> the core tuple routing code -- it now needs to account for the fact that\n> not all subplan result rels may have been re-purposed for tuple-routing,\n> something that's too complicated to handle in PG 11.\n\nI think so too.\n\n> IOW, let's call this a postgres_fdw bug and fix it there as you propose.\n\nAgreed.\n\n>> Another solution to avoid that would be to store the fmstate created by\n>> postgresBeginForeignInsert() into the ri_FdwState, not overwrite the\n>> ri_FdwState, like the attached. 
This would not need any changes to the\n>> core, and this would not cause any overheads either, IIUC. What do you\n>> think about that?\n>\n> +1. Just one comment:\n>\n> + /*\n> + * If the given resultRelInfo already has PgFdwModifyState set, it means\n> + * the foreign table is an UPDATE subplan resultrel; in which case, store\n> + * the resulting state into the PgFdwModifyState.\n> + */\n>\n> The last \"PgFdwModifyState\" should be something else? Maybe,\n> sub-PgModifyState or sub_fmstate?\n\nOK, will revise. My favorite would be the latter.\n\n>>> I've also added a\n>>> test in postgres_fdw.sql to exercise this test case.\n>>\n>> Thanks again, but the test case you added works well without any fix:\n>\n> Oops, I should really adopt a habit of adding a test case before code.\n>\n>> +-- Check case where the foreign partition is a subplan target rel and\n>> +-- foreign parttion is locally modified (target table being joined\n>> +-- locally prevents a direct/remote modification plan).\n>> +explain (verbose, costs off)\n>> +update utrtest set a = 1 from (values (1), (2)) s(x) where a = s.x\n>> returning *;\n>> + QUERY PLAN\n>> +------------------------------------------------------------------------------\n>>\n>> + Update on public.utrtest\n>> + Output: remp.a, remp.b, \"*VALUES*\".column1\n>> + Foreign Update on public.remp\n>> + Remote SQL: UPDATE public.loct SET a = $2 WHERE ctid = $1 RETURNING\n>> a, b\n>> + Update on public.locp\n>> + -> Hash Join\n>> + Output: 1, remp.b, remp.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n>> + Hash Cond: (remp.a = \"*VALUES*\".column1)\n>> + -> Foreign Scan on public.remp\n>> + Output: remp.b, remp.ctid, remp.a\n>> + Remote SQL: SELECT a, b, ctid FROM public.loct FOR UPDATE\n>> + -> Hash\n>> + Output: \"*VALUES*\".*, \"*VALUES*\".column1\n>> + -> Values Scan on \"*VALUES*\"\n>> + Output: \"*VALUES*\".*, \"*VALUES*\".column1\n>> + -> Hash Join\n>> + Output: 1, locp.b, locp.ctid, \"*VALUES*\".*, \"*VALUES*\".column1\n>> + Hash 
Cond: (locp.a = \"*VALUES*\".column1)\n>> + -> Seq Scan on public.locp\n>> + Output: locp.b, locp.ctid, locp.a\n>> + -> Hash\n>> + Output: \"*VALUES*\".*, \"*VALUES*\".column1\n>> + -> Values Scan on \"*VALUES*\"\n>> + Output: \"*VALUES*\".*, \"*VALUES*\".column1\n>> +(24 rows)\n>>\n>> In this test case, the foreign partition is updated before ri_FdwState is\n>> overwritten by postgresBeginForeignInsert() that's invoked by the tuple\n>> routing code, so it would work without any fix. So I modified this test\n>> case as such.\n>\n> Thanks for fixing that. I hadn't noticed that foreign partition remp is\n> updated before local partition locp; should've added a locp0 for values\n> (0) so that remp appears second in the ModifyTable's subplans. Or, well,\n> make the foreign table remp appear later by making it a partition for\n> values in (3) as your patch does.\n>\n> BTW, have you noticed that the RETURNING clause returns the same row twice?\n\nI noticed this, but I didn't think it hard. :(\n\n> +update utrtest set a = 3 from (values (2), (3)) s(x) where a = s.x\n> returning *;\n> + a | b | x\n> +---+-------------------+---\n> + 3 | qux triggered ! | 2\n> + 3 | xyzzy triggered ! | 3\n> + 3 | qux triggered ! | 3\n> +(3 rows)\n>\n> You can see that the row that's moved into remp is returned twice (one\n> with \"qux triggered!\" in b column), whereas it should've been only once?\n\nYeah, this is unexpected behavior, so will look into this. Thanks for \npointing that out!\n\nBest regards,\nEtsuro Fujita\n\n\n\n", "msg_date": "Thu, 11 Apr 2019 20:31:06 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/11 20:31), Etsuro Fujita wrote:\n> (2019/04/11 14:33), Amit Langote wrote:\n>> BTW, have you noticed that the RETURNING clause returns the same row\n>> twice?\n>\n> I noticed this, but I didn't think it hard. 
:(\n>\n>> +update utrtest set a = 3 from (values (2), (3)) s(x) where a = s.x\n>> returning *;\n>> + a | b | x\n>> +---+-------------------+---\n>> + 3 | qux triggered ! | 2\n>> + 3 | xyzzy triggered ! | 3\n>> + 3 | qux triggered ! | 3\n>> +(3 rows)\n>>\n>> You can see that the row that's moved into remp is returned twice (one\n>> with \"qux triggered!\" in b column), whereas it should've been only once?\n>\n> Yeah, this is unexpected behavior, so will look into this.\n\nI think the reason for that is: the row routed to remp is incorrectly \nfetched as a to-be-updated row when updating remp as a subplan \ntargetrel. The right way to fix this would be to have some way in \npostgres_fdw in which we don't fetch such rows when updating such \nsubplan targetrels. I tried to figure out a (simple) way to do that, \nbut I couldn't. One probably-simple solution I came up with is to sort \nsubplan targetrels into the order in which foreign-table subplan \ntargetrels get processed first in ExecModifyTable(). (Note: currently, \nrows can be moved from local partitions to a foreign-table partition, \nbut rows cannot be moved from foreign-table partitions to another \npartition, so we wouldn't encounter this situation once we sort like \nthat.) But I think that's ugly, and I don't think it's a good idea to \nchange the core, just for postgres_fdw. So what I'm thinking is to \nthrow an error for cases like this. 
(Though, I think we should keep to \nallow rows to be moved from local partitions to a foreign-table subplan \ntargetrel that has been updated already.)\n\nWhat do you think about that?\n\nBest regards,\nEtsuro Fujita\n\n\n\n", "msg_date": "Wed, 17 Apr 2019 21:49:24 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Fujita-san,\n\nOn 2019/04/17 21:49, Etsuro Fujita wrote:\n> (2019/04/11 20:31), Etsuro Fujita wrote:\n>> (2019/04/11 14:33), Amit Langote wrote:\n>>> BTW, have you noticed that the RETURNING clause returns the same row\n>>> twice?\n>>\n>> I noticed this, but I didn't think it hard. :(\n>>\n>>> +update utrtest set a = 3 from (values (2), (3)) s(x) where a = s.x\n>>> returning *;\n>>> + a | b | x\n>>> +---+-------------------+---\n>>> + 3 | qux triggered ! | 2\n>>> + 3 | xyzzy triggered ! | 3\n>>> + 3 | qux triggered ! | 3\n>>> +(3 rows)\n>>>\n>>> You can see that the row that's moved into remp is returned twice (one\n>>> with \"qux triggered!\" in b column), whereas it should've been only once?\n>>\n>> Yeah, this is unexpected behavior, so will look into this.\n\nThanks for investigating.\n\n> I think the reason for that is: the row routed to remp is incorrectly\n> fetched as a to-be-updated row when updating remp as a subplan targetrel.\n\nYeah. In the fully-local case, that is, where both the source and the\ntarget partition of a row movement operation are local tables, heap AM\nensures that tuples that's moved into a given relation in the same command\n(by way of row movement) are not returned as to-be-updated, because it\ndeems such tuples invisible. 
The \"same command\" part being crucial for\nthat to work.\n\nIn the case where the target of a row movement operation is a foreign\ntable partition, the INSERT used as part of row movement and subsequent\nUPDATE of the same foreign table are distinct commands for the remote\nserver. So, the rows inserted by the 1st command (as part of the row\nmovement) are deemed visible by the 2nd command (UPDATE) even if both are\noperating within the same transaction.\n\nI guess there's no easy way for postgres_fdw to make the remote server\nconsider them as a single command. IOW, no way to make the remote server\nnot touch those \"moved-in\" rows during the UPDATE part of the local query.\n \n> The right way to fix this would be to have some way in postgres_fdw in\n> which we don't fetch such rows when updating such subplan targetrels.  I\n> tried to figure out a (simple) way to do that, but I couldn't.\n\nYeah, that seems a bit hard to ensure with our current infrastructure.\n\n> One\n> probably-simple solution I came up with is to sort subplan targetrels into\n> the order in which foreign-table subplan targetrels get processed first in\n> ExecModifyTable().  (Note: currently, rows can be moved from local\n> partitions to a foreign-table partition, but rows cannot be moved from\n> foreign-table partitions to another partition, so we wouldn't encounter\n> this situation once we sort like that.)  But I think that's ugly, and I\n> don't think it's a good idea to change the core, just for postgres_fdw.\n\nAgreed that it seems like contorting the core code to accommodate\nlimitations of postgres_fdw.\n\n> So what I'm thinking is to throw an error for cases like this.  
(Though, I\n> think we should keep to allow rows to be moved from local partitions to a\n> foreign-table subplan targetrel that has been updated already.)\n\nHmm, how would you distinguish (totally inside postgres_fdw I presume) the\ntwo cases?\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Thu, 18 Apr 2019 14:06:46 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "On 2019/04/18 14:06, Amit Langote wrote:\n> Fujita-san,\n> \n> On 2019/04/17 21:49, Etsuro Fujita wrote:\n>> (2019/04/11 20:31), Etsuro Fujita wrote:\n>>> (2019/04/11 14:33), Amit Langote wrote:\n>>>> BTW, have you noticed that the RETURNING clause returns the same row\n>>>> twice?\n>>>\n>>> I noticed this, but I didn't think it hard. :(\n>>>\n>>>> +update utrtest set a = 3 from (values (2), (3)) s(x) where a = s.x\n>>>> returning *;\n>>>> + a | b | x\n>>>> +---+-------------------+---\n>>>> + 3 | qux triggered ! | 2\n>>>> + 3 | xyzzy triggered ! | 3\n>>>> + 3 | qux triggered ! | 3\n>>>> +(3 rows)\n>>>>\n>>>> You can see that the row that's moved into remp is returned twice (one\n>>>> with \"qux triggered!\" in b column), whereas it should've been only once?\n>>>\n>>> Yeah, this is unexpected behavior, so will look into this.\n> \n> Thanks for investigating.\n> \n>> I think the reason for that is: the row routed to remp is incorrectly\n>> fetched as a to-be-updated row when updating remp as a subplan targetrel.\n> \n> Yeah. In the fully-local case, that is, where both the source and the\n> target partition of a row movement operation are local tables, heap AM\n> ensures that tuples that's moved into a given relation in the same command\n> (by way of row movement) are not returned as to-be-updated, because it\n> deems such tuples invisible. 
The \"same command\" part being crucial for\n> that to work.\n> \n> In the case where the target of a row movement operation is a foreign\n> table partition, the INSERT used as part of row movement and subsequent\n> UPDATE of the same foreign table are distinct commands for the remote\n> server. So, the rows inserted by the 1st command (as part of the row\n> movement) are deemed visible by the 2nd command (UPDATE) even if both are\n> operating within the same transaction.\n> \n> I guess there's no easy way for postgres_fdw to make the remote server\n> consider them as a single command. IOW, no way to make the remote server\n> not touch those \"moved-in\" rows during the UPDATE part of the local query.\n>  \n>> The right way to fix this would be to have some way in postgres_fdw in\n>> which we don't fetch such rows when updating such subplan targetrels.  I\n>> tried to figure out a (simple) way to do that, but I couldn't.\n> \n> Yeah, that seems a bit hard to ensure with our current infrastructure.\n> \n>> One\n>> probably-simple solution I came up with is to sort subplan targetrels into\n>> the order in which foreign-table subplan targetrels get processed first in\n>> ExecModifyTable().  (Note: currently, rows can be moved from local\n>> partitions to a foreign-table partition, but rows cannot be moved from\n>> foreign-table partitions to another partition, so we wouldn't encounter\n>> this situation once we sort like that.)  But I think that's ugly, and I\n>> don't think it's a good idea to change the core, just for postgres_fdw.\n> \n> Agreed that it seems like contorting the core code to accommodate\n> limitations of postgres_fdw.\n> \n>> So what I'm thinking is to throw an error for cases like this.  
(Though, I\n>> think we should keep to allow rows to be moved from local partitions to a\n>> foreign-table subplan targetrel that has been updated already.)\n>\n> Hmm, how would you distinguish (totally inside postgres_fdw I presume) the\n> two cases?\n\nForgot to say that since this is a separate issue from the original bug\nreport, maybe we can first finish fixing the latter. What do you think?\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Thu, 18 Apr 2019 14:08:57 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Amit-san,\n\nThanks for the comments!\n\n(2019/04/18 14:08), Amit Langote wrote:\n> On 2019/04/18 14:06, Amit Langote wrote:\n>> On 2019/04/17 21:49, Etsuro Fujita wrote:\n\n>>> I think the reason for that is: the row routed to remp is incorrectly\n>>> fetched as a to-be-updated row when updating remp as a subplan targetrel.\n>>\n>> Yeah. In the fully-local case, that is, where both the source and the\n>> target partition of a row movement operation are local tables, heap AM\n>> ensures that tuples that's moved into a given relation in the same command\n>> (by way of row movement) are not returned as to-be-updated, because it\n>> deems such tuples invisible. The \"same command\" part being crucial for\n>> that to work.\n>>\n>> In the case where the target of a row movement operation is a foreign\n>> table partition, the INSERT used as part of row movement and subsequent\n>> UPDATE of the same foreign table are distinct commands for the remote\n>> server. So, the rows inserted by the 1st command (as part of the row\n>> movement) are deemed visible by the 2nd command (UPDATE) even if both are\n>> operating within the same transaction.\n\nYeah, I think so too.\n\n>> I guess there's no easy way for postgres_fdw to make the remote server\n>> consider them as a single command. 
IOW, no way to make the remote server\n>> not touch those \"moved-in\" rows during the UPDATE part of the local query.\n\nI agree.\n\n>>> The right way to fix this would be to have some way in postgres_fdw in\n>>> which we don't fetch such rows when updating such subplan targetrels. I\n>>> tried to figure out a (simple) way to do that, but I couldn't.\n>>\n>> Yeah, that seems a bit hard to ensure with our current infrastructure.\n\nYeah, I think we should leave that for future work.\n\n>>> So what I'm thinking is to throw an error for cases like this. (Though, I\n>>> think we should keep to allow rows to be moved from local partitions to a\n>>> foreign-table subplan targetrel that has been updated already.)\n>>\n>> Hmm, how would you distinguish (totally inside postgred_fdw I presume) the\n>> two cases?\n\nOne thing I came up with to do that is this:\n\n@@ -1917,6 +1937,23 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n\n initStringInfo(&sql);\n\n+ /*\n+ * If the foreign table is an UPDATE subplan resultrel that \nhasn't yet\n+ * been updated, routing tuples to the table might yield incorrect\n+ * results, because if routing tuples, routed tuples will be \nmistakenly\n+ * read from the table and updated twice when updating the table \n--- it\n+ * would be nice if we could handle this case; but for now, \nthrow an error\n+ * for safety.\n+ */\n+ if (plan && plan->operation == CMD_UPDATE &&\n+ (resultRelInfo->ri_usesFdwDirectModify ||\n+ resultRelInfo->ri_FdwState) &&\n+ resultRelInfo > mtstate->resultRelInfo + \nmtstate->mt_whichplan)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot route tuples into \nforeign table to be updated \\\"%s\\\"\",\n+ \nRelationGetRelationName(rel))));\n\n> Forgot to say that since this is a separate issue from the original bug\n> report, maybe we can first finish fixing the latter. 
What to do you think?\n\nYeah, I think we can do that, but my favorite would be to fix the two \nissues in a single shot, because they seem to me rather close to each \nother. So I updated a new version in a single patch, which I'm attaching.\n\nNotes:\n\n* I kept all the changes in the previous patch, because otherwise \npostgres_fdw would fail to release resources for a foreign-insert \noperation created by postgresBeginForeignInsert() for a tuple-routable \nforeign table (ie, a foreign-table subplan resultrel that has been \nupdated already) during postgresEndForeignInsert().\n\n* I revised a comment according to your previous comment, though I \nchanged a state name: s/sub_fmstate/aux_fmstate/\n\n* DirectModify also has the latter issue. Here is an example:\n\npostgres=# create table p (a int, b text) partition by list (a);\npostgres=# create table p1 partition of p for values in (1);\npostgres=# create table p2base (a int check (a = 2 or a = 3), b text);\npostgres=# create foreign table p2 partition of p for values in (2, 3) \nserver loopback options (table_name 'p2base');\n\npostgres=# insert into p values (1, 'foo');\nINSERT 0 1\npostgres=# explain verbose update p set a = a + 1;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Update on public.p (cost=0.00..176.21 rows=2511 width=42)\n Update on public.p1\n Foreign Update on public.p2\n -> Seq Scan on public.p1 (cost=0.00..25.88 rows=1270 width=42)\n Output: (p1.a + 1), p1.b, p1.ctid\n -> Foreign Update on public.p2 (cost=100.00..150.33 rows=1241 \nwidth=42)\n Remote SQL: UPDATE public.p2base SET a = (a + 1)\n(7 rows)\n\npostgres=# update p set a = a + 1;\nUPDATE 2\npostgres=# select * from p;\n a | b\n---+-----\n 3 | foo\n(1 row)\n\nAs shown in the above bit added to postgresBeginForeignInsert(), I \nmodified the patch so that we throw an error for cases like this even \nwhen using a direct modification plan, and I also added regression test \ncases for 
that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 18 Apr 2019 22:10:06 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/18 22:10), Etsuro Fujita wrote:\n> Notes:\n>\n> * I kept all the changes in the previous patch, because otherwise\n> postgres_fdw would fail to release resources for a foreign-insert\n> operation created by postgresBeginForeignInsert() for a tuple-routable\n> foreign table (ie, a foreign-table subplan resultrel that has been\n> updated already) during postgresEndForeignInsert().\n\nI noticed that this explanation was not right. Let me correct myself. \nThe reason why I kept those changes is: without those changes, we would \nfail to release the resources for a foreign-update operation (ie, \nfmstate) created by postgresBeginForeignModify(), replaced by the \nfmstate for the foreign-insert operation, because when doing \nExecEndPlan(), we first call postgresEndForeignModify() and then call \npostgresEndForeignInsert(); so, if we didn't keep those changes, we \nwould *mistakenly* release the fmstate for the foreign-insert operation \nin postgresEndForeignModify(), and we wouldn't do anything about the \nfmstate for the foreign-update operation in that function and in the \nsubsequent postgresEndForeignInsert().\n\nBest regards,\nEtsuro Fujita\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 12:15:11 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Fujita-san,\n\nOn 2019/04/18 22:10, Etsuro Fujita wrote:\n>> On 2019/04/18 14:06, Amit Langote wrote:\n>>> On 2019/04/17 21:49, Etsuro Fujita wrote:\n>>>> So what I'm thinking is to throw an error for cases like this. 
\n>>>> (Though, I\n>>>> think we should keep to allow rows to be moved from local partitions to a\n>>>> foreign-table subplan targetrel that has been updated already.)\n>>>\n>>> Hmm, how would you distinguish (totally inside postgred_fdw I presume) the\n>>> two cases?\n> \n> One thing I came up with to do that is this:\n> \n> @@ -1917,6 +1937,23 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n> \n>         initStringInfo(&sql);\n> \n> +       /*\n> +        * If the foreign table is an UPDATE subplan resultrel that hasn't\n> yet\n> +        * been updated, routing tuples to the table might yield incorrect\n> +        * results, because if routing tuples, routed tuples will be\n> mistakenly\n> +        * read from the table and updated twice when updating the table\n> --- it\n> +        * would be nice if we could handle this case; but for now, throw\n> an error\n> +        * for safety.\n> +        */\n> +       if (plan && plan->operation == CMD_UPDATE &&\n> +               (resultRelInfo->ri_usesFdwDirectModify ||\n> +                resultRelInfo->ri_FdwState) &&\n> +               resultRelInfo > mtstate->resultRelInfo +\n> mtstate->mt_whichplan)\n> +               ereport(ERROR,\n> +                               (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +                                errmsg(\"cannot route tuples into foreign\n> table to be updated \\\"%s\\\"\",\n> + RelationGetRelationName(rel))));\n\nOh, I see.\n\nSo IIUC, you're making postgresBeginForeignInsert() check two things:\n\n1. Whether the foreign table it's about to begin inserting (moved) rows\ninto is a subplan result rel, checked by\n(resultRelInfo->ri_usesFdwDirectModify || resultRelInfo->ri_FdwState)\n\n2. 
Whether the foreign table it's about to begin inserting (moved) rows\ninto will be updated later, checked by (resultRelInfo >\nmtstate->resultRelInfo + mtstate->mt_whichplan)\n\nThis still allows a foreign table to receive rows moved from the local\npartitions if it has already finished the UPDATE operation.\n\nSeems reasonable to me.\n\n>> Forgot to say that since this is a separate issue from the original bug\n>> report, maybe we can first finish fixing the latter.  What to do you think?\n> \n> Yeah, I think we can do that, but my favorite would be to fix the two\n> issues in a single shot, because they seem to me rather close to each\n> other.  So I updated a new version in a single patch, which I'm attaching.\n\nI agree that we can move to fix the other issue right away as the fix you\noutlined above seems reasonable, but I wonder if it wouldn't be better to\ncommit the two fixes separately? The two fixes, although small, are\nsomewhat complicated and combining them in a single commit might be\nconfusing. Also, a combined commit might make it harder for the release\nnote author to list down the exact set of problems being fixed. 
But I\nguess your commit message will make it clear that two distinct problems\nare being solved, so maybe that shouldn't be a problem.\n\n> Notes:\n> \n> * I kept all the changes in the previous patch, because otherwise\n> postgres_fdw would fail to release resources for a foreign-insert\n> operation created by postgresBeginForeignInsert() for a tuple-routable\n> foreign table (ie, a foreign-table subplan resultrel that has been updated\n> already) during postgresEndForeignInsert().\n\nHmm are you saying that the cases for which we'll still allow tuple\nrouting (foreign table receiving moved-in rows has already been updated),\nthere will be two fmstates to be released -- the original fmstate\n(UPDATE's) and aux_fmstate (INSERT's)?\n\n> * I revised a comment according to your previous comment, though I changed\n> a state name: s/sub_fmstate/aux_fmstate/\n\nThat seems fine to me.\n\n> * DirectModify also has the latter issue.  Here is an example:\n> \n> postgres=# create table p (a int, b text) partition by list (a);\n> postgres=# create table p1 partition of p for values in (1);\n> postgres=# create table p2base (a int check (a = 2 or a = 3), b text);\n> postgres=# create foreign table p2 partition of p for values in (2, 3)\n> server loopback options (table_name 'p2base');\n> \n> postgres=# insert into p values (1, 'foo');\n> INSERT 0 1\n> postgres=# explain verbose update p set a = a + 1;\n>                                  QUERY PLAN\n> -----------------------------------------------------------------------------\n>  Update on public.p  (cost=0.00..176.21 rows=2511 width=42)\n>    Update on public.p1\n>    Foreign Update on public.p2\n>    ->  Seq Scan on public.p1  (cost=0.00..25.88 rows=1270 width=42)\n>          Output: (p1.a + 1), p1.b, p1.ctid\n>    ->  Foreign Update on public.p2  (cost=100.00..150.33 rows=1241 width=42)\n>          Remote SQL: UPDATE public.p2base SET a = (a + 1)\n> (7 rows)\n> \n> postgres=# update p set a = a + 1;\n> UPDATE 2\n> 
postgres=# select * from p;\n>  a |  b\n> ---+-----\n>  3 | foo\n> (1 row)\n\nAh, the expected out is \"(2, foo)\". Also, with RETURNING, you'd get this:\n\nupdate p set a = a + 1 returning tableoid::regclass, *;\n tableoid │ a │ b\n──────────┼───┼─────\n p2 │ 2 │ foo\n p2 │ 3 │ foo\n(2 rows)\n\n> As shown in the above bit added to postgresBeginForeignInsert(), I\n> modified the patch so that we throw an error for cases like this even when\n> using a direct modification plan, and I also added regression test cases\n> for that.\n\nThanks for adding detailed tests.\n\nSome mostly cosmetic comments on the code changes:\n\n* In the following comment:\n\n+ /*\n+ * If the foreign table is an UPDATE subplan resultrel that hasn't yet\n+ * been updated, routing tuples to the table might yield incorrect\n+ * results, because if routing tuples, routed tuples will be mistakenly\n+ * read from the table and updated twice when updating the table --- it\n+ * would be nice if we could handle this case; but for now, throw an\nerror\n+ * for safety.\n+ */\n\nthe part that start with \"because if routing tuples...\" reads a bit\nunclear to me. How about writing this as:\n\n /*\n * If the foreign table we are about to insert routed rows into is\n * also an UPDATE result rel and the UPDATE hasn't been performed yet,\n * proceeding with the INSERT will result in the later UPDATE\n * incorrectly modifying those routed rows, so prevent the INSERT ---\n * it would be nice if we could handle this case, but for now, throw\n * an error for safety.\n */\n\nI see that in all the hunks involving some manipulation of aux_fmstate,\nthere's a comment explaining what it is, which seems a bit repetitive. I\ncan see more or less the same explanation in postgresExecForeignInsert(),\npostgresBeginForeignInsert(), and postgresEndForeignInsert(). 
Maybe just\nkeep the description in postgresBeginForeignInsert as follows:\n\n@@ -1983,7 +2020,19 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n retrieved_attrs != NIL,\n retrieved_attrs);\n\n- resultRelInfo->ri_FdwState = fmstate;\n+ /*\n+ * If the given resultRelInfo already has PgFdwModifyState set, it means\n+ * the foreign table is an UPDATE subplan resultrel; in which case, store\n+ * the resulting state into the aux_fmstate of the PgFdwModifyState.\n+ */\n\nand change the other sites to refer to postgresBeginForeingInsert for the\ndetailed explanation of what aux_fmstate is.\n\nBTW, do you think we should update the docs regarding the newly\nestablished limitation of row movement involving foreign tables?\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 13:00:24 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "On 2019/04/19 13:00, Amit Langote wrote:\n> BTW, do you think we should update the docs regarding the newly\n> established limitation of row movement involving foreign tables?\n\nAh, maybe not, because it's a postgres_fdw limitation, not of the core\ntuple routing feature. OTOH, we don't mention at all in postgres-fdw.sgml\nthat postgres_fdw supports tuple routing. 
Maybe we should and list this\nlimitation or would it be too much burden to maintain?\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 13:17:22 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/19 13:00), Amit Langote wrote:\n> On 2019/04/18 22:10, Etsuro Fujita wrote:\n>>> On 2019/04/18 14:06, Amit Langote wrote:\n>>>> On 2019/04/17 21:49, Etsuro Fujita wrote:\n>>>>> So what I'm thinking is to throw an error for cases like this.\n>>>>> (Though, I\n>>>>> think we should keep to allow rows to be moved from local partitions to a\n>>>>> foreign-table subplan targetrel that has been updated already.)\n>>>>\n>>>> Hmm, how would you distinguish (totally inside postgred_fdw I presume) the\n>>>> two cases?\n>>\n>> One thing I came up with to do that is this:\n>>\n>> @@ -1917,6 +1937,23 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n>>\n>> initStringInfo(&sql);\n>>\n>> + /*\n>> + * If the foreign table is an UPDATE subplan resultrel that hasn't\n>> yet\n>> + * been updated, routing tuples to the table might yield incorrect\n>> + * results, because if routing tuples, routed tuples will be\n>> mistakenly\n>> + * read from the table and updated twice when updating the table\n>> --- it\n>> + * would be nice if we could handle this case; but for now, throw\n>> an error\n>> + * for safety.\n>> + */\n>> + if (plan&& plan->operation == CMD_UPDATE&&\n>> + (resultRelInfo->ri_usesFdwDirectModify ||\n>> + resultRelInfo->ri_FdwState)&&\n>> + resultRelInfo> mtstate->resultRelInfo +\n>> mtstate->mt_whichplan)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> + errmsg(\"cannot route tuples into foreign\n>> table to be updated \\\"%s\\\"\",\n>> + RelationGetRelationName(rel))));\n>\n> Oh, I see.\n>\n> So IIUC, you're making postgresBeginForeignInsert() check two things:\n>\n> 1. 
Whether the foreign table it's about to begin inserting (moved) rows\n> into is a subplan result rel, checked by\n> (resultRelInfo->ri_usesFdwDirectModify || resultRelInfo->ri_FdwState)\n>\n> 2. Whether the foreign table it's about to begin inserting (moved) rows\n> into will be updated later, checked by (resultRelInfo>\n> mtstate->resultRelInfo + mtstate->mt_whichplan)\n>\n> This still allows a foreign table to receive rows moved from the local\n> partitions if it has already finished the UPDATE operation.\n>\n> Seems reasonable to me.\n\nGreat!\n\n>>> Forgot to say that since this is a separate issue from the original bug\n>>> report, maybe we can first finish fixing the latter. What to do you think?\n>>\n>> Yeah, I think we can do that, but my favorite would be to fix the two\n>> issues in a single shot, because they seem to me rather close to each\n>> other. So I updated a new version in a single patch, which I'm attaching.\n>\n> I agree that we can move to fix the other issue right away as the fix you\n> outlined above seems reasonable, but I wonder if it wouldn't be better to\n> commit the two fixes separately? The two fixes, although small, are\n> somewhat complicated and combining them in a single commit might be\n> confusing. 
Also, a combined commit might make it harder for the release\n> note author to list down the exact set of problems being fixed.\n\nOK, I'll separate the patch into two.\n\n>> Notes:\n>>\n>> * I kept all the changes in the previous patch, because otherwise\n>> postgres_fdw would fail to release resources for a foreign-insert\n>> operation created by postgresBeginForeignInsert() for a tuple-routable\n>> foreign table (ie, a foreign-table subplan resultrel that has been updated\n>> already) during postgresEndForeignInsert().\n>\n> Hmm are you saying that the cases for which we'll still allow tuple\n> routing (foreign table receiving moved-in rows has already been updated),\n> there will be two fmstates to be released -- the original fmstate\n> (UPDATE's) and aux_fmstate (INSERT's)?\n\nYeah, but I noticed that that explanation was not correct. (I think I \nwas really in hurry.) See the correction in [1].\n\n>> * I revised a comment according to your previous comment, though I changed\n>> a state name: s/sub_fmstate/aux_fmstate/\n>\n> That seems fine to me.\n\nCool!\n\n> Some mostly cosmetic comments on the code changes:\n>\n> * In the following comment:\n>\n> + /*\n> + * If the foreign table is an UPDATE subplan resultrel that hasn't yet\n> + * been updated, routing tuples to the table might yield incorrect\n> + * results, because if routing tuples, routed tuples will be mistakenly\n> + * read from the table and updated twice when updating the table --- it\n> + * would be nice if we could handle this case; but for now, throw an\n> error\n> + * for safety.\n> + */\n>\n> the part that start with \"because if routing tuples...\" reads a bit\n> unclear to me. 
How about writing this as:\n>\n> /*\n> * If the foreign table we are about to insert routed rows into is\n> * also an UPDATE result rel and the UPDATE hasn't been performed yet,\n> * proceeding with the INSERT will result in the later UPDATE\n> * incorrectly modifying those routed rows, so prevent the INSERT ---\n> * it would be nice if we could handle this case, but for now, throw\n> * an error for safety.\n> */\n\nI think that's better than mine; will use that wording.\n\n> I see that in all the hunks involving some manipulation of aux_fmstate,\n> there's a comment explaining what it is, which seems a bit repetitive. I\n> can see more or less the same explanation in postgresExecForeignInsert(),\n> postgresBeginForeignInsert(), and postgresEndForeignInsert(). Maybe just\n> keep the description in postgresBeginForeignInsert as follows:\n>\n> @@ -1983,7 +2020,19 @@ postgresBeginForeignInsert(ModifyTableState *mtstate,\n> retrieved_attrs != NIL,\n> retrieved_attrs);\n>\n> - resultRelInfo->ri_FdwState = fmstate;\n> + /*\n> + * If the given resultRelInfo already has PgFdwModifyState set, it means\n> + * the foreign table is an UPDATE subplan resultrel; in which case, store\n> + * the resulting state into the aux_fmstate of the PgFdwModifyState.\n> + */\n>\n> and change the other sites to refer to postgresBeginForeingInsert for the\n> detailed explanation of what aux_fmstate is.\n\nGood idea; will do.\n\nThanks for the comments!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/5CB93D3F.6050903%40lab.ntt.co.jp\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 14:39:26 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/19 13:17), Amit Langote wrote:\n> OTOH, we don't mention at all in postgres-fdw.sgml\n> that postgres_fdw supports tuple routing. 
Maybe we should and list this\n> limitation or would it be too much burden to maintain?\n\nI think it's better to add this limitation to postgres-fdw.sgml. Will \ndo. Thanks for pointing that out!\n\nBest regards,\nEtsuro Fujita\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 14:40:20 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "On 2019/04/19 14:39, Etsuro Fujita wrote:\n> (2019/04/19 13:00), Amit Langote wrote:\n>> On 2019/04/18 22:10, Etsuro Fujita wrote:\n>>> * I kept all the changes in the previous patch, because otherwise\n>>> postgres_fdw would fail to release resources for a foreign-insert\n>>> operation created by postgresBeginForeignInsert() for a tuple-routable\n>>> foreign table (ie, a foreign-table subplan resultrel that has been updated\n>>> already) during postgresEndForeignInsert().\n>>\n>> Hmm are you saying that the cases for which we'll still allow tuple\n>> routing (foreign table receiving moved-in rows has already been updated),\n>> there will be two fmstates to be released -- the original fmstate\n>> (UPDATE's) and aux_fmstate (INSERT's)?\n> \n> Yeah, but I noticed that that explanation was not correct.  (I think I was\n> really in hurry.)  
See the correction in [1].\n\nAh, I hadn't noticed your corrected description in [1] even though your\nmessage was in my inbox before I sent my email.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Fri, 19 Apr 2019 14:55:53 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/19 14:39), Etsuro Fujita wrote:\n> (2019/04/19 13:00), Amit Langote wrote:\n>> On 2019/04/18 22:10, Etsuro Fujita wrote:\n>>>> On 2019/04/18 14:06, Amit Langote wrote:\n>>>>> On 2019/04/17 21:49, Etsuro Fujita wrote:\n>>>>>> So what I'm thinking is to throw an error for cases like this.\n>>>>>> (Though, I\n>>>>>> think we should keep to allow rows to be moved from local\n>>>>>> partitions to a\n>>>>>> foreign-table subplan targetrel that has been updated already.)\n>>>>>\n>>>>> Hmm, how would you distinguish (totally inside postgred_fdw I\n>>>>> presume) the\n>>>>> two cases?\n>>>\n>>> One thing I came up with to do that is this:\n>>>\n>>> @@ -1917,6 +1937,23 @@ postgresBeginForeignInsert(ModifyTableState\n>>> *mtstate,\n>>>\n>>> initStringInfo(&sql);\n>>>\n>>> + /*\n>>> + * If the foreign table is an UPDATE subplan resultrel that hasn't\n>>> yet\n>>> + * been updated, routing tuples to the table might yield incorrect\n>>> + * results, because if routing tuples, routed tuples will be\n>>> mistakenly\n>>> + * read from the table and updated twice when updating the table\n>>> --- it\n>>> + * would be nice if we could handle this case; but for now, throw\n>>> an error\n>>> + * for safety.\n>>> + */\n>>> + if (plan&& plan->operation == CMD_UPDATE&&\n>>> + (resultRelInfo->ri_usesFdwDirectModify ||\n>>> + resultRelInfo->ri_FdwState)&&\n>>> + resultRelInfo> mtstate->resultRelInfo +\n>>> mtstate->mt_whichplan)\n>>> + ereport(ERROR,\n>>> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>>> + errmsg(\"cannot route tuples into foreign\n>>> table to be updated \\\"%s\\\"\",\n>>> 
+ RelationGetRelationName(rel))));\n>>\n>> Oh, I see.\n>>\n>> So IIUC, you're making postgresBeginForeignInsert() check two things:\n>>\n>> 1. Whether the foreign table it's about to begin inserting (moved) rows\n>> into is a subplan result rel, checked by\n>> (resultRelInfo->ri_usesFdwDirectModify || resultRelInfo->ri_FdwState)\n>>\n>> 2. Whether the foreign table it's about to begin inserting (moved) rows\n>> into will be updated later, checked by (resultRelInfo>\n>> mtstate->resultRelInfo + mtstate->mt_whichplan)\n>>\n>> This still allows a foreign table to receive rows moved from the local\n>> partitions if it has already finished the UPDATE operation.\n>>\n>> Seems reasonable to me.\n\n>>>> Forgot to say that since this is a separate issue from the original bug\n>>>> report, maybe we can first finish fixing the latter. What to do you\n>>>> think?\n>>>\n>>> Yeah, I think we can do that, but my favorite would be to fix the two\n>>> issues in a single shot, because they seem to me rather close to each\n>>> other. So I updated a new version in a single patch, which I'm\n>>> attaching.\n>>\n>> I agree that we can move to fix the other issue right away as the fix you\n>> outlined above seems reasonable, but I wonder if it wouldn't be better to\n>> commit the two fixes separately? The two fixes, although small, are\n>> somewhat complicated and combining them in a single commit might be\n>> confusing. Also, a combined commit might make it harder for the release\n>> note author to list down the exact set of problems being fixed.\n>\n> OK, I'll separate the patch into two.\n\nAfter I tried to separate the patch a bit, I changed my mind: I think we \nshould commit the two issues in a single patch, because the original \nissue that overriding fmstate for the UPDATE operation mistakenly by \nfmstate for the INSERT operation caused backend crash is fixed by what I \nproposed above. 
So I add the commit message to the previous single \npatch, as you suggested.\n\n>> Some mostly cosmetic comments on the code changes:\n>>\n>> * In the following comment:\n>>\n>> + /*\n>> + * If the foreign table is an UPDATE subplan resultrel that hasn't yet\n>> + * been updated, routing tuples to the table might yield incorrect\n>> + * results, because if routing tuples, routed tuples will be mistakenly\n>> + * read from the table and updated twice when updating the table --- it\n>> + * would be nice if we could handle this case; but for now, throw an\n>> error\n>> + * for safety.\n>> + */\n>>\n>> the part that start with \"because if routing tuples...\" reads a bit\n>> unclear to me. How about writing this as:\n>>\n>> /*\n>> * If the foreign table we are about to insert routed rows into is\n>> * also an UPDATE result rel and the UPDATE hasn't been performed yet,\n>> * proceeding with the INSERT will result in the later UPDATE\n>> * incorrectly modifying those routed rows, so prevent the INSERT ---\n>> * it would be nice if we could handle this case, but for now, throw\n>> * an error for safety.\n>> */\n>\n> I think that's better than mine; will use that wording.\n\nDone. I simplified your wording a little bit, though.\n\n>> I see that in all the hunks involving some manipulation of aux_fmstate,\n>> there's a comment explaining what it is, which seems a bit repetitive. I\n>> can see more or less the same explanation in postgresExecForeignInsert(),\n>> postgresBeginForeignInsert(), and postgresEndForeignInsert(). 
Maybe just\n>> keep the description in postgresBeginForeignInsert as follows:\n>>\n>> @@ -1983,7 +2020,19 @@ postgresBeginForeignInsert(ModifyTableState\n>> *mtstate,\n>> retrieved_attrs != NIL,\n>> retrieved_attrs);\n>>\n>> - resultRelInfo->ri_FdwState = fmstate;\n>> + /*\n>> + * If the given resultRelInfo already has PgFdwModifyState set, it means\n>> + * the foreign table is an UPDATE subplan resultrel; in which case,\n>> store\n>> + * the resulting state into the aux_fmstate of the PgFdwModifyState.\n>> + */\n>>\n>> and change the other sites to refer to postgresBeginForeingInsert for the\n>> detailed explanation of what aux_fmstate is.\n>\n> Good idea; will do.\n\nDone.\n\nOther changes:\n* I updated the docs in postgres-fdw.sgml to mention the limitation.\n* I did some cleanups for the regression tests.\n\nPlease find attached an updated version of the patch.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 22 Apr 2019 20:00:03 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Fujita-san,\n\nOn 2019/04/22 20:00, Etsuro Fujita wrote:\n> (2019/04/19 14:39), Etsuro Fujita wrote:\n>> (2019/04/19 13:00), Amit Langote wrote:\n>>> I agree that we can move to fix the other issue right away as the fix you\n>>> outlined above seems reasonable, but I wonder if it wouldn't be better to\n>>> commit the two fixes separately? The two fixes, although small, are\n>>> somewhat complicated and combining them in a single commit might be\n>>> confusing. 
Also, a combined commit might make it harder for the release\n>>> note author to list down the exact set of problems being fixed.\n>>\n>> OK, I'll separate the patch into two.\n> \n> After I tried to separate the patch a bit, I changed my mind: I think we\n> should commit the two issues in a single patch, because the original issue\n> that overriding fmstate for the UPDATE operation mistakenly by fmstate for\n> the INSERT operation caused backend crash is fixed by what I proposed\n> above.  So I add the commit message to the previous single patch, as you\n> suggested.\n\nAh, you're right. The case that would return wrong result, that is now\nprevented per the latest patch, is also the case that would crash before.\nSo, it seems to OK to keep this commit this as one patch. Sorry for the\nnoise.\n\nI read your commit message and it seems to sufficiently explain the issues\nbeing fixed. Thanks for adding me as an author, but I think the latest\npatch is mostly your work, so I'm happy to be listed as just a reviewer. :)\n\n>>> Some mostly cosmetic comments on the code changes:\n>>>\n>>> * In the following comment:\n>>>\n>>> + /*\n>>> + * If the foreign table is an UPDATE subplan resultrel that hasn't yet\n>>> + * been updated, routing tuples to the table might yield incorrect\n>>> + * results, because if routing tuples, routed tuples will be mistakenly\n>>> + * read from the table and updated twice when updating the table --- it\n>>> + * would be nice if we could handle this case; but for now, throw an\n>>> error\n>>> + * for safety.\n>>> + */\n>>>\n>>> the part that start with \"because if routing tuples...\" reads a bit\n>>> unclear to me. 
How about writing this as:\n>>>\n>>> /*\n>>> * If the foreign table we are about to insert routed rows into is\n>>> * also an UPDATE result rel and the UPDATE hasn't been performed yet,\n>>> * proceeding with the INSERT will result in the later UPDATE\n>>> * incorrectly modifying those routed rows, so prevent the INSERT ---\n>>> * it would be nice if we could handle this case, but for now, throw\n>>> * an error for safety.\n>>> */\n>>\n>> I think that's better than mine; will use that wording.\n> \n> Done.  I simplified your wording a little bit, though.\n\nThanks, looks fine.\n\n> Other changes:\n> * I updated the docs in postgres-fdw.sgml to mention the limitation.\n\nLooks fine.\n\n> * I did some cleanups for the regression tests.\n\nHere too.\n\n> Please find attached an updated version of the patch.\n\nI don't have any more comments. Thanks for working on this.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Tue, 23 Apr 2019 10:03:15 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "(2019/04/23 10:03), Amit Langote wrote:\n> So, it seems to OK to keep this commit this as one patch.\n\n> I read your commit message and it seems to sufficiently explain the issues\n> being fixed.\n\nCool!\n\n> Thanks for adding me as an author, but I think the latest\n> patch is mostly your work, so I'm happy to be listed as just a reviewer. :)\n\nYou found this bug, analyzed it, and wrote the first version of the \npatch. I heavily modified the patch, but I used your test case, so I \nthink you deserve the first author of this fix.\n\n> I don't have any more comments. Thanks for working on this.\n\nPushed. 
Many thanks, Amit-san!\n\nBest regards,\nEtsuro Fujita\n\n\n\n", "msg_date": "Wed, 24 Apr 2019 18:55:12 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: bug in update tuple routing with foreign partitions" }, { "msg_contents": "Fujita-san,\n\nOn 2019/04/24 18:55, Etsuro Fujita wrote:\n> (2019/04/23 10:03), Amit Langote wrote:\n>> Thanks for adding me as an author, but I think the latest\n>> patch is mostly your work, so I'm happy to be listed as just a reviewer. :)\n> \n> You found this bug, analyzed it, and wrote the first version of the\n> patch.  I heavily modified the patch, but I used your test case, so I\n> think you deserve the first author of this fix.\n\nOK, thanks.\n\nRegards,\nAmit\n\n\n\n", "msg_date": "Wed, 24 Apr 2019 18:59:54 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: bug in update tuple routing with foreign partitions" } ]
[ { "msg_contents": "Hi,\n\nContext: I'm trying to compile the jsonpath v36 patches (these apply \nOK), and on top of those the jsonfunctions and jsontable patch series.\n\nThat fails for me (on \n0001-Implementation-of-SQL-JSON-path-language-v36.patch), and now I'm \nwondering why that does not agree with what the patch-tester page shows \n( http://commitfest.cputube.org/ ).\n\nThe patch-tester page does not explain what the colors and symbols mean. \n Of course one can guess 'red' and 'cross' is bad, and 'green' and \n'check' is good.\n\nBut some questions remain:\n- Some symbols' color is 'filled-in' (solid), and some are not. What \ndoes that mean?\n- For each patch there are three symbols; what do those three stand for?\n- I suppose there is a regular schedule of apply and compile of each \npatch. How often does it happen? Can I see how recent a particular \nreported state is?\n\nCan you throw some light on this?\n\nthanks,\n\nErik Rijkes\n\n", "msg_date": "Wed, 06 Mar 2019 11:11:51 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "patch tester symbols" }, { "msg_contents": "On 06.03.2019 13:11, Erik Rijkers wrote:\n\n> Hi,\n>\n> Context: I'm trying to compile the jsonpath v36 patches (these apply \n> OK), and on top of those the jsonfunctions and jsontable patch series.\n>\n> That fails for me (on \n> 0001-Implementation-of-SQL-JSON-path-language-v36.patch), and now I'm \n> wondering why that does not agree with what the patch-tester page \n> shows ( http://commitfest.cputube.org/ ).\n>\nPatch 0001-Implementation-of-SQL-JSON-path-language-v36.patch from the \n\"SQL/JSON: functions\" patch set is a simply squash of all 6 jsonpath \npatches on which this patch set depends. I included it into this patch \nset just for testing in patch-tester (I guess patch-tester cannot handle \npatch dependencies). 
If you apply SQL/JSON patch sets step by step, then \nyou need to apply only patch 0002 from \"SQL/JSON: functions\" and \n\"SQL/JSON: JSON_TABLE\".\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n", "msg_date": "Wed, 6 Mar 2019 15:00:07 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: patch tester symbols" }, { "msg_contents": "On 2019-03-06 13:00, Nikita Glukhov wrote:\n> On 06.03.2019 13:11, Erik Rijkers wrote:\n\n>> Context: I'm trying to compile the jsonpath v36 patches (these apply \n>> OK), and on top of those the jsonfunctions and jsontable patch series.\n\n> Patch 0001-Implementation-of-SQL-JSON-path-language-v36.patch from the\n> \"SQL/JSON: functions\" patch set is a simply squash of all 6 jsonpath\n> patches on which this patch set depends. I included it into this patch\n> set just for testing in patch-tester (I guess patch-tester cannot\n> handle patch dependencies). If you apply SQL/JSON patch sets step by\n> step, then you need to apply only patch 0002 from \"SQL/JSON:\n> functions\" and \"SQL/JSON: JSON_TABLE\".\n\nAh, that explains it. I suppose I should have guessed that.\n\nApplied in that way it built fine (apply, compile, check-world OK).\n\nThank you,\n\nErik Rijkers\n\n", "msg_date": "Wed, 06 Mar 2019 13:22:03 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: patch tester symbols" } ]
[ { "msg_contents": "... obviously due to commit c24dcd0. The following patch removes it.\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex ecd12fc53a..0fdd82a287 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -771,13 +771,11 @@ static const char *xlogSourceNames[] = {\"any\", \"archive\", \"pg_wal\", \"stream\"};\n \n /*\n * openLogFile is -1 or a kernel FD for an open log file segment.\n- * When it's open, openLogOff is the current seek offset in the file.\n- * openLogSegNo identifies the segment. These variables are only\n- * used to write the XLOG, and so will normally refer to the active segment.\n+ * openLogSegNo identifies the segment. These variables are only used to\n+ * write the XLOG, and so will normally refer to the active segment.\n */\n static int\topenLogFile = -1;\n static XLogSegNo openLogSegNo = 0;\n-static uint32 openLogOff = 0;\n \n /*\n * These variables are used similarly to the ones above, but for reading\n@@ -2447,7 +2445,6 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\t/* create/use new log file */\n \t\t\tuse_existent = true;\n \t\t\topenLogFile = XLogFileInit(openLogSegNo, &use_existent, true);\n-\t\t\topenLogOff = 0;\n \t\t}\n \n \t\t/* Make sure we have the current logfile open */\n@@ -2456,7 +2453,6 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\tXLByteToPrevSeg(LogwrtResult.Write, openLogSegNo,\n \t\t\t\t\t\t\twal_segment_size);\n \t\t\topenLogFile = XLogFileOpen(openLogSegNo);\n-\t\t\topenLogOff = 0;\n \t\t}\n \n \t\t/* Add current page to the set of pending pages-to-dump */\n@@ -2508,15 +2504,13 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\t\t\t\t\t errmsg(\"could not write to log file %s \"\n \t\t\t\t\t\t\t\t\t\"at offset %u, length %zu: %m\",\n \t\t\t\t\t\t\t\t\tXLogFileNameP(ThisTimeLineID, openLogSegNo),\n-\t\t\t\t\t\t\t\t\topenLogOff, nbytes)));\n+\t\t\t\t\t\t\t\t\tstartoffset, nbytes)));\n \t\t\t\t}\n 
\t\t\t\tnleft -= written;\n \t\t\t\tfrom += written;\n \t\t\t\tstartoffset += written;\n \t\t\t} while (nleft > 0);\n \n-\t\t\t/* Update state for write */\n-\t\t\topenLogOff += nbytes;\n \t\t\tnpages = 0;\n \n \t\t\t/*\n@@ -2602,7 +2596,6 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\t\tXLByteToPrevSeg(LogwrtResult.Write, openLogSegNo,\n \t\t\t\t\t\t\t\twal_segment_size);\n \t\t\t\topenLogFile = XLogFileOpen(openLogSegNo);\n-\t\t\t\topenLogOff = 0;\n \t\t\t}\n \n \t\t\tissue_xlog_fsync(openLogFile, openLogSegNo);\n\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n", "msg_date": "Wed, 06 Mar 2019 12:12:10 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "openLogOff is not needed anymore" }, { "msg_contents": "On Wed, Mar 6, 2019 at 6:10 AM Antonin Houska <ah@cybertec.at> wrote:\n> ... obviously due to commit c24dcd0. The following patch removes it.\n\nCommitted, after a short struggle to get the patch out of the email\nbody in a usable form.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 6 Mar 2019 09:47:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: openLogOff is not needed anymore" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Mar 6, 2019 at 6:10 AM Antonin Houska <ah@cybertec.at> wrote:\n> > ... obviously due to commit c24dcd0. The following patch removes it.\n> \n> Committed, after a short struggle to get the patch out of the email\n> body in a usable form.\n\nIt was just convenient for me to use vc-diff emacs command and copy & paste\nthe diff into the message because I also use emacs as email client. I thought\nit should work but perhaps something went wrong with spaces.\n\nI'll always attach diffs to the message next time. 
Sorry for the problem.\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n", "msg_date": "Wed, 06 Mar 2019 16:13:17 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: openLogOff is not needed anymore" } ]
[ { "msg_contents": "Hello, Postgres hackers,\n\nThe copy code has used batch insert with function heap_multi_insert() to\nspeed up. It seems that Create Table As or Materialized View could leverage\nthat code also to boost the performance also. Attached is a patch to\nimplement that. That was done by Taylor (cc-ed) and me.\n\nThe patch also modifies heap_multi_insert() a bit to do a bit further\ncode-level optimization by using static memory, instead of using memory\ncontext and dynamic allocation. For Modifytable->insert, it seems that\nthere are more limitations for batch insert (trigger, etc?) but it seems\nthat it is possible that we could do batch insert for the case that we\ncould do?\n\nBy the way, while looking at the code, I noticed that there are 9 local\narrays with large length in toast_insert_or_update() which seems to be a\nrisk of stack overflow. Maybe we should put it as static or global.\n\nHere is a quick simple performance testing on a mirrorless Postgres\ninstance with the SQLs below. 
The tests cover tables with small column\nlength, large column length and toast.\n\n-- tuples with small size.\ndrop table if exists t1;\ncreate table t1 (a int);\n\ninsert into t1 select * from generate_series(1, 10000000);\ndrop table if exists t2;\n\\timing\ncreate table t2 as select * from t1;\n\\timing\n\n-- tuples that are untoasted and data that is 1664 bytes wide\ndrop table if exists t1;\ncreate table t1 (a name, b name, c name, d name, e name, f name, g name, h\nname, i name, j name, k name, l name, m name, n name, o name, p name, q\nname, r name, s name, t name, u name, v name, w name, x name, y name, z\nname);\n\ninsert into t1 select 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',\n'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',\n'z' from generate_series(1, 500000);\ndrop table if exists t2;\n\\timing\ncreate table t2 as select * from t1;\n\\timing\n\n-- tuples that are toastable.\ndrop table if exists t1;\ncreate table t1 (a text, b text, c text, d text, e text, f text, g text, h\ntext, i text, j text, k text, l text, m text, n text, o text, p text, q\ntext, r text, s text, t text, u text, v text, w text, x text, y text, z\ntext);\n\ninsert into t1 select i, i, i, i, i, i, i, i, i, i, i, i, i, i, i, i, i, i,\ni, i, i, i, i, i, i, i from (select repeat('123456789', 10000) from\ngenerate_series(1,2000)) i;\ndrop table if exists t2;\n\\timing\ncreate table t2 as select * from t1;\n\\timing\n\nHere are the timing results:\n\nWith the patch,\n\nTime: 4728.142 ms (00:04.728)\nTime: 14203.983 ms (00:14.204)\nTime: 1008.669 ms (00:01.009)\n\nBaseline,\nTime: 11096.146 ms (00:11.096)\nTime: 13106.741 ms (00:13.107)\nTime: 1100.174 ms (00:01.100)\n\nWhile for toast and large column size there is < 10% decrease but for small\ncolumn size the improvement is super good. Actually if I hardcode the batch\ncount as 4 all test cases are better but the improvement for small column\nsize is smaller than that with current patch. 
Pretty much the number 4 is\nquite case specific so I can not hardcode that in the patch. Of course we\ncould further tune that but the current value seems to be a good trade-off?\n\nThanks.", "msg_date": "Wed, 6 Mar 2019 22:06:27 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Batch insert in CTAS/MatView code" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 06, 2019 at 10:06:27PM +0800, Paul Guo wrote:\n> The copy code has used batch insert with function heap_multi_insert() to\n> speed up. It seems that Create Table As or Materialized View could leverage\n> that code also to boost the performance also. Attached is a patch to\n> implement that. That was done by Taylor (cc-ed) and me.\n\nPlease note that we are currently in the last commit fest of Postgres\n12, and it is too late to propose new features. Please feel free to\nadd an entry to the commit fest happening afterwards.\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 11:33:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On 06/03/2019 22:06, Paul Guo wrote:\n> The patch also modifies heap_multi_insert() a bit to do a bit further \n> code-level optimization by using static memory, instead of using memory \n> context and dynamic allocation.\n\nIf toasting is required, heap_prepare_insert() creates a palloc'd tuple. \nThat is still leaked to the current memory context.\n\nLeaking into the current memory context is not a bad thing, because \nresetting a memory context is faster than doing a lot of pfree() calls. \nThe callers just need to be prepared for that, and use a short-lived \nmemory context.\n\n> By the way, while looking at the code, I noticed that there are 9 local \n> arrays with large length in toast_insert_or_update() which seems to be a \n> risk of stack overflow. Maybe we should put it as static or global.\n\nHmm. 
We currently reserve 512 kB between the kernel's limit, and the \nlimit we check in check_stack_depth(). See STACK_DEPTH_SLOP. Those \narrays add up to 52800 bytes on a 64-bit maching, if I did my math \nright. So there's still a lot of headroom. I agree that it nevertheless \nseems a bit excessive, though.\n\n> With the patch,\n> \n> Time: 4728.142 ms (00:04.728)\n> Time: 14203.983 ms (00:14.204)\n> Time: 1008.669 ms (00:01.009)\n> \n> Baseline,\n> Time: 11096.146 ms (00:11.096)\n> Time: 13106.741 ms (00:13.107)\n> Time: 1100.174 ms (00:01.100)\n\nNice speedup!\n\n> While for toast and large column size there is < 10% decrease but for \n> small column size the improvement is super good. Actually if I hardcode \n> the batch count as 4 all test cases are better but the improvement for \n> small column size is smaller than that with current patch. Pretty much \n> the number 4 is quite case specific so I can not hardcode that in the \n> patch. Of course we could further tune that but the current value seems \n> to be a good trade-off?\n\nHave you done any profiling, on why the multi-insert is slower with \nlarge tuples? In principle, I don't see why it should be slower.\n\n- Heikki\n\n", "msg_date": "Thu, 7 Mar 2019 10:54:09 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Sorry for the late reply.\n\nTo Michael. Thank you. I know this commitfest is ongoing and I'm not\ntargeting for this.\n\nOn Thu, Mar 7, 2019 at 4:54 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 06/03/2019 22:06, Paul Guo wrote:\n> > The patch also modifies heap_multi_insert() a bit to do a bit further\n> > code-level optimization by using static memory, instead of using memory\n> > context and dynamic allocation.\n>\n> If toasting is required, heap_prepare_insert() creates a palloc'd tuple.\n> That is still leaked to the current memory context.\n>\n>\nThanks. 
I checked the code for that but apparently, I missed that one. I'll\nsee what proper context can be used for CTAS. For copy code maybe just\nrevert my change.\n\n\n\n> Leaking into the current memory context is not a bad thing, because\n> resetting a memory context is faster than doing a lot of pfree() calls.\n> The callers just need to be prepared for that, and use a short-lived\n> memory context.\n>\n> > By the way, while looking at the code, I noticed that there are 9 local\n> > arrays with large length in toast_insert_or_update() which seems to be a\n> > risk of stack overflow. Maybe we should put it as static or global.\n>\n> Hmm. We currently reserve 512 kB between the kernel's limit, and the\n> limit we check in check_stack_depth(). See STACK_DEPTH_SLOP. Those\n> arrays add up to 52800 bytes on a 64-bit maching, if I did my math\n> right. So there's still a lot of headroom. I agree that it nevertheless\n> seems a bit excessive, though.\n>\n\nI was worried about some recursive calling of it, but probably there should\nbe no worry for toast_insert_or_update().\n\n\n> > With the patch,\n> >\n> > Time: 4728.142 ms (00:04.728)\n> > Time: 14203.983 ms (00:14.204)\n> > Time: 1008.669 ms (00:01.009)\n> >\n> > Baseline,\n> > Time: 11096.146 ms (00:11.096)\n> > Time: 13106.741 ms (00:13.107)\n> > Time: 1100.174 ms (00:01.100)\n>\n> Nice speedup!\n>\n> > While for toast and large column size there is < 10% decrease but for\n> > small column size the improvement is super good. Actually if I hardcode\n> > the batch count as 4 all test cases are better but the improvement for\n> > small column size is smaller than that with current patch. Pretty much\n> > the number 4 is quite case specific so I can not hardcode that in the\n> > patch. Of course we could further tune that but the current value seems\n> > to be a good trade-off?\n>\n> Have you done any profiling, on why the multi-insert is slower with\n> large tuples? 
In principle, I don't see why it should be slower.\n>\n\nThanks for the suggestion. I'll explore a bit more on this.\n\n\n>\n> - Heikki\n>\n", "msg_date": "Sun, 10 Mar 2019 21:55:35 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Wed, Mar 06, 2019 at 10:06:27PM +0800, Paul Guo wrote:\n> Hello, Postgres hackers,\n> \n> The copy code has used batch insert with function heap_multi_insert() to\n> speed up. It seems that Create Table As or Materialized View could leverage\n> that code also to boost the performance also. 
Attached is a patch to\n> implement that.\n\nThis is great!\n\nIs this optimization doable for multi-row INSERTs, either with tuples\nspelled out in the body of the query or in constructs like INSERT ...\nSELECT ...?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n", "msg_date": "Sun, 10 Mar 2019 19:58:07 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Mon, Mar 11, 2019 at 2:58 AM David Fetter <david@fetter.org> wrote:\n\n> On Wed, Mar 06, 2019 at 10:06:27PM +0800, Paul Guo wrote:\n> > Hello, Postgres hackers,\n> >\n> > The copy code has used batch insert with function heap_multi_insert() to\n> > speed up. It seems that Create Table As or Materialized View could\n> leverage\n> > that code also to boost the performance also. Attached is a patch to\n> > implement that.\n>\n> This is great!\n>\n> Is this optimization doable for multi-row INSERTs, either with tuples\n> spelled out in the body of the query or in constructs like INSERT ...\n> SELECT ...?\n>\n\nYes. 
That's \"batch insert\" in the ModifyTable nodes which I mentioned in\nthe first email.\nBy the way, batch is a usual optimization mechanism for iteration kind\nmodel (like postgres executor),\nso batch should benefit many executor nodes in theory also.\n\n\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org>\n> https://urldefense.proofpoint.com/v2/url?u=http-3A__fetter.org_&d=DwIBAg&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=Usi0ex6Ch92MsB5QQDgYFw&m=wgGDTDFzZV7nnMm0NFt-yGKmm_KZk18RXKP9HL8h6UE&s=tnaoLdajjR0Ew-93XUliHW1FUspVl09pIFd9aXxvqc8&e=\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>\n", "msg_date": "Wed, 13 Mar 2019 10:39:11 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Hi all,\n\nI've been working other things until recently I restarted the work,\nprofiling & refactoring the code.\nIt's been a long time since the last patch was proposed. The new patch has\nnow been firstly refactored due to\n4da597edf1bae0cf0453b5ed6fc4347b6334dfe1 (Make TupleTableSlots extensible,\nfinish split of existing slot type).\n\nNow that TupleTableSlot, instead of HeapTuple is one argument of\nintorel_receive() so we can not get the\ntuple length directly. This patch now gets the tuple length if we know all\ncolumns are with fixed widths, else\nwe calculate an avg. tuple length using the first MAX_MULTI_INSERT_SAMPLES\n(defined as 1000) tuples\nand use for the total length of tuples in a batch.\n\nI noticed that to do batch insert, we might need additional memory copy\nsometimes comparing with \"single insert\"\n(that should be the reason that we previously saw a bit regressions) so a\ngood solution seems to fall back\nto \"single insert\" if the tuple length is larger than a threshold. I set\nthis as 2000 after quick testing.\n\nTo make test stable and strict, I run checkpoint before each ctas, the test\nscript looks like this:\n\ncheckpoint;\n\\timing\ncreate table tt as select a,b,c from t11;\n\\timing\ndrop table tt;\n\nAlso previously I just tested the BufferHeapTupleTableSlot (i.e. create\ntable tt as select * from t11),\nthis time I test VirtualTupleTableSlot (i.e. 
create table tt as select\na,b,c from t11) additionally.\nIt seems that VirtualTupleTableSlot is very common in real cases.\n\nI tested four kinds of tables, see below SQLs.\n\n-- tuples with small size.\ncreate table t11 (a int, b int, c int, d int);\ninsert into t11 select s,s,s,s from generate_series(1, 10000000) s;\nanalyze t11;\n\n-- tuples that are untoasted and tuple size is 1984 bytes.\ncreate table t12 (a name, b name, c name, d name, e name, f name, g name, h\nname, i name, j name, k name, l name, m name, n name, o name, p name, q\nname, r name, s name, t name, u name, v name, w name, x name, y name, z\nname, a1 name, a2 name, a3 name, a4 name, a5 name);\ninsert into t12 select 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',\n'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',\n'z', 'a', 'b', 'c', 'd', 'e' from generate_series(1, 500000);\nanalyze t12;\n\n-- tuples that are untoasted and tuple size is 2112 bytes.\ncreate table t13 (a name, b name, c name, d name, e name, f name, g name, h\nname, i name, j name, k name, l name, m name, n name, o name, p name, q\nname, r name, s name, t name, u name, v name, w name, x name, y name, z\nname, a1 name, a2 name, a3 name, a4 name, a5 name, a6 name, a7 name);\ninsert into t13 select 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',\n'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',\n'z', 'a', 'b', 'c', 'd', 'e', 'f', 'g' from generate_series(1, 500000);\nanalyze t13;\n\n-- tuples that are toastable and tuple compressed size is 1084.\ncreate table t14 (a text, b text, c text, d text, e text, f text, g text, h\ntext, i text, j text, k text, l text, m text, n text, o text, p text, q\ntext, r text, s text, t text, u text, v text, w text, x text, y text, z\ntext);\ninsert into t14 select i, i, i, i, i, i, i, i, i, i, i, i, i, i, i, i, i,\ni, i, i, i, i, i, i, i, i from (select repeat('123456789', 10000) from\ngenerate_series(1,5000)) i;\nanalyze t14;\n\n\nI also tested 
two scenarios for each testing.\n\nOne is to clean up all kernel caches (page & inode & dentry on Linux) using\nthe command below and then run the test,\n sync; echo 3 > /proc/sys/vm/drop_caches\nAfter running all tests all relation files will be in kernel cache (my test\nsystem memory is large enough to accommodate all relation files),\nthen I run the tests again. I run like this because in real scenario the\nresult of the test should be among the two results. Also I rerun\neach test and finally I calculate the average results as the experiment\nresults. Below are some results:\n\n\nscenario1: All related kernel caches are cleaned up (note the first two\ncolumns are time with second).\n\nbaseline patch diff% SQL\n\n\n\n\n10.1 5.57 44.85% create table tt as select * from t11;\n\n10.7 5.52 48.41% create table tt as select a,b,c from t11;\n\n9.57 10.2 -6.58% create table tt as select * from t12;\n\n9.64 8.63 10.48% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4 from t12;\n\n14.2 14.46 -1.83% create table tt as select * from t13;\n\n11.88 12.05 -1.43% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4,a5,a6 from\nt13;\n\n3.17 3.25 -2.52% create table tt as select * from t14;\n\n\n2.93 3.12 -6.48% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y from t14;\n\n\n\nscenario2: all related kernel caches are populated after previous testing.\n\n\n\n\n\nbaseline patch diff% SQL\n\n\n\n\n9.6 4.97 48.23% create table tt as select * from t11;\n\n10.41 5.32 48.90% create table tt as select a,b,c from t11;\n\n9.12 9.52 -4.38% create table tt as select * from t12;\n\n9.66 8.6 10.97% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4 from t12;\n\n13.56 13.6 -0.30% create table tt as select * from t13;\n\n\n11.36 11.7 -2.99% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4,a5,a6 
from\nt13;\n\n3.08 3.13 -1.62% create table tt as select * from t14;\n\n\n2.95 3.03 -2.71% create table tt as select\na,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y from t14;\n\n From above we can get some tentative conclusions:\n\n1. t11: For short-size tables, batch insert improves much (40%+).\n\n2. t12: For BufferHeapTupleTableSlot, the patch slows down 4.x%-6.x%, but\nfor VirtualTupleTableSlot it improves 10.x%.\n If we look at execTuples.c, it looks like this is quite relevant to\nadditional memory copy. It seems that VirtualTupleTableSlot is\nmore common than the BufferHeapTupleTableSlot so probably the current code\nshould be fine for most real cases. Or it's possible\nto determine multi-insert also according to the input slot tuple but this\nseems to be ugly in code. Or continue to lower the threshold a bit\nso that \"create table tt as select * from t12;\" also improves although this\nhurts the VirtualTupleTableSlot case.\n\n3. for t13, new code still uses single insert so the difference should be\nsmall. I just want to see the regression when even we use \"single insert\".\n\n4. For toast case t14, the degradation is small, not a big deal.\n\nBy the way, did we try or think about allow better prefetch (on Linux) for\nseqscan. i.e. POSIX_FADV_SEQUENTIAL in posix_fadvise() to enlarge the\nkernel readahead window. 
Suppose this should help if seq tuple handling is\nfaster than default kernel readahead setting.\n\n\nv2 patch is attached.\n\n\nOn Thu, Mar 7, 2019 at 4:54 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 06/03/2019 22:06, Paul Guo wrote:\n> > The patch also modifies heap_multi_insert() a bit to do a bit further\n> > code-level optimization by using static memory, instead of using memory\n> > context and dynamic allocation.\n>\n> If toasting is required, heap_prepare_insert() creates a palloc'd tuple.\n> That is still leaked to the current memory context.\n>\n> Leaking into the current memory context is not a bad thing, because\n> resetting a memory context is faster than doing a lot of pfree() calls.\n> The callers just need to be prepared for that, and use a short-lived\n> memory context.\n>\n> > By the way, while looking at the code, I noticed that there are 9 local\n> > arrays with large length in toast_insert_or_update() which seems to be a\n> > risk of stack overflow. Maybe we should put it as static or global.\n>\n> Hmm. We currently reserve 512 kB between the kernel's limit, and the\n> limit we check in check_stack_depth(). See STACK_DEPTH_SLOP. Those\n> arrays add up to 52800 bytes on a 64-bit maching, if I did my math\n> right. So there's still a lot of headroom. I agree that it nevertheless\n> seems a bit excessive, though.\n>\n> > With the patch,\n> >\n> > Time: 4728.142 ms (00:04.728)\n> > Time: 14203.983 ms (00:14.204)\n> > Time: 1008.669 ms (00:01.009)\n> >\n> > Baseline,\n> > Time: 11096.146 ms (00:11.096)\n> > Time: 13106.741 ms (00:13.107)\n> > Time: 1100.174 ms (00:01.100)\n>\n> Nice speedup!\n>\n> > While for toast and large column size there is < 10% decrease but for\n> > small column size the improvement is super good. Actually if I hardcode\n> > the batch count as 4 all test cases are better but the improvement for\n> > small column size is smaller than that with current patch. 
Pretty much\n> > the number 4 is quite case specific so I can not hardcode that in the\n> > patch. Of course we could further tune that but the current value seems\n> > to be a good trade-off?\n>\n> Have you done any profiling, on why the multi-insert is slower with\n> large tuples? In principle, I don't see why it should be slower.\n>\n> - Heikki\n>", "msg_date": "Mon, 17 Jun 2019 20:53:38 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Mon, Jun 17, 2019 at 8:53 PM Paul Guo <pguo@pivotal.io> wrote:\n\n> Hi all,\n>\n> I've been working other things until recently I restarted the work,\n> profiling & refactoring the code.\n> It's been a long time since the last patch was proposed. The new patch has\n> now been firstly refactored due to\n> 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1 (Make TupleTableSlots extensible,\n> finish split of existing slot type).\n>\n> Now that TupleTableSlot, instead of HeapTuple is one argument of\n> intorel_receive() so we can not get the\n> tuple length directly. This patch now gets the tuple length if we know all\n> columns are with fixed widths, else\n> we calculate an avg. tuple length using the first MAX_MULTI_INSERT_SAMPLES\n> (defined as 1000) tuples\n> and use for the total length of tuples in a batch.\n>\n> I noticed that to do batch insert, we might need additional memory copy\n> sometimes comparing with \"single insert\"\n> (that should be the reason that we previously saw a bit regressions) so a\n> good solution seems to fall back\n> to \"single insert\" if the tuple length is larger than a threshold. I set\n> this as 2000 after quick testing.\n>\n> To make test stable and strict, I run checkpoint before each ctas, the\n> test script looks like this:\n>\n> checkpoint;\n> \\timing\n> create table tt as select a,b,c from t11;\n> \\timing\n> drop table tt;\n>\n> Also previously I just tested the BufferHeapTupleTableSlot (i.e. 
create\n> table tt as select * from t11),\n> this time I test VirtualTupleTableSlot (i.e. create table tt as select\n> a,b,c from t11) additionally.\n> It seems that VirtualTupleTableSlot is very common in real cases.\n>\n> I tested four kinds of tables, see below SQLs.\n>\n> -- tuples with small size.\n> create table t11 (a int, b int, c int, d int);\n> insert into t11 select s,s,s,s from generate_series(1, 10000000) s;\n> analyze t11;\n>\n> -- tuples that are untoasted and tuple size is 1984 bytes.\n> create table t12 (a name, b name, c name, d name, e name, f name, g name,\n> h name, i name, j name, k name, l name, m name, n name, o name, p name, q\n> name, r name, s name, t name, u name, v name, w name, x name, y name, z\n> name, a1 name, a2 name, a3 name, a4 name, a5 name);\n> insert into t12 select 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',\n> 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',\n> 'z', 'a', 'b', 'c', 'd', 'e' from generate_series(1, 500000);\n> analyze t12;\n>\n> -- tuples that are untoasted and tuple size is 2112 bytes.\n> create table t13 (a name, b name, c name, d name, e name, f name, g name,\n> h name, i name, j name, k name, l name, m name, n name, o name, p name, q\n> name, r name, s name, t name, u name, v name, w name, x name, y name, z\n> name, a1 name, a2 name, a3 name, a4 name, a5 name, a6 name, a7 name);\n> insert into t13 select 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',\n> 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',\n> 'z', 'a', 'b', 'c', 'd', 'e', 'f', 'g' from generate_series(1, 500000);\n> analyze t13;\n>\n> -- tuples that are toastable and tuple compressed size is 1084.\n> create table t14 (a text, b text, c text, d text, e text, f text, g text,\n> h text, i text, j text, k text, l text, m text, n text, o text, p text, q\n> text, r text, s text, t text, u text, v text, w text, x text, y text, z\n> text);\n> insert into t14 select i, i, i, i, i, i, i, i, i, 
i, i, i, i, i, i, i, i,\n> i, i, i, i, i, i, i, i, i from (select repeat('123456789', 10000) from\n> generate_series(1,5000)) i;\n> analyze t14;\n>\n>\n> I also tested two scenarios for each testing.\n>\n> One is to clean up all kernel caches (page & inode & dentry on Linux)\n> using the command below and then run the test,\n> sync; echo 3 > /proc/sys/vm/drop_caches\n> After running all tests all relation files will be in kernel cache (my\n> test system memory is large enough to accommodate all relation files),\n> then I run the tests again. I run like this because in real scenario the\n> result of the test should be among the two results. Also I rerun\n> each test and finally I calculate the average results as the experiment\n> results. Below are some results:\n>\n>\n> scenario1: All related kernel caches are cleaned up (note the first two\n> columns are time with second).\n>\n> baseline patch diff% SQL\n>\n>\n>\n>\n> 10.1 5.57 44.85% create table tt as select * from t11;\n>\n> 10.7 5.52 48.41% create table tt as select a,b,c from t11;\n>\n> 9.57 10.2 -6.58% create table tt as select * from t12;\n>\n> 9.64 8.63 10.48% create table tt as select\n> a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4 from t12;\n>\n> 14.2 14.46 -1.83% create table tt as select * from t13;\n>\n> 11.88 12.05 -1.43% create table tt as select\n> a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4,a5,a6 from\n> t13;\n>\n> 3.17 3.25 -2.52% create table tt as select * from t14;\n>\n>\n> 2.93 3.12 -6.48% create table tt as select\n> a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y from t14;\n>\n>\n>\n> scenario2: all related kernel caches are populated after previous testing.\n>\n>\n>\n>\n>\n> baseline patch diff% SQL\n>\n>\n>\n>\n> 9.6 4.97 48.23% create table tt as select * from t11;\n>\n> 10.41 5.32 48.90% create table tt as select a,b,c from t11;\n>\n> 9.12 9.52 -4.38% create table tt as select * from t12;\n>\n> 9.66 8.6 10.97% create table tt as select\n> 
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4 from t12;\n>\n>\n> 13.56 13.6 -0.30% create table tt as select * from t13;\n>\n>\n> 11.36 11.7 -2.99% create table tt as select\n> a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,a1,a2,a3,a4,a5,a6 from\n> t13;\n>\n> 3.08 3.13 -1.62% create table tt as select * from t14;\n>\n>\n> 2.95 3.03 -2.71% create table tt as select\n> a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y from t14;\n>\n> From above we can get some tentative conclusions:\n>\n> 1. t11: For short-size tables, batch insert improves much (40%+).\n>\n> 2. t12: For BufferHeapTupleTableSlot, the patch slows down 4.x%-6.x%, but\n> for VirtualTupleTableSlot it improves 10.x%.\n> If we look at execTuples.c, it looks like this is quite relevant to\n> additional memory copy. It seems that VirtualTupleTableSlot is\n> more common than the BufferHeapTupleTableSlot so probably the current code\n> should be fine for most real cases. Or it's possible\n> to determine multi-insert also according to the input slot tuple but this\n> seems to be ugly in code. Or continue to lower the threshold a bit\n> so that \"create table tt as select * from t12;\" also improves although\n> this hurts the VirtualTupleTableSlot case.\n>\n>\nTo alleviate this, I tuned MAX_TUP_LEN_FOR_MULTI_INSERT a bit and set it\nfrom 2000 to 1600. With a table with 24 name-typed columns (total size\n1536), I tried both\ncase1: create table tt as select * from t12;\ncase2: create table tt as\nselect a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w from t12;\n\nThis patch increases the performance for both. 
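[Editor's sketch] The fall-back rule discussed here — tuples longer than MAX_TUP_LEN_FOR_MULTI_INSERT take the plain single-insert path, shorter ones are buffered and flushed as a batch — can be sketched as a standalone simulation. All names and the batch cap of 1000 tuples are assumptions for illustration; the real patch operates on TupleTableSlots inside the executor:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative constants mirroring the values discussed in the thread. */
#define MAX_TUP_LEN_FOR_MULTI_INSERT 1600
#define MAX_MULTI_INSERT_TUPLES      1000   /* hypothetical batch cap */

typedef struct BatchState
{
    int     nbuffered;      /* tuples currently buffered */
    size_t  buffered_bytes; /* their accumulated payload size */
    int     nflushes;       /* how many multi-insert flushes happened */
    int     nsingle;        /* tuples that took the single-insert path */
} BatchState;

/* Flush the buffered tuples (stands in for heap_multi_insert()). */
static void
flush_batch(BatchState *st)
{
    if (st->nbuffered > 0)
    {
        st->nflushes++;
        st->nbuffered = 0;
        st->buffered_bytes = 0;
    }
}

/*
 * Route one incoming tuple of length 'len': long tuples fall back to
 * single insert; short ones are buffered until the batch is full.
 */
static void
receive_tuple(BatchState *st, size_t len)
{
    if (len > MAX_TUP_LEN_FOR_MULTI_INSERT)
    {
        /* Flush what is buffered first so tuples keep arrival order. */
        flush_batch(st);
        st->nsingle++;
        return;
    }
    st->nbuffered++;
    st->buffered_bytes += len;
    if (st->nbuffered >= MAX_MULTI_INSERT_TUPLES)
        flush_batch(st);
}
```

The point of the sketch is only the routing policy: a single oversized tuple forces a flush of the pending batch before being written alone.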
Note, of course, this change\n(MAX_TUP_LEN_FOR_MULTI_INSERT) does not affect the test results of previous\nt11, t13, t14 in theory since the code path is not affected.\n\nkernel caches cleaned up:\n\n baseline(s) patch(s) diff%\ncase1: 7.65 7.30 4.6%\ncase2: 7.75 6.80 12.2%\n\n\nrelation files are in cache:\n\ncase1: 7.09 6.66 6.1%\ncase2: 7.49 6.83 8.8%\n\nWe do not need to find a larger threshold that just makes the case1\nimprovement near to zero since on other test environments the threshold\nmight be a bit different so it should be set as a rough value, and it seems\nthat 1600 should benefit most cases.\n\nI attached the v3 patch which just has the MAX_TUP_LEN_FOR_MULTI_INSERT\nchange.\n\nThanks.\n\n\n\n> 3. for t13, new code still uses single insert so the difference should be\n> small. I just want to see the regression when even we use \"single insert\".\n>\n> 4. For toast case t14, the degradation is small, not a big deal.\n>\n> By the way, did we try or think about allow better prefetch (on Linux) for\n> seqscan. i.e. POSIX_FADV_SEQUENTIAL in posix_fadvise() to enlarge the\n> kernel readahead window. 
Suppose this should help if seq tuple handling is\n> faster than default kernel readahead setting.\n>\n>\n> v2 patch is attached.\n>\n>\n> On Thu, Mar 7, 2019 at 4:54 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n>> On 06/03/2019 22:06, Paul Guo wrote:\n>> > The patch also modifies heap_multi_insert() a bit to do a bit further\n>> > code-level optimization by using static memory, instead of using memory\n>> > context and dynamic allocation.\n>>\n>> If toasting is required, heap_prepare_insert() creates a palloc'd tuple.\n>> That is still leaked to the current memory context.\n>>\n>> Leaking into the current memory context is not a bad thing, because\n>> resetting a memory context is faster than doing a lot of pfree() calls.\n>> The callers just need to be prepared for that, and use a short-lived\n>> memory context.\n>>\n>> > By the way, while looking at the code, I noticed that there are 9 local\n>> > arrays with large length in toast_insert_or_update() which seems to be\n>> a\n>> > risk of stack overflow. Maybe we should put it as static or global.\n>>\n>> Hmm. We currently reserve 512 kB between the kernel's limit, and the\n>> limit we check in check_stack_depth(). See STACK_DEPTH_SLOP. Those\n>> arrays add up to 52800 bytes on a 64-bit maching, if I did my math\n>> right. So there's still a lot of headroom. I agree that it nevertheless\n>> seems a bit excessive, though.\n>>\n>> > With the patch,\n>> >\n>> > Time: 4728.142 ms (00:04.728)\n>> > Time: 14203.983 ms (00:14.204)\n>> > Time: 1008.669 ms (00:01.009)\n>> >\n>> > Baseline,\n>> > Time: 11096.146 ms (00:11.096)\n>> > Time: 13106.741 ms (00:13.107)\n>> > Time: 1100.174 ms (00:01.100)\n>>\n>> Nice speedup!\n>>\n>> > While for toast and large column size there is < 10% decrease but for\n>> > small column size the improvement is super good. Actually if I hardcode\n>> > the batch count as 4 all test cases are better but the improvement for\n>> > small column size is smaller than that with current patch. 
Pretty much\n>> > the number 4 is quite case specific so I can not hardcode that in the\n>> > patch. Of course we could further tune that but the current value seems\n>> > to be a good trade-off?\n>>\n>> Have you done any profiling, on why the multi-insert is slower with\n>> large tuples? In principle, I don't see why it should be slower.\n>>\n>> - Heikki\n>>\n>", "msg_date": "Tue, 18 Jun 2019 18:06:36 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On 17/06/2019 15:53, Paul Guo wrote:\n> I noticed that to do batch insert, we might need additional memory copy\n> sometimes comparing with \"single insert\"\n> (that should be the reason that we previously saw a bit regressions) so a\n> good solution seems to fall back\n> to \"single insert\" if the tuple length is larger than a threshold. I set\n> this as 2000 after quick testing.\n\nWhere does the additional memory copy come from? Can we avoid doing it \nin the multi-insert case?\n\n- Heikki\n\n\n", "msg_date": "Thu, 1 Aug 2019 21:54:56 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Fri, Aug 2, 2019 at 2:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 17/06/2019 15:53, Paul Guo wrote:\n> > I noticed that to do batch insert, we might need additional memory copy\n> > sometimes comparing with \"single insert\"\n> > (that should be the reason that we previously saw a bit regressions) so a\n> > good solution seems to fall back\n> > to \"single insert\" if the tuple length is larger than a threshold. I set\n> > this as 2000 after quick testing.\n>\n> Where does the additional memory copy come from? Can we avoid doing it\n> in the multi-insert case?\n\n\nHi Heikki,\n\nSorry for the late reply. 
I took some time on looking at & debugging the\ncode of TupleTableSlotOps\nof various TupleTableSlot types carefully, especially the\nBufferHeapTupleTableSlot case on which\nwe seemed to see regression if no threshold is set, also debugging &\ntesting more of the CTAS case.\nI found my previous word \"additional memory copy\" (mainly tuple content\ncopy against single insert)\nis wrong based on the latest code (probably is wrong also with previous\ncode). So in theory\nwe should not worry about additional tuple copy overhead now, and then I\ntried the patch without setting\nmulti-insert threshold as attached.\n\nTo make test results more stable, this time I run a simple ' select\ncount(*) from tbl' before each CTAS to\nwarm up the shared buffer, run checkpoint before each CTAS, disable\nautovacuum by setting\n'autovacuum = off', set larger shared buffers (but < 25% of total memory\nwhich is recommended\nby PG doc) so that CTAS all hits shared buffer read if there exists warm\nbuffers (double-checked via\nexplain(analyze, buffers)). These seem to be reasonable for performance\ntesting. 
Each kind of CTAS\ntesting is run three times (Note before each run we do warm up and\ncheckpoint as mentioned).\n\nI mainly tested the t12 (normal table with tuple size ~ 2k) case since for\nothers our patch either\nperforms better or similarly.\n\nPatch: 1st_run 2nd_run 3rd_run\n\nt12_BufferHeapTuple 7883.400 7549.966 8090.080\nt12_Virtual 8041.637 8191.317 8182.404\n\nBaseline: 1st_run 2nd_run 3rd_run\n\nt12_BufferHeapTuple: 8264.290 7508.410 7681.702\nt12_Virtual 8167.792 7970.537 8106.874\n\nI actually roughly tested other tables we mentioned also (t11 and t14) -\nthe test results and conclusions are same.\nt12_BufferHeapTuple means: create table tt as select * from t12;\nt12_Virtual means: create table tt as select *partial columns* from t12;\n\nSo it looks like for t12 the results between our code and baseline are\nsimilar so not setting\nthreshoud seem to be good though it looks like t12_BufferHeapTuple test\nresults varies a\nlot (at most 0.5 seconds) for both our patch and baseline vs the virtual\ncase which is quite stable.\n\nThis actually confused me a bit given we've cached the source table in\nshared buffers. I suspected checkpoint affects,\nso I disabled checkpoint by setting max_wal_size = 3000 during CTAS, the\nBufferHeapTuple case (see below)\nstill varies some. 
I'm not sure what's the reason but this does not seem to\nbe a blocker for the patch.\nPatch: 1st_run 2nd_run 3rd_run\nt12_BufferHeapTuple 7717.304 7413.259 7452.773\nt12_Virtual 7445.742 7483.148 7593.583\n\nBaseline: 1st_run 2nd_run 3rd_run\nt12_BufferHeapTuple 8186.302 7736.541 7759.056\nt12_Virtual 8004.880 8096.712 7961.483", "msg_date": "Mon, 9 Sep 2019 18:31:54 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Mon, Sep 9, 2019 at 4:02 PM Paul Guo <pguo@pivotal.io> wrote:\n>\n> So in theory\n> we should not worry about additional tuple copy overhead now, and then I\ntried the patch without setting\n> multi-insert threshold as attached.\n>\n\nI reviewed your patch today.  It looks good overall.  My concern is that\nthe ExecFetchSlotHeapTuple call does not seem appropriate.  In a generic\nplace such as createas.c, we should be using generic tableam API only.\nHowever, I can also see that there is no better alternative.  We need to\ncompute the size of accumulated tuples so far, in order to decide whether\nto stop accumulating tuples.  There is no convenient way to obtain the\nlength of the tuple, given a slot.  How about making that decision solely\nbased on number of tuples, so that we can avoid ExecFetchSlotHeapTuple call\naltogether?\n\nThe multi insert copy code deals with index tuples also, which I don't see\nin the patch.  Don't we need to consider populating indexes?\n\nAsim\n", "msg_date": "Wed, 25 Sep 2019 16:09:19 +0530", "msg_from": "Asim R P <apraveen@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On 2019-Sep-25, Asim R P wrote:\n\n> I reviewed your patch today.  It looks good overall.  My concern is that\n> the ExecFetchSlotHeapTuple call does not seem appropriate.  In a generic\n> place such as createas.c, we should be using generic tableam API only.\n> However, I can also see that there is no better alternative.  We need to\n> compute the size of accumulated tuples so far, in order to decide whether\n> to stop accumulating tuples.  There is no convenient way to obtain the\n> length of the tuple, given a slot.  How about making that decision solely\n> based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple call\n> altogether?\n\n... maybe we should add a new operation to slots, that returns the\n(approximate?) 
(I'm not\nsure however what to do about TOAST considerations -- is it size in\nmemory that we're worried about?)\n\nAlso:\n\n+ myState->mi_slots_size >= 65535)\n\nThis magic number should be placed in a define next to the other one,\nbut I'm not sure that heapam.h is a good location, since surely this\napplies to matviews in other table AMs too.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Sep 2019 10:43:27 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Asim Thanks for the review.\n\nOn Wed, Sep 25, 2019 at 6:39 PM Asim R P <apraveen@pivotal.io> wrote:\n\n>\n>\n>\n> On Mon, Sep 9, 2019 at 4:02 PM Paul Guo <pguo@pivotal.io> wrote:\n> >\n> > So in theory\n> > we should not worry about additional tuple copy overhead now, and then I\n> tried the patch without setting\n> > multi-insert threshold as attached.\n> >\n>\n> I reviewed your patch today. It looks good overall. My concern is that\n> the ExecFetchSlotHeapTuple call does not seem appropriate. In a generic\n> place such as createas.c, we should be using generic tableam API only.\n> However, I can also see that there is no better alternative. We need to\n> compute the size of accumulated tuples so far, in order to decide whether\n> to stop accumulating tuples. There is no convenient way to obtain the\n> length of the tuple, given a slot. How about making that decision solely\n> based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple call\n> altogether?\n>\n\nFor heapam, ExecFetchSlotHeapTuple() will be called again in\nheap_multi_insert() to prepare the final multi-insert. if we check\nExecFetchSlotHeapTuple(), we could find that calling it multiple time just\ninvolves very very few overhead for the BufferHeapTuple case. 
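[Editor's sketch] The claim that a second ExecFetchSlotHeapTuple() call is nearly free for a buffer/heap slot can be illustrated with a toy model: the first fetch materializes and caches the tuple, later fetches just return the cached copy. The names below are invented; this is not the actual slot code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* A toy slot modelling only the "materialize once, reuse afterwards"
 * behaviour that makes repeated fetches cheap. */
typedef struct ToySlot
{
    bool  has_cached_tuple;
    int  *cached_tuple;      /* stands in for a HeapTuple */
    int   materialize_calls; /* the only expensive step, counted */
} ToySlot;

/* First call builds the tuple; later calls hand back the cache. */
static int *
toy_fetch_heap_tuple(ToySlot *slot)
{
    if (!slot->has_cached_tuple)
    {
        slot->cached_tuple = malloc(sizeof(int));
        *slot->cached_tuple = 42;   /* pretend tuple payload */
        slot->has_cached_tuple = true;
        slot->materialize_calls++;
    }
    return slot->cached_tuple;
}
```

Calling the fetch any number of times performs the costly materialization exactly once, which is the behaviour being argued about above.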
Note for\nvirtual tuple case the 2nd ExecFetchSlotHeapTuple() call still copies slot\ncontents, but we've called ExecCopySlot(batchslot, slot); to copy to a\nBufferHeap case so no worries for the virtual tuple case (as a source).\n\nPreviously (long ago) I probably understood the code incorrectly so had the\nconcern also. I used sampling to do that (for variable-length tuple), but\nnow apparently we do not need that.\n\n>\n> The multi insert copy code deals with index tuples also, which I don't see\n> in the patch. Don't we need to consider populating indexes?\n>\n\ncreate table as/create mat view DDL does not involve index creation for the\ntable/matview. The code seems to be usable in RefreshMatView also,\nfor that we need to consider if we use multi-insert in that code.\n", "msg_date": "Fri, 27 Sep 2019 12:18:31 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Thu, Sep 26, 2019 at 9:43 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Sep-25, Asim R P wrote:\n>\n> > I reviewed your patch today.  It looks good overall.  My concern is that\n> > the ExecFetchSlotHeapTuple call does not seem appropriate.  In a generic\n> > place such as createas.c, we should be using generic tableam API only.\n> > However, I can also see that there is no better alternative.  We need to\n> > compute the size of accumulated tuples so far, in order to decide whether\n> > to stop accumulating tuples.  There is no convenient way to obtain the\n> > length of the tuple, given a slot.  How about making that decision solely\n> > based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple call\n> > altogether?\n>\n> ... maybe we should add a new operation to slots, that returns the\n> (approximate?) size of a tuple?  That would make this easy.  
(I'm not\n> sure however what to do about TOAST considerations -- is it size in\n> memory that we're worried about?)\n>\n> Also:\n>\n> +       myState->mi_slots_size >= 65535)\n>\n> This magic number should be placed in a define next to the other one,\n> but I'm not sure that heapam.h is a good location, since surely this\n> applies to matviews in other table AMs too.\n>\n> yes defining 65535 seems better. Let's fix this one later when having\nmore feedback. Thanks.\n", "msg_date": "Fri, 27 Sep 2019 12:22:50 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Thu, Sep 26, 2019 at 7:13 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n>\n> On 2019-Sep-25, Asim R P wrote:\n>\n> > I reviewed your patch today.  It looks good overall.  My concern is that\n> > the ExecFetchSlotHeapTuple call does not seem appropriate.  In a generic\n> > place such as createas.c, we should be using generic tableam API only.\n> > However, I can also see that there is no better alternative.  We need to\n> > compute the size of accumulated tuples so far, in order to decide\nwhether\n> > to stop accumulating tuples.  There is no convenient way to obtain the\n> > length of the tuple, given a slot.  How about making that decision\nsolely\n> > based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple\ncall\n> > altogether?\n>\n> ... maybe we should add a new operation to slots, that returns the\n> (approximate?) size of a tuple?  That would make this easy.  (I'm not\n> sure however what to do about TOAST considerations -- is it size in\n> memory that we're worried about?)\n\nThat will help.  For slots containing heap tuples, heap_compute_data_size()\nis what we need.  Approximate size is better than nothing.\nIn case of CTAS, we are dealing with slots returned by a scan node.\nWouldn't TOAST datums be already expanded in those slots?\n\nAsim\n", "msg_date": "Fri, 27 Sep 2019 12:33:47 +0530", "msg_from": "Asim R P <apraveen@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Hi,\n\nOn 2019-09-26 10:43:27 -0300, Alvaro Herrera wrote:\n> On 2019-Sep-25, Asim R P wrote:\n> \n> > I reviewed your patch today.  It looks good overall.  My concern is that\n> > the ExecFetchSlotHeapTuple call does not seem appropriate.  In a generic\n> > place such as createas.c, we should be using generic tableam API\n> > only.\n\nIndeed.\n\n\n> > However, I can also see that there is no better alternative.  We need to\n> > compute the size of accumulated tuples so far, in order to decide whether\n> > to stop accumulating tuples.  There is no convenient way to obtain the\n> > length of the tuple, given a slot.  How about making that decision solely\n> > based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple call\n> > altogether?\n> \n> ... maybe we should add a new operation to slots, that returns the\n> (approximate?) size of a tuple?\n\nHm, I'm not convinced that it's worth adding that as a dedicated\noperation. 
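[Editor's sketch] The slot-size operation being proposed and questioned here can be sketched abstractly: each slot type would supply a callback returning an approximate tuple size, and the batching code would sum those results to decide when to flush. This is a toy model with invented names, not the actual TupleTableSlotOps interface:

```c
#include <assert.h>
#include <stddef.h>

/* Toy illustration of a per-slot "get size" operation: a callback in a
 * small vtable-like struct, summed by the batching code. */
typedef struct ToySlot
{
    size_t (*get_size)(const struct ToySlot *slot);  /* proposed op */
    size_t  datalen;                                 /* toy payload size */
} ToySlot;

static size_t
virtual_slot_size(const ToySlot *slot)
{
    return slot->datalen;       /* e.g. a sum of datum sizes */
}

/* Accumulate slot sizes; return how many tuples fit before a flush. */
static int
accumulate(ToySlot *slots, int n, size_t limit)
{
    size_t  total = 0;
    int     i;

    for (i = 0; i < n; i++)
    {
        total += slots[i].get_size(&slots[i]);
        if (total >= limit)
            return i + 1;       /* flush after this many tuples */
    }
    return n;                   /* batch not yet full */
}
```

The open question in the thread — whether the size should mean in-memory, on-disk, or post-TOAST size — is exactly what the callback would have to pin down.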
It's not that clear what it'd exactly mean anyway - what\nwould it measure? As referenced in the slot? As if it were stored on\ndisk? etc?\n\nI wonder if the right answer wouldn't be to just measure the size of a\nmemory context containing the batch slots, or something like that.\n\n\n> That would make this easy. (I'm not sure however what to do about\n> TOAST considerations -- is it size in memory that we're worried\n> about?)\n\nThe in-memory size is probably fine, because in all likelihood the\ntoasted cols are just going to point to on-disk datums, no?\n\n\n> Also:\n> \n> + myState->mi_slots_size >= 65535)\n> \n> This magic number should be placed in a define next to the other one,\n> but I'm not sure that heapam.h is a good location, since surely this\n> applies to matviews in other table AMs too.\n\nRight. I think it'd be better to move this into an AM independent place.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Sep 2019 14:46:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Hi,\n\nOn 2019-09-09 18:31:54 +0800, Paul Guo wrote:\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index e9544822bf..8a844b3b5f 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -2106,7 +2106,6 @@ heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples,\n> \t\t\t\t CommandId cid, int options, BulkInsertState bistate)\n> {\n> \tTransactionId xid = GetCurrentTransactionId();\n> -\tHeapTuple *heaptuples;\n> \tint\t\t\ti;\n> \tint\t\t\tndone;\n> \tPGAlignedBlock scratch;\n> @@ -2115,6 +2114,10 @@ heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples,\n> \tSize\t\tsaveFreeSpace;\n> \tbool\t\tneed_tuple_data = RelationIsLogicallyLogged(relation);\n> \tbool\t\tneed_cids = RelationIsAccessibleInLogicalDecoding(relation);\n> +\t/* Declare it as 
static to let this memory be not on stack. */\n> +\tstatic HeapTuple\theaptuples[MAX_MULTI_INSERT_TUPLES];\n> +\n> +\tAssert(ntuples <= MAX_MULTI_INSERT_TUPLES);\n> \n> \t/* currently not needed (thus unsupported) for heap_multi_insert() */\n> \tAssertArg(!(options & HEAP_INSERT_NO_LOGICAL));\n> @@ -2124,7 +2127,6 @@ heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples,\n> \t\t\t\t\t\t\t\t\t\t\t\t HEAP_DEFAULT_FILLFACTOR);\n> \n> \t/* Toast and set header data in all the slots */\n> -\theaptuples = palloc(ntuples * sizeof(HeapTuple));\n> \tfor (i = 0; i < ntuples; i++)\n> \t{\n> \t\tHeapTuple\ttuple;\n\nI don't think this is a good idea. We shouldn't unnecessarily allocate\n8KB on the stack. Is there any actual evidence this is a performance\nbenefit? To me this just seems like it'll reduce the flexibility of the\nAPI, without any benefit. I'll also note that you've apparently not\nupdated tableam.h to document this new restriction.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Sep 2019 14:49:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Sat, Sep 28, 2019 at 5:49 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-09-09 18:31:54 +0800, Paul Guo wrote:\n> > diff --git a/src/backend/access/heap/heapam.c\n> b/src/backend/access/heap/heapam.c\n> > index e9544822bf..8a844b3b5f 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -2106,7 +2106,6 @@ heap_multi_insert(Relation relation,\n> TupleTableSlot **slots, int ntuples,\n> > CommandId cid, int options,\n> BulkInsertState bistate)\n> > {\n> > TransactionId xid = GetCurrentTransactionId();\n> > - HeapTuple *heaptuples;\n> > int i;\n> > int ndone;\n> > PGAlignedBlock scratch;\n> > @@ -2115,6 +2114,10 @@ heap_multi_insert(Relation relation,\n> TupleTableSlot **slots, int ntuples,\n> > Size 
saveFreeSpace;\n> > bool need_tuple_data =\n> RelationIsLogicallyLogged(relation);\n> > bool need_cids =\n> RelationIsAccessibleInLogicalDecoding(relation);\n> > + /* Declare it as static to let this memory be not on stack. */\n> > + static HeapTuple heaptuples[MAX_MULTI_INSERT_TUPLES];\n> > +\n> > + Assert(ntuples <= MAX_MULTI_INSERT_TUPLES);\n> >\n> > /* currently not needed (thus unsupported) for heap_multi_insert()\n> */\n> > AssertArg(!(options & HEAP_INSERT_NO_LOGICAL));\n> > @@ -2124,7 +2127,6 @@ heap_multi_insert(Relation relation,\n> TupleTableSlot **slots, int ntuples,\n> >\n> HEAP_DEFAULT_FILLFACTOR);\n> >\n> > /* Toast and set header data in all the slots */\n> > - heaptuples = palloc(ntuples * sizeof(HeapTuple));\n> > for (i = 0; i < ntuples; i++)\n> > {\n> > HeapTuple tuple;\n>\n> I don't think this is a good idea. We shouldn't unnecessarily allocate\n> 8KB on the stack. Is there any actual evidence this is a performance\n> benefit? To me this just seems like it'll reduce the flexibility of the\n>\n\nPrevious heaptuples is palloc-ed in each batch, which should be slower than\npre-allocated & reusing memory in theory.\n\nAPI, without any benefit. 
I'll also note that you've apparently not\n> updated tableam.h to document this new restriction.\n>\n\nYes it should be moved from heapam.h to that file along with the 65535\ndefinition.\n", "msg_date": "Mon, 30 Sep 2019 12:01:14 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": ">\n>\n> > > However, I can also see that there is no better alternative. We need\n> to\n> > > compute the size of accumulated tuples so far, in order to decide\n> whether\n> > > to stop accumulating tuples. There is no convenient way to obtain the\n> > > length of the tuple, given a slot. How about making that decision\n> solely\n> > > based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple\n> call\n> > > altogether?\n> >\n> > ... maybe we should add a new operation to slots, that returns the\n> > (approximate?) 
size of a tuple?\n>\n\nHm, I'm not convinced that it's worth adding that as a dedicated\noperation. It's not that clear what it'd exactly mean anyway - what\nwould it measure? As referenced in the slot? As if it were stored on\ndisk? etc?\n\nI wonder if the right answer wouldn't be to just measure the size of a\nmemory context containing the batch slots, or something like that.\n\n\nProbably a better way is to move those logic (append slot to slots, judge\nwhen to flush, flush, clean up slots) into table_multi_insert()? Generally\nthe final implementation of table_multi_insert() should be able to know\nthe sizes easily. One concern is that currently just COPY in the repo uses\nmulti insert, so not sure if other callers in the future want their own\nlogic (or set up a flag to allow customization but seems a bit\nover-designed?).\n", "msg_date": "Mon, 30 Sep 2019 12:12:31 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Hi,\n\nOn 2019-09-30 12:12:31 +0800, Paul Guo wrote:\n> > > > However, I can also see that there is no better alternative. We need\n> > to\n> > > > compute the size of accumulated tuples so far, in order to decide\n> > whether\n> > > > to stop accumulating tuples. There is no convenient way to obtain the\n> > > > length of the tuple, given a slot. How about making that decision\n> > solely\n> > > > based on number of tuples, so that we can avoid ExecFetchSlotHeapTuple\n> > call\n> > > > altogether?\n> > >\n> > > ... maybe we should add a new operation to slots, that returns the\n> > > (approximate?) size of a tuple?\n> >\n> > Hm, I'm not convinced that it's worth adding that as a dedicated\n> > operation. It's not that clear what it'd exactly mean anyway - what\n> > would it measure? As referenced in the slot? As if it were stored on\n> > disk? etc?\n> >\n> > I wonder if the right answer wouldn't be to just measure the size of a\n> > memory context containing the batch slots, or something like that.\n> >\n> >\n> Probably a better way is to move those logic (append slot to slots, judge\n> when to flush, flush, clean up slots) into table_multi_insert()?\n\nThat does not strike me as a good idea. The upper layer is going to need\nto manage some resources (e.g. it's the only bit that knows about how to\nmanage lifetime of the incoming data), and by exposing it to each AM\nwe're going to duplicate the necessary code too.\n\n\n> Generally the final implementation of table_multi_insert() should be\n> able to know the sizes easily. 
One concern is that currently just COPY\n> in the repo uses multi insert, so not sure if other callers in the\n> future want their own logic (or set up a flag to allow customization\n> but seems a bit over-designed?).\n\nAnd that is also a concern, it seems unlikely that we'll get the\ninterface good.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Sep 2019 00:38:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Mon, Sep 30, 2019 at 12:38:02AM -0700, Andres Freund wrote:\n> That does not strike me as a good idea. The upper layer is going to need\n> to manage some resources (e.g. it's the only bit that knows about how to\n> manage lifetime of the incoming data), and by exposing it to each AM\n> we're going to duplicate the necessary code too.\n\nLatest status of the thread maps with a patch which still applies,\nstill the discussion could go on as more review is needed. So I have\nmoved it to next CF.\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 18:18:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "I took some time on digging the issue yesterday so the main concern of the\npatch is to get the tuple length from ExecFetchSlotHeapTuple().\n\n+ ExecCopySlot(batchslot, slot);\n+ tuple = ExecFetchSlotHeapTuple(batchslot, true, NULL);\n+\n+ myState->mi_slots_num++;\n+ myState->mi_slots_size += tuple->t_len;\n\nWe definitely should remove ExecFetchSlotHeapTuple() here but we need to\nknow the tuple length (at least rough length). 
One solution might be using\nmemory context stat info as mentioned, the code looks like this.\n\ntup_len = MemoryContextUsedSize(batchslot->tts_mcxt);\nExecCopySlot(batchslot, slot);\nExecMaterializeSlot(batchslot);\ntup_len = MemoryContextUsedSize(batchslot->tts_mcxt) - tup_len;\n\nMemoryContextUsedSize() is added to calculate the total used size (simply\nhack to use the stats interface).\n\n+int\n+MemoryContextUsedSize(MemoryContext context)\n+{\n+ MemoryContextCounters total;\n+\n+ memset(&total, 0, sizeof(total));\n+ context->methods->stats(context, NULL, NULL, &total);\n+\n+ return total.totalspace - total.freespace;\n+}\n\nThis basically works but there are concerns:\n\n The length is not accurate (though we do not need to be that accurate)\nsince there are\n some additional memory allocations, but we are not sure if the size is\nnot much\n larger than the real length for some slot types in the future and I'm\nnot sure whether we\n definitely allocate at least the tuple length in the memory context\nafter materialization\n for all slot types in the future. Last is that the code seems to be a\nbit ugly also.\n\n As a reference, For \"create table t1 as select * from t2\", the above\ncode returns\n \"tuple length\" is 88 (real tuple length is 4).\n\nAnother solution is that maybe return the real size\nin ExecMaterializeSlot()? e.g.\nExecMaterializeSlot(slot, &tup_len); For this we probably want to store the\nlength\nin the slot struct for performance.\n\nFor the COPY case the tuple length is known in advance but I can image more\ncases\nwhich do not know the size but need that for the multi_insert interface, at\nleast I'm\nwondering if we should use that in 'refresh matview' and fast path for\nInsert node\n(I heard some complaints about the performance of \"insert into tbl from\nselect...\"\nfrom some of our users)? 
So the concern is not just for the case in this\npatch.\n\nBesides, My colleagues Ashwin Agrawal and Adam Lee found maybe could\ntry raw_heap_insert() similar code for ctas and compare since it is do\ninsert in\na new created table. That would involve more discussions, much more code\nchange and need to test more (stability and performance). So multi insert\nseems\nto be a stable solution in a short time given that has been used in COPY\nfor a\nlong time?\n\nWhatever the solution for CTAS we need to address the concern of tuple size\nfor multi insert cases.\n", "msg_date": "Fri, 17 Jan 2020 15:02:06 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "In an off-list discussion with Paul, we decided to withdraw this patch for now\nand instead create a new entry when there is a re-worked patch. This has now\nbeen done in the CF app.\n\ncheers ./daniel\n\n\n", "msg_date": "Fri, 31 Jul 2020 00:08:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "Hello Paul-san,\n\nFrom: Daniel Gustafsson <daniel@yesql.se>\n> In an off-list discussion with Paul, we decided to withdraw this patch for now\n> and instead create a new entry when there is a re-worked patch. This has\n> now\n> been done in the CF app.\n\nWould you mind if I take over this patch for PG 15? I find this promising, as Bharath-san demonstrated good performance by combining your patch and the parallel CTAS that Bharath-san has been working on. 
We'd like to do things that enhance parallelism.\n\nPlease allow me to start posting the revised patches next week if I can't get your reply.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 26 May 2021 06:48:43 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Wed, May 26, 2021 at 12:18 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Hello Paul-san,\n>\n> From: Daniel Gustafsson <daniel@yesql.se>\n> > In an off-list discussion with Paul, we decided to withdraw this patch for now\n> > and instead create a new entry when there is a re-worked patch. This has\n> > now\n> > been done in the CF app.\n>\n> Would you mind if I take over this patch for PG 15? I find this promising, as Bharath-san demonstrated good performance by combining your patch and the parallel CTAS that Bharath-san has been working on. We'd like to do things that enhance parallelism.\n>\n> Please allow me to start posting the revised patches next week if I can't get your reply.\n\nHi,\n\nI think the \"New Table Access Methods for Multi and Single Inserts\"\npatches at [1] make multi insert usage easy for COPY, CTAS/Create Mat\nView, Refresh Mat View and so on. 
It also has a patch for multi\ninserts in CTAS and Refresh Mat View\n(v6-0002-CTAS-and-REFRESH-Mat-View-With-New-Multi-Insert-T.patch).\nPlease see that thread and feel free to review it.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXdrOmB6Na9amHWZHKvRT3Z0nwTRsCwoMT-npOBtmXLXg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 12:36:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" }, { "msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\n> I think the \"New Table Access Methods for Multi and Single Inserts\"\r\n> patches at [1] make multi insert usage easy for COPY, CTAS/Create Mat\r\n> View, Refresh Mat View and so on. It also has a patch for multi\r\n> inserts in CTAS and Refresh Mat View\r\n> (v6-0002-CTAS-and-REFRESH-Mat-View-With-New-Multi-Insert-T.patch).\r\n> Please see that thread and feel free to review it.\r\n\r\nOuch, I didn't notice that the patch was reborn in that thread. OK.\r\n\r\nCould you add it to the CF if you haven't yet?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 26 May 2021 07:19:01 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Batch insert in CTAS/MatView code" }, { "msg_contents": "On Wed, May 26, 2021 at 12:49 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > I think the \"New Table Access Methods for Multi and Single Inserts\"\n> > patches at [1] make multi insert usage easy for COPY, CTAS/Create Mat\n> > View, Refresh Mat View and so on. 
It also has a patch for multi\n> > inserts in CTAS and Refresh Mat View\n> > (v6-0002-CTAS-and-REFRESH-Mat-View-With-New-Multi-Insert-T.patch).\n> > Please see that thread and feel free to review it.\n>\n> Ouch, I didn't notice that the patch was reborn in that thread. OK.\n>\n> Could you add it to the CF if you haven't yet?\n\nIt already has one - https://commitfest.postgresql.org/33/2871/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 May 2021 13:00:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Batch insert in CTAS/MatView code" } ]
[ { "msg_contents": "Hi,\n\nSteps to reproduce on Master Sources -\n\n.) Perform initdb ( ./initdb -D data)\n\n.) set wal_level=logical in postgresql.conf file\n\n.)Connect to psql in single-user mode  ( ./postgres --single  -D data  \npostgres)\n\n.)Create logical replication slot followed by select * from \npg_logical_slot_get_changes\n\nbackend>  SELECT * FROM pg_create_logical_replication_slot('m7', \n'test_decoding','f');\n      1: slot_name    (typeid = 19, len = 64, typmod = -1, byval = f)\n      2: lsn    (typeid = 3220, len = 8, typmod = -1, byval = t)\n     ----\n2019-03-06 17:27:42.080 GMT [1132] LOG:  logical decoding found \nconsistent point at 0/163B9C8\n2019-03-06 17:27:42.080 GMT [1132] DETAIL:  There are no running \ntransactions.\n2019-03-06 17:27:42.080 GMT [1132] STATEMENT:   SELECT * FROM \npg_create_logical_replication_slot('m7', 'test_decoding','f');\n\n      1: slot_name = \"m7\"    (typeid = 19, len = 64, typmod = -1, byval = f)\n      2: lsn = \"0/163BA00\"    (typeid = 3220, len = 8, typmod = -1, \nbyval = t)\n     ----\nbackend> select * from pg_logical_slot_get_changes('m7',null,null);\n      1: lsn    (typeid = 3220, len = 8, typmod = -1, byval = t)\n      2: xid    (typeid = 28, len = 4, typmod = -1, byval = t)\n      3: data    (typeid = 25, len = -1, typmod = -1, byval = f)\n     ----\n2019-03-06 17:28:04.979 GMT [1132] LOG:  starting logical decoding for \nslot \"m7\"\n2019-03-06 17:28:04.979 GMT [1132] DETAIL:  Streaming transactions \ncommitting after 0/163BA00, reading WAL from 0/163B9C8.\n2019-03-06 17:28:04.979 GMT [1132] STATEMENT:  select * from \npg_logical_slot_get_changes('m7',null,null);\n\n2019-03-06 17:28:04.979 GMT [1132] LOG:  logical decoding found \nconsistent point at 0/163B9C8\n2019-03-06 17:28:04.979 GMT [1132] DETAIL:  There are no running \ntransactions.\n2019-03-06 17:28:04.979 GMT [1132] STATEMENT:  select * from \npg_logical_slot_get_changes('m7',null,null);\n\nTRAP: FailedAssertion(\"!(slot != ((void *)0) 
&& slot->active_pid != 0)\", \nFile: \"slot.c\", Line: 428)\nAborted (core dumped)\n\nStack trace -\n\n(gdb) bt\n#0  0x0000003746e325e5 in raise () from /lib64/libc.so.6\n#1  0x0000003746e33dc5 in abort () from /lib64/libc.so.6\n#2  0x00000000008a96ad in ExceptionalCondition (conditionName=<value \noptimized out>, errorType=<value optimized out>, fileName=<value \noptimized out>, lineNumber=<value optimized out>)\n     at assert.c:54\n#3  0x0000000000753253 in ReplicationSlotRelease () at slot.c:428\n#4  0x0000000000734dbd in pg_logical_slot_get_changes_guts \n(fcinfo=0x2771e48, confirm=true, binary=false) at logicalfuncs.c:355\n#5  0x0000000000640aa5 in ExecMakeTableFunctionResult \n(setexpr=0x27704f8, econtext=0x27703a8, argContext=<value optimized \nout>, expectedDesc=0x2792c10, randomAccess=false)\n     at execSRF.c:233\n#6  0x0000000000650c43 in FunctionNext (node=0x2770290) at \nnodeFunctionscan.c:95\n#7  0x000000000063fbad in ExecScanFetch (node=0x2770290, \naccessMtd=0x650950 <FunctionNext>, recheckMtd=0x6501d0 \n<FunctionRecheck>) at execScan.c:93\n#8  ExecScan (node=0x2770290, accessMtd=0x650950 <FunctionNext>, \nrecheckMtd=0x6501d0 <FunctionRecheck>) at execScan.c:143\n#9  0x00000000006390a7 in ExecProcNode (queryDesc=0x276fc28, \ndirection=<value optimized out>, count=0, execute_once=144) at \n../../../src/include/executor/executor.h:241\n#10 ExecutePlan (queryDesc=0x276fc28, direction=<value optimized out>, \ncount=0, execute_once=144) at execMain.c:1643\n#11 standard_ExecutorRun (queryDesc=0x276fc28, direction=<value \noptimized out>, count=0, execute_once=144) at execMain.c:362\n#12 0x000000000079e30b in PortalRunSelect (portal=0x27156b8, \nforward=<value optimized out>, count=0, dest=<value optimized out>) at \npquery.c:929\n#13 0x000000000079f671 in PortalRun (portal=0x27156b8, \ncount=9223372036854775807, isTopLevel=true, run_once=true, \ndest=0xa826a0, altdest=0xa826a0, completionTag=0x7ffc04af2690 \"\")\n     at pquery.c:770\n#14 
0x000000000079ba7b in exec_simple_query (query_string=0x27236f8 \n\"select * from pg_logical_slot_get_changes('m7',null,null);\\n\") at \npostgres.c:1215\n#15 0x000000000079d044 in PostgresMain (argc=<value optimized out>, \nargv=<value optimized out>, dbname=0x26c9010 \"postgres\", username=<value \noptimized out>) at postgres.c:4256\n#16 0x00000000006874eb in main (argc=5, argv=0x26a7e20) at main.c:224\n(gdb) q\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 Mar 2019 20:07:24 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Server Crash in logical decoding if used inside --single mode" }, { "msg_contents": "On 2019-Mar-06, tushar wrote:\n\n> backend> select * from pg_logical_slot_get_changes('m7',null,null);\n [...]\n> TRAP: FailedAssertion(\"!(slot != ((void *)0) && slot->active_pid != 0)\",\n> File: \"slot.c\", Line: 428)\n> Aborted (core dumped)\n\nSee argumentation in\nhttps://www.postgresql.org/message-id/flat/3b2f809f-326c-38dd-7a9e-897f957a4eb1%40enterprisedb.com\n-- essentially, we don't really care to support this case.\n\nIf you want to submit a patch to report an error before crashing, that's\nfine, but don't ask for this functionality to actually work.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 14:51:06 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Server Crash in logical decoding if used inside --single mode" } ]
[ { "msg_contents": "Removed unused variable, openLogOff.\n\nAntonin Houska\n\nDiscussion: http://postgr.es/m/30413.1551870730@localhost\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/93473c6ac805994a74e74ed13828c6c9433c8faf\n\nModified Files\n--------------\nsrc/backend/access/transam/xlog.c | 13 +++----------\n1 file changed, 3 insertions(+), 10 deletions(-)\n\n", "msg_date": "Wed, 06 Mar 2019 14:47:16 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Wed, Mar 06, 2019 at 02:47:16PM +0000, Robert Haas wrote:\n> Removed unused variable, openLogOff.\n\nIs that right for the report if data is written in chunks? The same\npatch has been proposed a couple of weeks ago, and I commented about\nit as follows:\nhttps://www.postgresql.org/message-id/20190129043439.GB3121@paquier.xyz\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 08:52:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Wed, Mar 6, 2019 at 6:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Mar 06, 2019 at 02:47:16PM +0000, Robert Haas wrote:\n> > Removed unused variable, openLogOff.\n>\n> Is that right for the report if data is written in chunks? The same\n> patch has been proposed a couple of weeks ago, and I commented about\n> it as follows:\n> https://www.postgresql.org/message-id/20190129043439.GB3121@paquier.xyz\n\nOh, sorry, I didn't see the earlier thread. You're right that this is\nmessed up: if we're going to report startOffset rather than\nopenLogOff, we need to report nleft rather than bytes. 
I would prefer\nto change nbytes -> nleft rather than anything else, though, because\nit seems to me that we should report the offset and length that\nactually failed, not the start of the whole chunk which partially\nsucceeded.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 7 Mar 2019 08:03:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Thu, Mar 07, 2019 at 08:03:19AM -0500, Robert Haas wrote:\n> Oh, sorry, I didn't see the earlier thread. You're right that this is\n> messed up: if we're going to report startOffset rather than\n> openLogOff, we need to report nleft rather than bytes. I would prefer\n> to change nbytes -> nleft rather than anything else, though, because\n> it seems to me that we should report the offset and length that\n> actually failed, not the start of the whole chunk which partially\n> succeeded.\n\nI found the original coding cleaner logically (perhaps a matter of\npersonal taste and I am quite used to it so I am under influence!) by\nreporting at which position it has failed when writing a given chunk.\nNow it is the second time that somebody is sending a patch for that in\na couple of weeks, so visibly people obviously would like to simplify\nthis code :)\n\nIf you want to keep this formulation, that's fine for me. However you\nshould really change it to nleft as you suggest, and not keep nbytes.\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 23:11:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Thu, Mar 07, 2019 at 11:11:16PM +0900, Michael Paquier wrote:\n> If you want to keep this formulation, that's fine for me. 
However you\n> should really change it to nleft as you suggest, and not keep nbytes.\n\nAfter sleeping on it, let's live with just switching to nleft in the\nmessage, without openLogOff as that's the second time folks complain\nabout the previous code. So I just propose the attached. Robert,\nothers, any objections? Perhaps you would prefer fixing it yourself?\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 10:27:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Fri, Mar 08, 2019 at 10:27:52AM +0900, Michael Paquier wrote:\n> After sleeping on it, let's live with just switching to nleft in the\n> message, without openLogOff as that's the second time folks complain\n> about the previous code. So I just propose the attached. Robert,\n> others, any objections? Perhaps you would prefer fixing it yourself?\n\nOkay, done.\n--\nMichael", "msg_date": "Mon, 11 Mar 2019 09:40:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Thu, Mar 7, 2019 at 8:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> After sleeping on it, let's live with just switching to nleft in the\n> message, without openLogOff as that's the second time folks complain\n> about the previous code. So I just propose the attached. Robert,\n> others, any objections? Perhaps you would prefer fixing it yourself?\n\nSorry that I didn't get to this before you did -- I was on PTO on\nFriday and did not work on the weekend.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Mon, 11 Mar 2019 15:30:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." 
}, { "msg_contents": "On Mon, Mar 11, 2019 at 03:30:22PM -0400, Robert Haas wrote:\n> Sorry that I didn't get to this before you did -- I was on PTO on\n> Friday and did not work on the weekend.\n\nMy apologies, Robert. It seems that I have been too much hasty\nthen. There are so many things going around lately, it is hard to\nkeep track of everything...\n--\nMichael", "msg_date": "Tue, 12 Mar 2019 11:23:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." }, { "msg_contents": "On Mon, Mar 11, 2019 at 10:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Mar 11, 2019 at 03:30:22PM -0400, Robert Haas wrote:\n> > Sorry that I didn't get to this before you did -- I was on PTO on\n> > Friday and did not work on the weekend.\n>\n> My apologies, Robert. It seems that I have been too much hasty\n> then. There are so many things going around lately, it is hard to\n> keep track of everything...\n\nNo, it's fine. I just wanted to explain why I didn't take care of it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 10:29:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Removed unused variable, openLogOff." } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15672\nLogged by: Jianing Yang\nEmail address: jianingy.yang@gmail.com\nPostgreSQL version: 11.2\nOperating system: Ubuntu 18.10\nDescription: \n\nReproduce steps:\r\n\r\n1. create a partition table with the following constraints\r\n a. with a unique key on partition key and a varchar type field\r\n b. using hash partition\r\n2. alter the length of the varchar type field\r\n3. drop the partition table in a transaction\r\n4. crash\r\n\r\nScreenshot:\r\n→ pgcli -h localhost -p 5555 -U postgres -W\r\nServer: PostgreSQL 11.2\r\nVersion: 2.0.1\r\nChat: https://gitter.im/dbcli/pgcli\r\nMail: https://groups.google.com/forum/#!forum/pgcli\r\nHome: http://pgcli.com\r\n\r\npostgres@localhost:postgres> show server_version; \n \r\n+-------------------------------+\r\n| server_version |\r\n|-------------------------------|\r\n| 11.2 (Debian 11.2-1.pgdg90+1) |\r\n+-------------------------------+\r\nSHOW\r\nTime: 0.011s\r\npostgres@localhost:postgres> create table users(user_id int, name varchar\n(64), unique (user_id, name)) partition by hash(user_id); \n \r\nCREATE TABLE\r\nTime: 0.004s\r\npostgres@localhost:postgres> create table users_000 partition of users for\nvalues with (modulus 2, remainder 0); \n \r\nCREATE TABLE\r\nTime: 0.012s\r\npostgres@localhost:postgres> create table users_001 partition of users for\nvalues with (modulus 2, remainder 1); \n \r\nCREATE TABLE\r\nTime: 0.012s\r\npostgres@localhost:postgres> alter table users alter column name type\nvarchar(127); \r\nYou're about to run a destructive command.\r\nDo you want to proceed? 
(y/n): y\r\nYour call!\r\nALTER TABLE\r\nTime: 0.007s\r\npostgres@localhost:postgres> \\d users; \n \r\n+----------+------------------------+-------------+\r\n| Column | Type | Modifiers |\r\n|----------+------------------------+-------------|\r\n| user_id | integer | |\r\n| name | character varying(127) | |\r\n+----------+------------------------+-------------+\r\nIndexes:\r\n \"users_user_id_name_key\" UNIQUE CONSTRAINT, btree (user_id, name)\r\nPartition key: HASH (user_id)\r\nNumber of partitions 2: (Use \\d+ to list them.)\r\n\r\nTime: 0.012s\r\npostgres@localhost:postgres> begin; \n \r\nBEGIN\r\nTime: 0.001s\r\npostgres@localhost:postgres> drop table users; \n \r\nYou're about to run a destructive command.\r\nDo you want to proceed? (y/n): y\r\nYour call!\r\nDROP TABLE\r\nTime: 0.002s\r\npostgres@localhost:postgres> commit; \r\nConnection reset. Reconnect (Y/n):\r\n\r\n\r\nServer Log:\r\n\r\n2019-03-06 14:59:26.101 UTC [61] ERROR: SMgrRelation hashtable corrupted\r\n2019-03-06 14:59:26.101 UTC [61] STATEMENT: commit\r\n2019-03-06 14:59:26.101 UTC [61] WARNING: AbortTransaction while in COMMIT\nstate\r\n2019-03-06 14:59:26.101 UTC [61] PANIC: cannot abort transaction 573, it\nwas already committed\r\n2019-03-06 14:59:26.178 UTC [1] LOG: server process (PID 61) was terminated\nby signal 6: Aborted\r\n2019-03-06 14:59:26.178 UTC [1] DETAIL: Failed process was running:\ncommit\r\n2019-03-06 14:59:26.178 UTC [1] LOG: terminating any other active server\nprocesses\r\n2019-03-06 14:59:26.178 UTC [58] WARNING: terminating connection because of\ncrash of another server process\r\n2019-03-06 14:59:26.178 UTC [58] DETAIL: The postmaster has commanded this\nserver process to roll back the current transaction and exit, because\nanother server process exited abnormally and possibly corrupted shared\nmemory.\r\n2019-03-06 14:59:26.178 UTC [58] HINT: In a moment you should be able to\nreconnect to the database and repeat your command.\r\n2019-03-06 14:59:26.179 UTC [1] LOG: 
all server processes terminated;\nreinitializing\r\n2019-03-06 14:59:26.186 UTC [68] LOG: database system was interrupted; last\nknown up at 2019-03-06 14:58:30 UTC\r\n2019-03-06 14:59:26.212 UTC [68] LOG: database system was not properly shut\ndown; automatic recovery in progress\r\n2019-03-06 14:59:26.214 UTC [68] LOG: redo starts at 0/1650278\r\n2019-03-06 14:59:26.215 UTC [68] FATAL: SMgrRelation hashtable corrupted\r\n2019-03-06 14:59:26.215 UTC [68] CONTEXT: WAL redo at 0/16768C8 for\nTransaction/COMMIT: 2019-03-06 14:59:26.099739+00; rels: base/13067/16389\nbase/13067/16387 base/13067/16394 base/13067/16387; inval msgs: catcache 41\ncatcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7\ncatcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6\ncatcache 7 catcache 6 catcache 50 catcache 49 catcache 19 catcache 32\ncatcache 7 catcache 6 catcache 7 catcache 6 catcache 50 catcache 49 catcache\n74 catcache 73 catcache 74 catcache 73 catcache 7 catcache 6 catcache 7\ncatcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6\ncatcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache\n50 catcache 49 catcache 19 catcache 32 catcache 7 catcache 6 catcache 7\ncatcache 6 catcache 50 catcache 49 catcache 74 catcache 73 catcache 74\ncatcache 73 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache\n6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache\n7 catcache 6 catcache 7 catcache 6 catcache 50 catcache 49 catcache 19\ncatcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 50 catcache\n49 catcache 74 catcache 73 catcache 74 catcache 73 relcache 16384 snapshot\n2608 snapshot 2608 relcache 16399 relcache 16384 snapshot 2608 snapshot 2608\nsnapshot 2608 relcache 16389 relcache 16384 snapshot 2608 snapshot 2608\nrelcache 16401 relcache 16389 snapshot 2608 snapshot 2608 snapshot 2608\nrelcache 16394 relcache 16384 snapshot 2608 snapshot 2608 relcache 
16403\nrelcache 16394 snapshot 2608 snapshot 2608 snapshot 2608\r\n2019-03-06 14:59:26.216 UTC [1] LOG: startup process (PID 68) exited with\nexit code 1\r\n2019-03-06 14:59:26.216 UTC [1] LOG: aborting startup due to startup\nprocess failure\r\n2019-03-06 14:59:26.217 UTC [1] LOG: database system is shut down\r\n\r\n\r\nAffected Server Version:\r\n\r\n11.1\r\n11.2", "msg_date": "Wed, 06 Mar 2019 15:06:53 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a partition\n table" }, { "msg_contents": "On Wed, Mar 06, 2019 at 03:06:53PM +0000, PG Bug reporting form wrote:\n> 1. create a partition table with the following constraints\n> a. with a unique key on partition key and a varchar type field\n> b. using hash partition\n> 2. alter the length of the varchar type field\n> 3. drop the partition table in a transaction\n> 4. crash\n\nI can reproduce the failure easily, not on HEAD but with\nREL_11_STABLE:\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f585729b535 in __GI_abort () at abort.c:79\n#2 0x000055eef597e60a in errfinish (dummy=0) at elog.c:555\n#3 0x000055eef5980c50 in elog_finish (elevel=22, fmt=0x55eef5a41408\n\"cannot abort transaction %u, it was already committed\") at\nelog.c:1376\n#4 0x000055eef5479647 in RecordTransactionAbort (isSubXact=false) at\nxact.c:1580\n#5 0x000055eef547a6c0 in AbortTransaction () at xact.c:2602\n#6 0x000055eef547aef4 in AbortCurrentTransaction () at xact.c:3104\n\nThat's worth an investigation, SMgrRelationHash is getting messed up\nwhich causes the transaction commit to fail where it should not.\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 09:03:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Hi,\n\nOn 2019/03/07 
9:03, Michael Paquier wrote:\n> On Wed, Mar 06, 2019 at 03:06:53PM +0000, PG Bug reporting form wrote:\n>> 1. create a partition table with the following constraints\n>> a. with a unique key on partition key and a varchar type field\n>> b. using hash partition\n>> 2. alter the length of the varchar type field\n>> 3. drop the partition table in a transaction\n>> 4. crash\n> \n> I can reproduce the failure easily, not on HEAD but with\n> REL_11_STABLE:\n\nSame here. I could reproduce it with 11.0.\n\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at\n> ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f585729b535 in __GI_abort () at abort.c:79\n> #2 0x000055eef597e60a in errfinish (dummy=0) at elog.c:555\n> #3 0x000055eef5980c50 in elog_finish (elevel=22, fmt=0x55eef5a41408\n> \"cannot abort transaction %u, it was already committed\") at\n> elog.c:1376\n> #4 0x000055eef5479647 in RecordTransactionAbort (isSubXact=false) at\n> xact.c:1580\n> #5 0x000055eef547a6c0 in AbortTransaction () at xact.c:2602\n> #6 0x000055eef547aef4 in AbortCurrentTransaction () at xact.c:3104\n> \n> That's worth an investigation, SMgrRelationHash is getting messed up\n> which causes the transaction commit to fail where it should not.\n\nLooking at what was causing the SMgrRelationHash corruption, it seems\nthere were entries with the same/duplicated relfilenode value in the\npendingDeletes list.\n\nThe problem starts when ALTER TABLE users ALTER COLUMN is executed.\ncreate table users(user_id int, name varchar(64), unique (user_id, name))\npartition by list(user_id);\n\ncreate table users_000 partition of users for values in(0);\ncreate table users_001 partition of users for values in(1);\nselect relname, relfilenode from pg_class where relname like 'users%';\n relname │ relfilenode\n────────────────────────────┼─────────────\n users │ 16441\n users_000 │ 16446\n users_000_user_id_name_key │ 16449\n users_001 │ 16451\n users_001_user_id_name_key │ 16454\n users_user_id_name_key │ 16444\n(6 rows)\n\nalter 
table users alter column name type varchar(127);\nselect relname, relfilenode from pg_class where relname like 'users%';\n relname │ relfilenode\n────────────────────────────┼─────────────\n users │ 16441\n users_000 │ 16446\n users_000_user_id_name_key │ 16444 <=== duplicated\n users_001 │ 16451\n users_001_user_id_name_key │ 16444 <=== duplicated\n users_user_id_name_key │ 16444 <=== duplicated\n(6 rows)\n\nRan out of time...\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 7 Mar 2019 11:17:11 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On Thu, Mar 7, 2019 at 11:17 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> The problem start when ALTER TABLE users ALTER COLUMN is executed.\n> create table users(user_id int, name varchar(64), unique (user_id, name))\n> partition by list(user_id);\n>\n> create table users_000 partition of users for values in(0);\n> create table users_001 partition of users for values in(1);\n> select relname, relfilenode from pg_class where relname like 'users%';\n> relname │ relfilenode\n> ────────────────────────────┼─────────────\n> users │ 16441\n> users_000 │ 16446\n> users_000_user_id_name_key │ 16449\n> users_001 │ 16451\n> users_001_user_id_name_key │ 16454\n> users_user_id_name_key │ 16444\n> (6 rows)\n>\n> alter table users alter column name type varchar(127);\n> select relname, relfilenode from pg_class where relname like 'users%';\n> relname │ relfilenode\n> ────────────────────────────┼─────────────\n> users │ 16441\n> users_000 │ 16446\n> users_000_user_id_name_key │ 16444 <=== duplicated\n> users_001 │ 16451\n> users_001_user_id_name_key │ 16444 <=== duplicated\n> users_user_id_name_key │ 16444 <=== duplicated\n> (6 rows)\n\nI checked why users_000's and users_001's indexes end up reusing\nusers_user_id_name_key's relfilenode. 
At the surface, it's because\nDefineIndex(<parent's-index-to-be-recreated>) is carrying oldNode =\n<old-parents-index's-relfilenode> in IndexStmt, which is recursively\npassed down to DefineIndex(<child-indexes-to-be-recreated>). This\nDefineIndex() chain is running due to ATPostAlterTypeCleanup() on the\nparent rel. This surface problem may be solved in DefineIndex() by\njust resetting oldNode in each child IndexStmt before recursing, but\nthat means child indexes are recreated with new relfilenodes. That\nsolves the immediate problem of relfilenodes being wrongly duplicated,\nthat's leading to madness such as SMgrRelationHash corruption being\nseen in the original bug report.\n\nBut, the root problem seems to be that ATPostAlterTypeCleanup() on\nchild tables isn't setting up their own\nDefineIndex(<child-index-to-be-rewritten>) step. That's because the\nparent's ATPostAlterTypeCleanup() dropped child copies of the UNIQUE\nconstraint due to dependencies (+ CCI). So, ATExecAlterColumnType()\non child relations isn't able to find the constraint on the individual\nchild relations to turn into their own\nDefineIndex(<child-index-to-be-rewritten>). If we manage to handle\neach relation's ATPostAlterTypeCleanup() independently, child's\nrecreated indexes will be able to reuse their old relfilenodes and\neverything will be fine. 
But maybe that will require significant\noverhaul of how this post-alter-type-cleanup occurs?\n\nThanks,\nAmit\n\n", "msg_date": "Thu, 7 Mar 2019 20:36:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/03/07 20:36, Amit Langote wrote:\n> On Thu, Mar 7, 2019 at 11:17 AM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> The problem start when ALTER TABLE users ALTER COLUMN is executed.\n>> create table users(user_id int, name varchar(64), unique (user_id, name))\n>> partition by list(user_id);\n>>\n>> create table users_000 partition of users for values in(0);\n>> create table users_001 partition of users for values in(1);\n>> select relname, relfilenode from pg_class where relname like 'users%';\n>> relname │ relfilenode\n>> ────────────────────────────┼─────────────\n>> users │ 16441\n>> users_000 │ 16446\n>> users_000_user_id_name_key │ 16449\n>> users_001 │ 16451\n>> users_001_user_id_name_key │ 16454\n>> users_user_id_name_key │ 16444\n>> (6 rows)\n>>\n>> alter table users alter column name type varchar(127);\n>> select relname, relfilenode from pg_class where relname like 'users%';\n>> relname │ relfilenode\n>> ────────────────────────────┼─────────────\n>> users │ 16441\n>> users_000 │ 16446\n>> users_000_user_id_name_key │ 16444 <=== duplicated\n>> users_001 │ 16451\n>> users_001_user_id_name_key │ 16444 <=== duplicated\n>> users_user_id_name_key │ 16444 <=== duplicated\n>> (6 rows)\n> \n> I checked why users_000's and user_0001's indexes end up reusing\n> users_user_id_name_key's relfilenode. At the surface, it's because\n> DefineIndex(<parent's-index-to-be-recreated>) is carrying oldNode =\n> <old-parents-index's-relfilenode> in IndexStmt, which is recursively\n> passed down to DefineIndex(<child-indexes-to-be-recreated>). 
This\n> DefineIndex() chain is running due to ATPostAlterTypeCleanup() on the\n> parent rel. This surface problem may be solved in DefineIndex() by\n> just resetting oldNode in each child IndexStmt before recursing, but\n> that means child indexes are recreated with new relfilenodes. That\n> solves the immediate problem of relfilenodes being wrongly duplicated,\n> that's leading to madness such as SMgrRelationHash corruption being\n> seen in the original bug report.\n\nThis doesn't happen in HEAD, because in HEAD we got 807ae415c5, which\nchanged things so that partitioned relations always have their relfilenode\nset to 0. So, there's no question of parent's relfilenode being passed to\nchildren and hence being duplicated.\n\nBut that also means child indexes are unnecessarily rewritten, that is,\nwith new relfilenodes.\n\n> But, the root problem seems to be that ATPostAlterTypeCleanup() on\n> child tables isn't setting up their own\n> DefineIndex(<child-index-to-be-rewritten>) step. That's because the\n> parent's ATPostAlterTypeCleanup() dropped child copies of the UNIQUE\n> constraint due to dependencies (+ CCI). So, ATExecAlterColumnType()\n> on child relations isn't able to find the constraint on the individual\n> child relations to turn into their own\n> DefineIndex(<child-index-to-be-rewritten>). If we manage to handle\n> each relation's ATPostAlterTypeCleanup() independently, child's\n> recreated indexes will be able to reuse their old relfilenodes and\n> everything will be fine. But maybe that will require significant\n> overhaul of how this post-alter-type-cleanup occurs?\n\nWe could try to solve this by dividing ATPostAlterTypeCleanup processing\ninto two functions:\n\n1. The first one runs right after ATExecAlterColumnType() is finished for a\ngiven table (like it does today), which then runs ATPostAlterTypeParse to\ngenerate commands for constraints and/or indexes to re-add. 
This part\nwon't drop the old constraints/indexes just yet, so child\nconstraints/indexes will remain for ATExecAlterColumnType to see when\nexecuted for the children.\n\n2. Dropping the old constraints/indexes is the responsibility of the 2nd\npart, which runs just before executing ATExecReAddIndex or\nATExecAddConstraint (with is_readd = true), so that the new constraints\ndon't collide with the existing ones.\n\nThis arrangement allows step 1 to generate the commands to recreate even\nthe child indexes such that the old relfilenode can be be preserved by\nsetting IndexStmt.oldNode.\n\nAttached patch is a very rough sketch, which fails some regression tests,\nbut I ran out of time today.\n\nThanks,\nAmit", "msg_date": "Fri, 8 Mar 2019 19:22:46 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/03/08 19:22, Amit Langote wrote:\n> On 2019/03/07 20:36, Amit Langote wrote:\n>> On Thu, Mar 7, 2019 at 11:17 AM Amit Langote\n>> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>> The problem start when ALTER TABLE users ALTER COLUMN is executed.\n>>> create table users(user_id int, name varchar(64), unique (user_id, name))\n>>> partition by list(user_id);\n>>>\n>>> create table users_000 partition of users for values in(0);\n>>> create table users_001 partition of users for values in(1);\n>>> select relname, relfilenode from pg_class where relname like 'users%';\n>>> relname │ relfilenode\n>>> ────────────────────────────┼─────────────\n>>> users │ 16441\n>>> users_000 │ 16446\n>>> users_000_user_id_name_key │ 16449\n>>> users_001 │ 16451\n>>> users_001_user_id_name_key │ 16454\n>>> users_user_id_name_key │ 16444\n>>> (6 rows)\n>>>\n>>> alter table users alter column name type varchar(127);\n>>> select relname, relfilenode from pg_class where relname like 'users%';\n>>> relname │ relfilenode\n>>> 
────────────────────────────┼─────────────\n>>> users │ 16441\n>>> users_000 │ 16446\n>>> users_000_user_id_name_key │ 16444 <=== duplicated\n>>> users_001 │ 16451\n>>> users_001_user_id_name_key │ 16444 <=== duplicated\n>>> users_user_id_name_key │ 16444 <=== duplicated\n>>> (6 rows)\n>>\n>> I checked why users_000's and user_0001's indexes end up reusing\n>> users_user_id_name_key's relfilenode. At the surface, it's because\n>> DefineIndex(<parent's-index-to-be-recreated>) is carrying oldNode =\n>> <old-parents-index's-relfilenode> in IndexStmt, which is recursively\n>> passed down to DefineIndex(<child-indexes-to-be-recreated>). This\n>> DefineIndex() chain is running due to ATPostAlterTypeCleanup() on the\n>> parent rel. This surface problem may be solved in DefineIndex() by\n>> just resetting oldNode in each child IndexStmt before recursing, but\n>> that means child indexes are recreated with new relfilenodes. That\n>> solves the immediate problem of relfilenodes being wrongly duplicated,\n>> that's leading to madness such as SMgrRelationHash corruption being\n>> seen in the original bug report.\n> \n> This doesn't happen in HEAD, because in HEAD we got 807ae415c5, which\n> changed things so that partitioned relations always have their relfilenode\n> set to 0. So, there's no question of parent's relfilenode being passed to\n> children and hence being duplicated.\n> \n> But that also means child indexes are unnecessarily rewritten, that is,\n> with new relfilenodes.\n> \n>> But, the root problem seems to be that ATPostAlterTypeCleanup() on\n>> child tables isn't setting up their own\n>> DefineIndex(<child-index-to-be-rewritten>) step. That's because the\n>> parent's ATPostAlterTypeCleanup() dropped child copies of the UNIQUE\n>> constraint due to dependencies (+ CCI). So, ATExecAlterColumnType()\n>> on child relations isn't able to find the constraint on the individual\n>> child relations to turn into their own\n>> DefineIndex(<child-index-to-be-rewritten>). 
If we manage to handle\n>> each relation's ATPostAlterTypeCleanup() independently, child's\n>> recreated indexes will be able to reuse their old relfilenodes and\n>> everything will be fine. But maybe that will require significant\n>> overhaul of how this post-alter-type-cleanup occurs?\n> \n> We could try to solve this dividing ATPostAlterTypeCleanup processing into\n> two functions:\n> \n> 1 The first one runs right after ATExecAlterColumnType() is finished for a\n> given table (like it does today), which then runs ATPostAlterTypeParse to\n> generate commands for constraints and/or indexes to re-add. This part\n> won't drop the old constraints/indexes just yet, so child\n> constraints/indexes will remain for ATExecAlterColumnType to see when\n> executed for the children.\n> \n> 2. Dropping the old constraints/indexes is the responsibility of the 2nd\n> part, which runs just before executing ATExecReAddIndex or\n> ATExecAddConstraint (with is_readd = true), so that the new constraints\n> don't collide with the existing ones.\n> \n> This arrangement allows step 1 to generate the commands to recreate even\n> the child indexes such that the old relfilenode can be be preserved by\n> setting IndexStmt.oldNode.\n> \n> Attached patch is a very rough sketch, which fails some regression tests,\n> but I ran out of time today.\n\nI'm thinking of adding this to open items under Older Bugs. 
Attached the\npatch that I had posted on -bugs, but it's only a rough sketch as\ndescribed above, not a full fix.\n\nLink to the original bug report:\nhttps://www.postgresql.org/message-id/flat/15672-b9fa7db32698269f%40postgresql.org\n\nThanks,\nAmit", "msg_date": "Wed, 27 Mar 2019 11:40:12 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On Wed, Mar 27, 2019 at 11:40:12AM +0900, Amit Langote wrote:\n> I'm thinking of adding this to open items under Older Bugs. Attached the\n> patch that I had posted on -bugs, but it's only a rough sketch as\n> described above, not a full fix.\n\nAdding it to the section for older bugs sounds fine to me. Thanks for\ndoing so.\n--\nMichael", "msg_date": "Wed, 27 Mar 2019 11:56:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On Wed, Mar 27, 2019 at 11:56:20AM +0900, Michael Paquier wrote:\n> Adding it to the section for older bugs sounds fine to me. Thanks for\n> doing so.\n\nI have begun looking at this issue. Hopefully I'll be able to provide\nan update soon.\n--\nMichael", "msg_date": "Thu, 11 Apr 2019 15:57:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/04/11 15:57, Michael Paquier wrote:\n> On Wed, Mar 27, 2019 at 11:56:20AM +0900, Michael Paquier wrote:\n>> Adding it to the section for older bugs sounds fine to me. Thanks for\n>> doing so.\n> \n> I have begun looking at this issue. 
Hopefully I'll be able to provide\n> an update soon.\n\nGreat, thanks.\n\nRegards,\nAmit\n\n\n\n\n", "msg_date": "Thu, 11 Apr 2019 15:58:55 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "ISTM we could work around the problem with the attached, which I think\nis a good change independently of anything else.\n\nThere is still an issue, which manifests in both 11 and HEAD, namely\nthat the code also propagates the parent index's comment to any child\nindexes. You can see that with this extended test case:\n\ncreate table users(user_id int, name varchar(64), unique (user_id, name)) partition by hash(user_id); \ncomment on index users_user_id_name_key is 'parent index';\ncreate table users_000 partition of users for values with (modulus 2, remainder 0); \ncreate table users_001 partition of users for values with (modulus 2, remainder 1); \n\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class where relname like 'users%';\nalter table users alter column name type varchar(127); \nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class where relname like 'users%';\n\nwhich gives me (in 11, with this patch)\n\n...\n relname | relfilenode | obj_description \n----------------------------+-------------+-----------------\n users | 89389 | \n users_000 | 89394 | \n users_000_user_id_name_key | 89397 | \n users_001 | 89399 | \n users_001_user_id_name_key | 89402 | \n users_user_id_name_key | 89392 | parent index\n(6 rows)\n\nALTER TABLE\n relname | relfilenode | obj_description \n----------------------------+-------------+-----------------\n users | 89389 | \n users_000 | 89394 | \n users_000_user_id_name_key | 89406 | parent index\n users_001 | 89399 | \n users_001_user_id_name_key | 89408 | parent index\n users_user_id_name_key | 89404 | parent index\n(6 rows)\n\nHowever, I 
doubt that that's bad enough to justify a major rewrite\nof the ALTER TABLE code in 11 ... and maybe not in HEAD either;\nI wouldn't be too unhappy to leave it to v13.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 23 Apr 2019 18:03:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Thanks for looking at this.\n\nOn 2019/04/24 7:03, Tom Lane wrote:\n> ISTM we could work around the problem with the attached, which I think\n> is a good change independently of anything else.\n\nAgreed.\n\n> There is still an issue, which manifests in both 11 and HEAD, namely\n> that the code also propagates the parent index's comment to any child\n> indexes. You can see that with this extended test case:\n> \n> create table users(user_id int, name varchar(64), unique (user_id, name)) partition by hash(user_id); \n> comment on index users_user_id_name_key is 'parent index';\n> create table users_000 partition of users for values with (modulus 2, remainder 0); \n> create table users_001 partition of users for values with (modulus 2, remainder 1); \n> \n> select relname, relfilenode, obj_description(oid,'pg_class') from pg_class where relname like 'users%';\n> alter table users alter column name type varchar(127); \n> select relname, relfilenode, obj_description(oid,'pg_class') from pg_class where relname like 'users%';\n> \n> which gives me (in 11, with this patch)\n> \n> ...\n> relname | relfilenode | obj_description \n> ----------------------------+-------------+-----------------\n> users | 89389 | \n> users_000 | 89394 | \n> users_000_user_id_name_key | 89397 | \n> users_001 | 89399 | \n> users_001_user_id_name_key | 89402 | \n> users_user_id_name_key | 89392 | parent index\n> (6 rows)\n> \n> ALTER TABLE\n> relname | relfilenode | obj_description \n> ----------------------------+-------------+-----------------\n> users | 89389 | \n> 
users_000 | 89394 | \n> users_000_user_id_name_key | 89406 | parent index\n> users_001 | 89399 | \n> users_001_user_id_name_key | 89408 | parent index\n> users_user_id_name_key | 89404 | parent index\n> (6 rows)\n\nThis may be seen as slightly worse if the child indexes had their own\ncomments, which would get overwritten by the parent's.\n\ncreate table pp (a int, b text, unique (a, b)) partition by list (a);\ncreate table pp1 partition of pp for values in (1);\ncreate table pp2 partition of pp for values in (2);\ncomment on index pp_a_b_key is 'parent index';\ncomment on index pp1_a_b_key is 'child index 1';\ncomment on index pp2_a_b_key is 'child index 2';\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class\nwhere relname like 'pp%';\n relname │ relfilenode │ obj_description\n─────────────┼─────────────┼─────────────────\n pp │ 16420 │\n pp1 │ 16425 │\n pp1_a_b_key │ 16428 │ child index 1\n pp2 │ 16433 │\n pp2_a_b_key │ 16436 │ child index 2\n pp_a_b_key │ 16423 │ parent index\n(6 rows)\n\nalter table pp alter b type varchar(128);\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class\nwhere relname like 'pp%';\n relname │ relfilenode │ obj_description\n─────────────┼─────────────┼─────────────────\n pp │ 16420 │\n pp1 │ 16447 │\n pp1_a_b_key │ 16450 │ parent index\n pp2 │ 16451 │\n pp2_a_b_key │ 16454 │ parent index\n pp_a_b_key │ 16441 │ parent index\n(6 rows)\n\n> However, I doubt that that's bad enough to justify a major rewrite\n> of the ALTER TABLE code in 11 ... 
and maybe not in HEAD either;\n> I wouldn't be too unhappy to leave it to v13.\n\nYeah, it's probably a decent amount of code churn to undertake as a bug-fix.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Wed, 24 Apr 2019 10:30:55 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/24 7:03, Tom Lane wrote:\n>> ISTM we could work around the problem with the attached, which I think\n>> is a good change independently of anything else.\n\n> Agreed.\n\nAfter thinking about it more, it seems like a bad idea to put the check\nin CheckIndexCompatible; that could interfere with any other use of the\nfunction to match up potential child indexes. (I wonder why this test\nisn't the same as what DefineIndex uses to spot potential child indexes,\nBTW --- it uses a completely separate function CompareIndexInfo, which\nseems both wasteful and trouble waiting to happen.)\n\nSo I think we should test the relkind in TryReuseIndex, instead.\nI think it would be a good idea to *also* do what you suggested\nupthread and have DefineIndex clear the field when cloning an\nIndexStmt to create a child; no good could possibly come of\npassing that down when we intend to create a new index.\n\nIn short, I think the right short-term fix is as attached (plus\na new regression test case based on the submitted example).\n\nLonger term, it's clearly bad that we fail to reuse child indexes\nin this scenario; the point about mangled comments is minor by\ncomparison. I'm inclined to think that what we want to do is\n*not* recurse when creating the parent index, but to initially\nmake it NOT VALID, and then do ALTER ATTACH PARTITION with each\nre-used child index. 
This would successfully reproduce the\nprevious state in case only some of the partitions have attached\nindexes, which I don't think works correctly right now.\n\nBTW, I hadn't ever looked closely at what the index reuse code\ndoes, and now that I have, my heart just sinks. I think that\nlogic needs to be nuked from orbit. RelationPreserveStorage was\nnever meant to be abused in this way; I invite you to contemplate\nwhether it's not a problem that it doesn't check the backend and\nnestLevel fields of PendingRelDelete entries before killing them.\n(In its originally-designed use for mapped rels at transaction end,\nthat wasn't a problem, but I'm afraid that it may be one now.)\n\nThe right way to do this IMO would be something closer to the\nheap-swap logic in cluster.c, where we exchange the relfilenodes\nof two live indexes, rather than what is happening now. Or for\nthat matter, do we really need to delete the old indexes at all?\n\nNone of that looks very practical for v12, let alone back-patching\nto v11, though.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 24 Apr 2019 19:27:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/04/25 8:27, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/04/24 7:03, Tom Lane wrote:\n>>> ISTM we could work around the problem with the attached, which I think\n>>> is a good change independently of anything else.\n> \n>> Agreed.\n> \n> After thinking about it more, it seems like a bad idea to put the check\n> in CheckIndexCompatible; that could interfere with any other use of the\n> function to match up potential child indexes.\n\nCheckIndexCompatible() seems to be invented solely for the purposes of\npost-ALTER-COLUMN-TYPE, which is what I get from the header comment of\nthat function:\n\n * This is tailored to the needs of ALTER TABLE ALTER 
TYPE, which recreates\n * any indexes that depended on a changing column from their pg_get_indexdef\n * or pg_get_constraintdef definitions. We omit some of the sanity checks of\n * DefineIndex. We assume that the old and new indexes have the same number\n * of columns and that if one has an expression column or predicate, both do.\n * Errors arising from the attribute list still apply.\n\nGiven that, it may have been better to keep the function local to\ntablecmds.c to avoid potential misuse; I see no other callers of this\nfunction besides TryReuseIndex(), which in turn is only used by\npost-ALTER-COLUMN-TYPE processing.\n\n> (I wonder why this test\n> isn't the same as what DefineIndex uses to spot potential child indexes,\n> BTW --- it uses a completely separate function CompareIndexInfo, which\n> seems both wasteful and trouble waiting to happen.)\n\nI don't think that CheckIndexCompatible's job is to spot child indexes\ncompatible with a given parent index; that's CompareIndexInfo's job. The\nlatter was invented for the partition index DDL code, complete with\nprovisions to do mapping between parent and child attributes before\nmatching if their TupleDescs are different.\n\nCheckIndexCompatible, as mentioned above, is to do the errands of\nATPostAlterTypeParse().\n\n/*\n * CheckIndexCompatible\n * Determine whether an existing index definition is compatible with a\n * prospective index definition, such that the existing index storage\n * could become the storage of the new index, avoiding a rebuild.\n\nSo, maybe a bigger (different?) charter than CompareIndexInfo's, or so I\nthink. Although, they may be able to share the code.\n\n> So I think we should test the relkind in TryReuseIndex, instead.\n\nWhich would work too. 
Or we could just not call TryReuseIndex() if the\nindex in question is partitioned.\n\n> I think it would be a good idea to *also* do what you suggested\n> upthread and have DefineIndex clear the field when cloning an\n> IndexStmt to create a child; no good could possibly come of\n> passing that down when we intend to create a new index.\n\nNote that oldNode is only set by TryReuseIndex() today. Being careful in\nDefineIndex might be a good idea anyway.\n\n> In short, I think the right short-term fix is as attached (plus\n> a new regression test case based on the submitted example).\n\nSounds good.\n\n> Longer term, it's clearly bad that we fail to reuse child indexes\n> in this scenario; the point about mangled comments is minor by\n> comparison.\n\nAgreed.\n\n> I'm inclined to think that what we want to do is\n> *not* recurse when creating the parent index, but to initially\n> make it NOT VALID, and then do ALTER ATTACH PARTITION with each\n> re-used child index. This would successfully reproduce the\n> previous state in case only some of the partitions have attached\n> indexes, which I don't think works correctly right now.\n\nWell, an index in the parent must be present in all partitions, so I don't\nunderstand which case you're thinking of.\n\n> BTW, I hadn't ever looked closely at what the index reuse code\n> does, and now that I have, my heart just sinks. I think that\n> logic needs to be nuked from orbit. 
RelationPreserveStorage was\n> never meant to be abused in this way; I invite you to contemplate\n> whether it's not a problem that it doesn't check the backend and\n> nestLevel fields of PendingRelDelete entries before killing them.\n> (In its originally-designed use for mapped rels at transaction end,\n> that wasn't a problem, but I'm afraid that it may be one now.)\n> \n> The right way to do this IMO would be something closer to the\n> heap-swap logic in cluster.c, where we exchange the relfilenodes\n> of two live indexes, rather than what is happening now. Or for\n> that matter, do we really need to delete the old indexes at all?\n\nYeah, we wouldn't need TryReuseIndex and subsequent\nRelationPreserveStorage if we hadn't dropped the old indexes to begin\nwith. Instead, in ATPostAlterTypeParse, check if the index after ALTER\nTYPE is still incompatible (CheckIndexCompatible) and if it is, don't add\na new AT_ReAddIndex command. If it's not, *then* drop the index, and\nrecreate the index from scratch using an IndexStmt generated from the old\nindex definition. I guess We can get rid of IndexStmt.oldNode too.\n\n> None of that looks very practical for v12, let alone back-patching\n> to v11, though.\n\nYes.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Thu, 25 Apr 2019 11:21:41 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/04/25 11:21, Amit Langote wrote:\n> On 2019/04/25 8:27, Tom Lane wrote:\n>> BTW, I hadn't ever looked closely at what the index reuse code\n>> does, and now that I have, my heart just sinks. I think that\n>> logic needs to be nuked from orbit. 
RelationPreserveStorage was\n>> never meant to be abused in this way; I invite you to contemplate\n>> whether it's not a problem that it doesn't check the backend and\n>> nestLevel fields of PendingRelDelete entries before killing them.\n>> (In its originally-designed use for mapped rels at transaction end,\n>> that wasn't a problem, but I'm afraid that it may be one now.)\n>>\n>> The right way to do this IMO would be something closer to the\n>> heap-swap logic in cluster.c, where we exchange the relfilenodes\n>> of two live indexes, rather than what is happening now. Or for\n>> that matter, do we really need to delete the old indexes at all?\n> \n> Yeah, we wouldn't need TryReuseIndex and subsequent\n> RelationPreserveStorage if we hadn't dropped the old indexes to begin\n> with. Instead, in ATPostAlterTypeParse, check if the index after ALTER\n> TYPE is still incompatible (CheckIndexCompatible) and if it is, don't add\n> a new AT_ReAddIndex command. If it's not, *then* drop the index, and\n> recreate the index from scratch using an IndexStmt generated from the old\n> index definition. I guess We can get rid of IndexStmt.oldNode too.\n\nThinking on this more and growing confident that we could indeed avoid\ndrop index + recreate-it-while-preserving-storage, instead by just not\ndoing anything when CheckIndexCompatible says the old index will be fine\ndespite ALTER TYPE, but only if the table is not rewritten. I gave this a\ntry and came up with the attached patch. It fixes the bug related to\npartitioned indexes (the originally reported one) and then some.\n\nBasically, I aimed to rewrite the code in ATPostAlterTypeCleanup and\nATPostAlterTypeParse such that we no longer have to rely on an\nimplementation based on setting \"oldNode\" to preserve old indexes. 
With\nthe attached, for the cases in which the table won't be rewritten and\nhence the indexes not rebuilt, ATPostAlterTypeParse() simply won't queue a\nAT_ReAddIndex command to rebuild index while preserving the storage. That\nmeans both ATAddIndex() and DefineIndex can be freed of the duty of\nlooking out for the \"oldNode\" case, because that case no longer exists.\n\nAnother main change is that inherited (!conislocal) constraints are now\nrecognized by ATPostAlterTypeParse directly, instead of\nATPostAlterTypeCleanup checking for them and skipping\nATPostAlterTypeParse() as a whole for such constraints. For one, I had to\nmake that change to make the above-described approach work. Also, doing\nthat allowed to fix another bug whereby the comments of child constraints\nwould go away when they're reconstructed. Notice what happens on\nun-patched PG 11:\n\ncreate table pp (a int, b text, unique (a, b), c varchar(64)) partition by\nlist (a);\ncreate table pp1 partition of pp for values in (1);\ncreate table pp2 partition of pp for values in (2);\nalter table pp add constraint c_chk check (c <> '');\ncomment on constraint c_chk ON pp is 'parent check constraint';\ncomment on constraint c_chk ON pp1 is 'child check constraint 1';\ncomment on constraint c_chk ON pp2 is 'child check constraint 2';\nselect conname, obj_description(oid, 'pg_constraint') from pg_constraint\nwhere conname = 'c_chk';\n conname │ obj_description\n─────────┼──────────────────────────\n c_chk │ parent check constraint\n c_chk │ child check constraint 1\n c_chk │ child check constraint 2\n(3 rows)\n\nalter table pp alter c type varchar(64);\n\nselect conname, obj_description(oid, 'pg_constraint') from pg_constraint\nwhere conname = 'c_chk';\n conname │ obj_description\n─────────┼─────────────────────────\n c_chk │ parent check constraint\n c_chk │\n c_chk │\n(3 rows)\n\nThe patch fixes that with some surgery of RebuildConstraintComment\ncombined with aforementioned changes. 
With the patch:\n\nalter table pp alter c type varchar(64);\n\nselect conname, obj_description(oid, 'pg_constraint') from pg_constraint\nwhere conname = 'c_chk';\n conname │ obj_description\n─────────┼──────────────────────────\n c_chk │ parent check constraint\n c_chk │ child check constraint 1\n c_chk │ child check constraint 2\n(3 rows)\n\nalter table pp alter c type varchar(128);\n\nselect conname, obj_description(oid, 'pg_constraint') from pg_constraint\nwhere conname = 'c_chk';\n conname │ obj_description\n─────────┼──────────────────────────\n c_chk │ parent check constraint\n c_chk │ child check constraint 1\n c_chk │ child check constraint 2\n(3 rows)\n\nAlso for index comments, but only in the case when indexes are not rebuilt.\n\ncomment on index pp_a_b_key is 'parent index';\ncomment on index pp1_a_b_key is 'child index 1';\ncomment on index pp2_a_b_key is 'child index 2';\n\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class\nwhere relname like 'pp%key';\n relname │ relfilenode │ obj_description\n─────────────┼─────────────┼─────────────────\n pp1_a_b_key │ 17280 │ child index 1\n pp2_a_b_key │ 17284 │ child index 2\n pp_a_b_key │ 17271 │ parent index\n(3 rows)\n\n-- no rewrite, indexes untouched, comments preserved\nalter table pp alter b type varchar(128);\n\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class\nwhere relname like 'pp%key';\n relname │ relfilenode │ obj_description\n─────────────┼─────────────┼─────────────────\n pp1_a_b_key │ 17280 │ child index 1\n pp2_a_b_key │ 17284 │ child index 2\n pp_a_b_key │ 17271 │ parent index\n(3 rows)\n\n-- table rewritten, indexes rebuild, child indexes' comments gone\nalter table pp alter b type varchar(64);\n\nselect relname, relfilenode, obj_description(oid,'pg_class') from pg_class\nwhere relname like 'pp%key';\n relname │ relfilenode │ obj_description\n─────────────┼─────────────┼─────────────────\n pp1_a_b_key │ 17294 │\n pp2_a_b_key │ 17298 │\n 
pp_a_b_key │ 17285 │ parent index\n(3 rows)\n\n\nI've also added tests for both the originally reported bug and the comment\nones.\n\nThe patch applies to PG 11.\n\nThanks,\nAmit", "msg_date": "Thu, 25 Apr 2019 19:02:12 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Haven't read the patch, but I tried applying it on top of my tablespace\nfixing patch ... and my first report is that this query in regress fails\n(three times):\n\n select conname, obj_description(oid, 'pg_constraint') from pg_constraint where conname = 'c_chk' order by 1, 2;\n conname | obj_description \n ---------+---------------------------------------\n+ c_chk | alttype_cleanup_idx check constraint\n c_chk | alttype_cleanup_idx1 check constraint\n c_chk | alttype_cleanup_idx2 check constraint\n- c_chk | alttype_cleanup_idx check constraint\n (3 rows)\n\nI think you should use 'ORDER BY 2 COLLATE \"C\"' to avoid the problem.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Apr 2019 09:46:23 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On Thu, Apr 25, 2019 at 10:46 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Haven't read the patch, but I tried applying it on top of my tablespace\n> fixing patch ... 
and my first report is that this query in regress fails\n> (three times):\n>\n> select conname, obj_description(oid, 'pg_constraint') from pg_constraint where conname = 'c_chk' order by 1, 2;\n> conname | obj_description\n> ---------+---------------------------------------\n> + c_chk | alttype_cleanup_idx check constraint\n> c_chk | alttype_cleanup_idx1 check constraint\n> c_chk | alttype_cleanup_idx2 check constraint\n> - c_chk | alttype_cleanup_idx check constraint\n> (3 rows)\n>\n> I think you should use 'ORDER BY 2 COLLATE \"C\"' to avoid the problem.\n\nOops, will do. Thanks.\n\nRegards,\nAmit\n\n\n", "msg_date": "Thu, 25 Apr 2019 23:45:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/04/25 19:02, Amit Langote wrote:\n> On 2019/04/25 11:21, Amit Langote wrote:\n>> On 2019/04/25 8:27, Tom Lane wrote:\n>>> BTW, I hadn't ever looked closely at what the index reuse code\n>>> does, and now that I have, my heart just sinks. I think that\n>>> logic needs to be nuked from orbit. RelationPreserveStorage was\n>>> never meant to be abused in this way; I invite you to contemplate\n>>> whether it's not a problem that it doesn't check the backend and\n>>> nestLevel fields of PendingRelDelete entries before killing them.\n>>> (In its originally-designed use for mapped rels at transaction end,\n>>> that wasn't a problem, but I'm afraid that it may be one now.)\n>>>\n>>> The right way to do this IMO would be something closer to the\n>>> heap-swap logic in cluster.c, where we exchange the relfilenodes\n>>> of two live indexes, rather than what is happening now. Or for\n>>> that matter, do we really need to delete the old indexes at all?\n>>\n>> Yeah, we wouldn't need TryReuseIndex and subsequent\n>> RelationPreserveStorage if we hadn't dropped the old indexes to begin\n>> with. 
Instead, in ATPostAlterTypeParse, check if the index after ALTER\n>> TYPE is still incompatible (CheckIndexCompatible) and if it is, don't add\n>> a new AT_ReAddIndex command. If it's not, *then* drop the index, and\n>> recreate the index from scratch using an IndexStmt generated from the old\n>> index definition. I guess We can get rid of IndexStmt.oldNode too.\n> \n> Thinking on this more and growing confident that we could indeed avoid\n> drop index + recreate-it-while-preserving-storage, instead by just not\n> doing anything when CheckIndexCompatible says the old index will be fine\n> despite ALTER TYPE, but only if the table is not rewritten. I gave this a\n> try and came up with the attached patch. It fixes the bug related to\n> partitioned indexes (the originally reported one) and then some.\n> \n> Basically, I aimed to rewrite the code in ATPostAlterTypeCleanup and\n> ATPostAlterTypeParse such that we no longer have to rely on an\n> implementation based on setting \"oldNode\" to preserve old indexes. With\n> the attached, for the cases in which the table won't be rewritten and\n> hence the indexes not rebuilt, ATPostAlterTypeParse() simply won't queue a\n> AT_ReAddIndex command to rebuild index while preserving the storage. That\n> means both ATAddIndex() and DefineIndex can be freed of the duty of\n> looking out for the \"oldNode\" case, because that case no longer exists.\n> \n> Another main change is that inherited (!conislocal) constraints are now\n> recognized by ATPostAlterTypeParse directly, instead of\n> ATPostAlterTypeCleanup checking for them and skipping\n> ATPostAlterTypeParse() as a whole for such constraints. For one, I had to\n> make that change to make the above-described approach work. Also, doing\n> that allowed to fix another bug whereby the comments of child constraints\n> would go away when they're reconstructed. 
Notice what happens on\n> un-patched PG 11:\n> \n> create table pp (a int, b text, unique (a, b), c varchar(64)) partition by\n> list (a);\n> create table pp1 partition of pp for values in (1);\n> create table pp2 partition of pp for values in (2);\n> alter table pp add constraint c_chk check (c <> '');\n> comment on constraint c_chk ON pp is 'parent check constraint';\n> comment on constraint c_chk ON pp1 is 'child check constraint 1';\n> comment on constraint c_chk ON pp2 is 'child check constraint 2';\n> select conname, obj_description(oid, 'pg_constraint') from pg_constraint\n> where conname = 'c_chk';\n> conname │ obj_description\n> ─────────┼──────────────────────────\n> c_chk │ parent check constraint\n> c_chk │ child check constraint 1\n> c_chk │ child check constraint 2\n> (3 rows)\n> \n> alter table pp alter c type varchar(64);\n> \n> select conname, obj_description(oid, 'pg_constraint') from pg_constraint\n> where conname = 'c_chk';\n> conname │ obj_description\n> ─────────┼─────────────────────────\n> c_chk │ parent check constraint\n> c_chk │\n> c_chk │\n> (3 rows)\n> \n> The patch fixes that with some surgery of RebuildConstraintComment\n> combined with aforementioned changes. 
With the patch:\n> \n> alter table pp alter c type varchar(64);\n> \n> select conname, obj_description(oid, 'pg_constraint') from pg_constraint\n> where conname = 'c_chk';\n> conname │ obj_description\n> ─────────┼──────────────────────────\n> c_chk │ parent check constraint\n> c_chk │ child check constraint 1\n> c_chk │ child check constraint 2\n> (3 rows)\n> \n> alter table pp alter c type varchar(128);\n> \n> select conname, obj_description(oid, 'pg_constraint') from pg_constraint\n> where conname = 'c_chk';\n> conname │ obj_description\n> ─────────┼──────────────────────────\n> c_chk │ parent check constraint\n> c_chk │ child check constraint 1\n> c_chk │ child check constraint 2\n> (3 rows)\n> \n> Also for index comments, but only in the case when indexes are not rebuilt.\n> \n> comment on index pp_a_b_key is 'parent index';\n> comment on index pp1_a_b_key is 'child index 1';\n> comment on index pp2_a_b_key is 'child index 2';\n> \n> select relname, relfilenode, obj_description(oid,'pg_class') from pg_class\n> where relname like 'pp%key';\n> relname │ relfilenode │ obj_description\n> ─────────────┼─────────────┼─────────────────\n> pp1_a_b_key │ 17280 │ child index 1\n> pp2_a_b_key │ 17284 │ child index 2\n> pp_a_b_key │ 17271 │ parent index\n> (3 rows)\n> \n> -- no rewrite, indexes untouched, comments preserved\n> alter table pp alter b type varchar(128);\n> \n> select relname, relfilenode, obj_description(oid,'pg_class') from pg_class\n> where relname like 'pp%key';\n> relname │ relfilenode │ obj_description\n> ─────────────┼─────────────┼─────────────────\n> pp1_a_b_key │ 17280 │ child index 1\n> pp2_a_b_key │ 17284 │ child index 2\n> pp_a_b_key │ 17271 │ parent index\n> (3 rows)\n> \n> -- table rewritten, indexes rebuild, child indexes' comments gone\n> alter table pp alter b type varchar(64);\n> \n> select relname, relfilenode, obj_description(oid,'pg_class') from pg_class\n> where relname like 'pp%key';\n> relname │ relfilenode │ obj_description\n> 
─────────────┼─────────────┼─────────────────\n> pp1_a_b_key │ 17294 │\n> pp2_a_b_key │ 17298 │\n> pp_a_b_key │ 17285 │ parent index\n> (3 rows)\n> \n> \n> I've also added tests for both the originally reported bug and the comment\n> ones.\n> \n> The patch applies to PG 11.\n\nPer Alvaro's report, regression tests added weren't portable. Fixed that\nin the attached updated patch.\n\nThanks,\nAmit", "msg_date": "Fri, 26 Apr 2019 12:52:40 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Hi,\n\nPlease trim the quoted text in your reply.\n\nOn 2019-Apr-26, Amit Langote wrote:\n\n> Per Alvaro's report, regression tests added weren't portable. Fixed that\n> in the attached updated patch.\n\nUm, this one doesn't apply because of yesterday's 87259588d0ab.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Apr 2019 11:39:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Um, this one doesn't apply because of yesterday's 87259588d0ab.\n\nBefore we spend too much time on minutiae, we should ask ourselves whether\nthis patch is even going in the right direction. I'm not sure.\n\nOne point is that if we simply adopt the old index as-is, we won't see\nany updates in its metadata. An example is that if we have an index\non a varchar(10) column, and we alter the column to varchar(12),\nthe current behavior is to generate a new index that agrees with that:\n\nregression=# create table pp(f1 varchar(10) unique);\nCREATE TABLE\nregression=# \\d pp_f1_key\n Index \"public.pp_f1_key\"\n Column | Type | Key? 
| Definition \n--------+-----------------------+------+------------\n f1 | character varying(10) | yes | f1\nunique, btree, for table \"public.pp\"\n\nregression=# alter table pp alter column f1 type varchar(12);\nALTER TABLE\nregression=# \\d pp_f1_key\n Index \"public.pp_f1_key\"\n Column | Type | Key? | Definition \n--------+-----------------------+------+------------\n f1 | character varying(12) | yes | f1\nunique, btree, for table \"public.pp\"\n\nWith this patch, I believe, the index column will still claim to be\nvarchar(10). Is that OK? It might not actually break anything\nright now, but at the very least it's likely to be confusing.\nAlso, it'd essentially render the declared types/typmods of index\ncolumns untrustworthy, which seems like something that would come\nback to bite us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2019 14:57:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "I went ahead and pushed the stopgap patches, along with regression tests\nbased on yours. The tests show the current (i.e. wrong) behavior for\nindex comment and relfilenode reuse. I think that whenever we fix that,\nwe can just adjust the expected output instead of adding more tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2019 17:21:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" }, { "msg_contents": "On 2019/04/27 3:57, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> Um, this one doesn't apply because of yesterday's 87259588d0ab.\n> \n> Before we spend too much time on minutiae, we should ask ourselves whether\n> this patch is even going in the right direction. 
I'm not sure.\n> \n> One point is that if we simply adopt the old index as-is, we won't see\n> any updates in its metadata. An example is that if we have an index\n> on a varchar(10) column, and we alter the column to varchar(12),\n> the current behavior is to generate a new index that agrees with that:\n> \n> regression=# create table pp(f1 varchar(10) unique);\n> CREATE TABLE\n> regression=# \\d pp_f1_key\n> Index \"public.pp_f1_key\"\n> Column | Type | Key? | Definition \n> --------+-----------------------+------+------------\n> f1 | character varying(10) | yes | f1\n> unique, btree, for table \"public.pp\"\n> \n> regression=# alter table pp alter column f1 type varchar(12);\n> ALTER TABLE\n> regression=# \\d pp_f1_key\n> Index \"public.pp_f1_key\"\n> Column | Type | Key? | Definition \n> --------+-----------------------+------+------------\n> f1 | character varying(12) | yes | f1\n> unique, btree, for table \"public.pp\"\n> \n> With this patch, I believe, the index column will still claim to be\n> varchar(10).\n\nYou're right, that's what happens.\n\n> Is that OK? It might not actually break anything\n> right now, but at the very least it's likely to be confusing.\n> Also, it'd essentially render the declared types/typmods of index\n> columns untrustworthy, which seems like something that would come\n> back to bite us.\n\nThat's definitely misleading.\n\nStill, I think it'd be nice if we didn't have to do full-blown\nDefineIndex() in this case if only to update the pg_attribute tuples of\nthe index relation. Maybe we could update them directly in the ALTER\nCOLUMN TYPE's code path?\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Tue, 7 May 2019 17:10:34 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: BUG #15672: PostgreSQL 11.1/11.2 crashed after dropping a\n partition table" } ]
[ { "msg_contents": "tableam: introduce table AM infrastructure.\n\nThis introduces the concept of table access methods, i.e. CREATE\n ACCESS METHOD ... TYPE TABLE and\n CREATE TABLE ... USING (storage-engine).\nNo table access functionality is delegated to table AMs as of this\ncommit, that'll be done in following commits.\n\nSubsequent commits will incrementally abstract table access\nfunctionality to be routed through table access methods. That change\nis too large to be reviewed & committed at once, so it'll be done\nincrementally.\n\nDocs will be updated at the end, as adding them incrementally would\nlikely make them less coherent, and definitely is a lot more work,\nwithout a lot of benefit.\n\nTable access methods are specified similar to index access methods,\ni.e. pg_am.amhandler returns, as INTERNAL, a pointer to a struct with\ncallbacks. In contrast to index AMs that struct needs to live as long\nas a backend, typically that's achieved by just returning a pointer to\na constant struct.\n\nPsql's \\d+ now displays a table's access method. That can be disabled\nwith HIDE_TABLEAM=true, which is mainly useful so regression tests can\nbe run against different AMs. It's quite possible that this behaviour\nstill needs to be fine tuned.\n\nFor now it's not allowed to set a table AM for a partitioned table, as\nwe've not resolved how partitions would inherit that. 
Disallowing\nallows us to introduce, if we decide that's the way forward, such a\nbehaviour without a compatibility break.\n\nCatversion bumped, to add the heap table AM and references to it.\n\nAuthor: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dimitri Golgov and others\nDiscussion:\n https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql\n https://postgr.es/m/20190107235616.6lur25ph22u5u5av@alap3.anarazel.de\n https://postgr.es/m/20190304234700.w5tmhducs5wxgzls@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/8586bf7ed8889f39a59dd99b292014b73be85342\n\nModified Files\n--------------\ndoc/src/sgml/ref/psql-ref.sgml | 11 ++\nsrc/backend/access/heap/Makefile | 2 +-\nsrc/backend/access/heap/heapam_handler.c | 44 ++++++++\nsrc/backend/access/table/Makefile | 2 +-\nsrc/backend/access/table/tableam.c | 18 +++\nsrc/backend/access/table/tableamapi.c | 173 +++++++++++++++++++++++++++++\nsrc/backend/bootstrap/bootparse.y | 2 +\nsrc/backend/catalog/genbki.pl | 4 +\nsrc/backend/catalog/heap.c | 21 ++++\nsrc/backend/catalog/index.c | 1 +\nsrc/backend/catalog/toasting.c | 1 +\nsrc/backend/commands/amcmds.c | 28 +++--\nsrc/backend/commands/cluster.c | 1 +\nsrc/backend/commands/createas.c | 1 +\nsrc/backend/commands/tablecmds.c | 40 +++++++\nsrc/backend/nodes/copyfuncs.c | 1 +\nsrc/backend/parser/gram.y | 100 ++++++++++-------\nsrc/backend/rewrite/rewriteDefine.c | 1 +\nsrc/backend/utils/adt/pseudotypes.c | 1 +\nsrc/backend/utils/cache/relcache.c | 123 +++++++++++++++++++-\nsrc/backend/utils/misc/guc.c | 12 ++\nsrc/bin/psql/describe.c | 16 ++-\nsrc/bin/psql/help.c | 2 +\nsrc/bin/psql/settings.h | 1 +\nsrc/bin/psql/startup.c | 8 ++\nsrc/include/access/tableam.h | 48 ++++++++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/heap.h | 2 +\nsrc/include/catalog/pg_am.dat | 3 +\nsrc/include/catalog/pg_am.h | 1 
+\nsrc/include/catalog/pg_class.dat | 8 +-\nsrc/include/catalog/pg_class.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 13 +++\nsrc/include/catalog/pg_type.dat | 5 +\nsrc/include/nodes/nodes.h | 1 +\nsrc/include/nodes/parsenodes.h | 1 +\nsrc/include/nodes/primnodes.h | 1 +\nsrc/include/utils/rel.h | 15 ++-\nsrc/include/utils/relcache.h | 3 +\nsrc/test/regress/expected/create_am.out | 164 +++++++++++++++++++++++++++\nsrc/test/regress/expected/opr_sanity.out | 19 +++-\nsrc/test/regress/expected/psql.out | 39 +++++++\nsrc/test/regress/expected/sanity_check.out | 7 ++\nsrc/test/regress/expected/type_sanity.out | 15 ++-\nsrc/test/regress/pg_regress_main.c | 7 +-\nsrc/test/regress/sql/create_am.sql | 116 +++++++++++++++++++\nsrc/test/regress/sql/opr_sanity.sql | 16 ++-\nsrc/test/regress/sql/psql.sql | 15 +++\nsrc/test/regress/sql/type_sanity.sql | 11 +-\nsrc/tools/pgindent/typedefs.list | 1 +\n50 files changed, 1055 insertions(+), 74 deletions(-)\n\n", "msg_date": "Wed, 06 Mar 2019 18:01:15 +0000", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pgsql: tableam: introduce table AM infrastructure." }, { "msg_contents": "On 2019-Mar-06, Andres Freund wrote:\n\n> tableam: introduce table AM infrastructure.\n\nThanks for doing this!!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 6 Mar 2019 15:03:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: tableam: introduce table AM infrastructure." 
}, { "msg_contents": "On Wed, Mar 06, 2019 at 03:03:44PM -0300, Alvaro Herrera wrote:\n> On 2019-Mar-06, Andres Freund wrote:\n>> tableam: introduce table AM infrastructure.\n> \n> Thanks for doing this!!\n\n+1.\n--\nMichael", "msg_date": "Thu, 7 Mar 2019 11:30:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: tableam: introduce table AM infrastructure." }, { "msg_contents": "On Wed, Mar 6, 2019 at 11:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> tableam: introduce table AM infrastructure.\n>\n\nThanks for this work. I noticed a few typos in this commit. The\npatch for the same is attached.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Mar 2019 08:21:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: tableam: introduce table AM infrastructure." }, { "msg_contents": "On 2019-03-11 08:21:21 +0530, Amit Kapila wrote:\n> I noticed a few typos in this commit. The patch for the same is\n> attached.\n\nMerged, thanks!\n\n", "msg_date": "Mon, 11 Mar 2019 10:03:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgsql: tableam: introduce table AM infrastructure." }, { "msg_contents": "Hi,\n\nFor the archive's sake:\n\nOn 2019-03-06 18:01:15 +0000, Andres Freund wrote:\n> tableam: introduce table AM infrastructure.\n\n> Author: Haribabu Kommi, Andres Freund, Alvaro Herrera, Dimitri Golgov and others\n\nPlease note that I completely butchered a name here. It's not Dimitri\nGolgov, it's Dmitry Dolgov.\n\nI'll take care of fixing the name in the release notes.\n\n- Andres\n\n\n", "msg_date": "Mon, 27 May 2019 09:05:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgsql: tableam: introduce table AM infrastructure." } ]
[ { "msg_contents": "Hi,\n\nAfter my tableam patch Andrew's buildfarm animal started failing in the\ncross-version upgrades:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-03-06%2019%3A32%3A24\n\nBut I actually don't think that't really fault of the tableam patch. The\nreason for the assertion is that we assert that\n\t\tAssert(relation->rd_rel->relam != InvalidOid);\nfor tables that have storage. The table in question is a toast table.\n\nThe reason that table doesn't have an AM is that it's the toast table\nfor a partitioned relation, and for toast tables we just copy the AM\nfrom the main table.\n\nThe backtrace shows:\n\n#7 0x0000558b4bdbecfc in create_toast_table (rel=0x7fdab64289e0, toastOid=0, toastIndexOid=0, reloptions=0, lockmode=8, check=false)\n at /home/andres/src/postgresql/src/backend/catalog/toasting.c:263\n263\t\ttoast_relid = heap_create_with_catalog(toast_relname,\n(gdb) p *rel->rd_rel\n$2 = {oid = 80244, relname = {\n data = \"partitioned_table\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\"}, relnamespace = 2200, reltype = 80246, reloftype = 0, relowner = 10, relam = 0, relfilenode = 0, \n reltablespace = 0, relpages = 0, reltuples = 0, relallvisible = 0, reltoastrelid = 0, relhasindex = false, relisshared = false, relpersistence = 112 'p', \n relkind = 112 'p', relnatts = 2, relchecks = 0, relhasrules = false, relhastriggers = false, relhassubclass = false, relrowsecurity = false, \n relforcerowsecurity = false, relispopulated = true, relreplident = 100 'd', relispartition = false, relrewrite = 0, relfrozenxid = 0, relminmxid = 0}\n\nthat were trying to create a toast table for a partitioned table. 
Which\nseems wrong to me, given that recent commits made partitioned tables\nhave no storage.\n\nThe reason we're creating a storage is that we're upgrading from a\nversion of PG where partitioned tables *did* have storage. And thus the\ndump looks like:\n\n-- For binary upgrade, must preserve pg_type oid\nSELECT pg_catalog.binary_upgrade_set_next_pg_type_oid('80246'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_type array oid\nSELECT pg_catalog.binary_upgrade_set_next_array_pg_type_oid('80245'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_type toast oid\nSELECT pg_catalog.binary_upgrade_set_next_toast_pg_type_oid('80248'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_class oids\nSELECT pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('80244'::pg_catalog.oid);\nSELECT pg_catalog.binary_upgrade_set_next_toast_pg_class_oid('80247'::pg_catalog.oid);\nSELECT pg_catalog.binary_upgrade_set_next_index_pg_class_oid('80249'::pg_catalog.oid);\n\nCREATE TABLE \"public\".\"partitioned_table\" (\n \"a\" integer,\n \"b\" \"text\"\n)\nPARTITION BY LIST (\"a\");\n\n\nand create_toast_table() has logic like:\n\n\t{\n\t\t/*\n\t\t * In binary-upgrade mode, create a TOAST table if and only if\n\t\t * pg_upgrade told us to (ie, a TOAST table OID has been provided).\n\t\t *\n\t\t * This indicates that the old cluster had a TOAST table for the\n\t\t * current table. We must create a TOAST table to receive the old\n\t\t * TOAST file, even if the table seems not to need one.\n\t\t *\n\t\t * Contrariwise, if the old cluster did not have a TOAST table, we\n\t\t * should be able to get along without one even if the new version's\n\t\t * needs_toast_table rules suggest we should have one. 
There is a lot\n\t\t * of daylight between where we will create a TOAST table and where\n\t\t * one is really necessary to avoid failures, so small cross-version\n\t\t * differences in the when-to-create heuristic shouldn't be a problem.\n\t\t * If we tried to create a TOAST table anyway, we would have the\n\t\t * problem that it might take up an OID that will conflict with some\n\t\t * old-cluster table we haven't seen yet.\n\t\t */\n\t\tif (!OidIsValid(binary_upgrade_next_toast_pg_class_oid) ||\n\t\t\t!OidIsValid(binary_upgrade_next_toast_pg_type_oid))\n\t\t\treturn false;\n\n\nI think we probably should have pg_dump suppress emitting information\nabout the toast table of partitioned tables?\n\nWhile I'm not hugely bothered by binary upgrade mode creating\ninconsistent states - there's plenty of ways to crash the server that\nway - it probably also would be a good idea to have heap_create()\nelog(ERROR) when accessmtd is invalid.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 6 Mar 2019 12:41:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Binary upgrade from <12 to 12 creates toast table for partitioned\n tables" }, { "msg_contents": "\nOn 3/6/19 3:41 PM, Andres Freund wrote:\n> Hi,\n>\n> After my tableam patch Andrew's buildfarm animal started failing in the\n> cross-version upgrades:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-03-06%2019%3A32%3A24\n\n\nIncidentally, I just fixed a bug that was preventing the module from\npicking up its log files. 
The latest report has them all.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 Mar 2019 16:46:07 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for partitioned\n tables" }, { "msg_contents": "Hi,\n\nOn 2019-03-06 16:46:07 -0500, Andrew Dunstan wrote:\n> \n> On 3/6/19 3:41 PM, Andres Freund wrote:\n> > Hi,\n> >\n> > After my tableam patch Andrew's buildfarm animal started failing in the\n> > cross-version upgrades:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-03-06%2019%3A32%3A24\n> \n> \n> Incidentally, I just fixed a bug that was preventing the module from\n> picking up its log files. The latest report has them all.\n\nAh, cool. I was wondering about that...\n\nOne thing I noticed is that it might need a more aggressive separator,\nthat report has:\n\n\\0\\0\\0rls_tbl_force\\0\\0\\0\\0ROW SECURITY\\0\\0\\0\\0\\0K\\0\\0\\0ALTER TABLE \"regress_rls_schema\".\"rls_tbl_force\" ENABLE ROW LEVEL SECURITY;\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0regress_rls_schema\\0\\0\\0\\0\\0\\0\\0\t\\0\\0\\0buildfarm\\0\\0\\0\\0false\\0\\0\\0\\0350\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0===================== REL_10_STABLE-pg_upgrade_dump_16384.log ==============\n\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 6 Mar 2019 13:51:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "On Wed, Mar 6, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we probably should have pg_dump suppress emitting information\n> about the toast table of partitioned tables?\n\n+1. 
That seems like the right fix.\n\n> While I'm not hugely bothered by binary upgrade mode creating\n> inconsistent states - there's plenty of ways to crash the server that\n> way - it probably also would be a good idea to have heap_create()\n> elog(ERROR) when accessmtd is invalid.\n\nNot sure about this part.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 7 Mar 2019 13:08:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for partitioned\n tables" }, { "msg_contents": "Hi,\n\nOn 2019-03-07 13:08:35 -0500, Robert Haas wrote:\n> On Wed, Mar 6, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we probably should have pg_dump suppress emitting information\n> > about the toast table of partitioned tables?\n>\n> +1. That seems like the right fix.\n\nCool. Alvaro, Kyatoro, Michael, are either of you planning to tackle\nthat? Afaict it's caused by\n\ncommit 807ae415c54628ade937cb209f0fc9913e6b0cf5\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2019-01-04 14:51:17 -0300\n\n Don't create relfilenode for relations without storage\n\n Some relation kinds had relfilenode set to some non-zero value, but\n apparently the actual files did not really exist because creation was\n prevented elsewhere. 
Get rid of the phony pg_class.relfilenode values.\n\n Catversion bumped, but only because the sanity_test check will fail if\n run in a system initdb'd with the previous version.\n\n Reviewed-by: Kyotaro HORIGUCHI, Michael Paquier\n Discussion: https://postgr.es/m/20181206215552.fm2ypuxq6nhpwjuc@alvherre.pgsql\n\n\n> > While I'm not hugely bothered by binary upgrade mode creating\n> > inconsistent states - there's plenty of ways to crash the server that\n> > way - it probably also would be a good idea to have heap_create()\n> > elog(ERROR) when accessmtd is invalid.\n>\n> Not sure about this part.\n\nAs in, we shouldn't elog out? Or we should have an ereport with a proper\nerror, or ...?\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 7 Mar 2019 10:17:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "On 2019-Mar-07, Andres Freund wrote:\n\n> Hi,\n> \n> On 2019-03-07 13:08:35 -0500, Robert Haas wrote:\n> > On Wed, Mar 6, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think we probably should have pg_dump suppress emitting information\n> > > about the toast table of partitioned tables?\n> >\n> > +1. That seems like the right fix.\n> \n> Cool. Alvaro, Kyatoro, Michael, are either of you planning to tackle\n> that? 
Afaict it's caused by\n\nI'll have a look.\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 7 Mar 2019 15:34:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "On Thu, Mar 7, 2019 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > > While I'm not hugely bothered by binary upgrade mode creating\n> > > inconsistent states - there's plenty of ways to crash the server that\n> > > way - it probably also would be a good idea to have heap_create()\n> > > elog(ERROR) when accessmtd is invalid.\n> >\n> > Not sure about this part.\n>\n> As in, we shouldn't elog out? Or we should have an ereport with a proper\n> error, or ...?\n\nAs in, I don't understand the problem well enough to have an informed opinion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 7 Mar 2019 13:36:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for partitioned\n tables" }, { "msg_contents": "On 2019-Mar-07, Robert Haas wrote:\n\n> On Wed, Mar 6, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we probably should have pg_dump suppress emitting information\n> > about the toast table of partitioned tables?\n> \n> +1. 
That seems like the right fix.\n\nThis patch fixes the upgrade problem for me.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 8 Mar 2019 20:18:27 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "Hi,\n\nOn 2019-03-08 20:18:27 -0300, Alvaro Herrera wrote:\n> On 2019-Mar-07, Robert Haas wrote:\n> \n> > On Wed, Mar 6, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think we probably should have pg_dump suppress emitting information\n> > > about the toast table of partitioned tables?\n> > \n> > +1. That seems like the right fix.\n> \n> This patch fixes the upgrade problem for me.\n\nThanks!\n\n\n> -- \n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n> diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c\n> index e962ae7e913..1de8da59361 100644\n> --- a/src/bin/pg_dump/pg_dump.c\n> +++ b/src/bin/pg_dump/pg_dump.c\n> @@ -4359,9 +4359,9 @@ binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> \t\t\t\t\t \"SELECT c.reltype AS crel, t.reltype AS trel \"\n> \t\t\t\t\t \"FROM pg_catalog.pg_class c \"\n> \t\t\t\t\t \"LEFT JOIN pg_catalog.pg_class t ON \"\n> -\t\t\t\t\t \" (c.reltoastrelid = t.oid) \"\n> +\t\t\t\t\t \" (c.reltoastrelid = t.oid AND c.relkind <> '%c') \"\n> \t\t\t\t\t \"WHERE c.oid = '%u'::pg_catalog.oid;\",\n> -\t\t\t\t\t pg_rel_oid);\n> +\t\t\t\t\t RELKIND_PARTITIONED_TABLE, pg_rel_oid);\n\nHm, I know this code isn't generally well documented, but perhaps we\ncould add a comment as to why we're excluding partitioned tables?\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 8 Mar 2019 15:20:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Binary 
upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "Hello\n\nOn 2019-Mar-08, Andres Freund wrote:\n\n> > diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c\n> > index e962ae7e913..1de8da59361 100644\n> > --- a/src/bin/pg_dump/pg_dump.c\n> > +++ b/src/bin/pg_dump/pg_dump.c\n> > @@ -4359,9 +4359,9 @@ binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> > \t\t\t\t\t \"SELECT c.reltype AS crel, t.reltype AS trel \"\n> > \t\t\t\t\t \"FROM pg_catalog.pg_class c \"\n> > \t\t\t\t\t \"LEFT JOIN pg_catalog.pg_class t ON \"\n> > -\t\t\t\t\t \" (c.reltoastrelid = t.oid) \"\n> > +\t\t\t\t\t \" (c.reltoastrelid = t.oid AND c.relkind <> '%c') \"\n> > \t\t\t\t\t \"WHERE c.oid = '%u'::pg_catalog.oid;\",\n> > -\t\t\t\t\t pg_rel_oid);\n> > +\t\t\t\t\t RELKIND_PARTITIONED_TABLE, pg_rel_oid);\n> \n> Hm, I know this code isn't generally well documented, but perhaps we\n> could add a comment as to why we're excluding partitioned tables?\n\nI added a short comment nearby. Hopefully that's sufficient. 
Let's see\nwhat the buildfarm members have to say now.\n\n(I wondered about putting the comment between two lines of the string\nliteral, but decided against it ...)\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Sun, 10 Mar 2019 13:29:06 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" }, { "msg_contents": "On 2019-03-10 13:29:06 -0300, Alvaro Herrera wrote:\n> Hello\n> \n> On 2019-Mar-08, Andres Freund wrote:\n> \n> > > diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c\n> > > index e962ae7e913..1de8da59361 100644\n> > > --- a/src/bin/pg_dump/pg_dump.c\n> > > +++ b/src/bin/pg_dump/pg_dump.c\n> > > @@ -4359,9 +4359,9 @@ binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> > > \t\t\t\t\t \"SELECT c.reltype AS crel, t.reltype AS trel \"\n> > > \t\t\t\t\t \"FROM pg_catalog.pg_class c \"\n> > > \t\t\t\t\t \"LEFT JOIN pg_catalog.pg_class t ON \"\n> > > -\t\t\t\t\t \" (c.reltoastrelid = t.oid) \"\n> > > +\t\t\t\t\t \" (c.reltoastrelid = t.oid AND c.relkind <> '%c') \"\n> > > \t\t\t\t\t \"WHERE c.oid = '%u'::pg_catalog.oid;\",\n> > > -\t\t\t\t\t pg_rel_oid);\n> > > +\t\t\t\t\t RELKIND_PARTITIONED_TABLE, pg_rel_oid);\n> > \n> > Hm, I know this code isn't generally well documented, but perhaps we\n> > could add a comment as to why we're excluding partitioned tables?\n> \n> I added a short comment nearby. Hopefully that's sufficient. Let's see\n> what the buildfarm members have to say now.\n\nThanks! Looks like crake's doing better.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Sun, 10 Mar 2019 18:11:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Binary upgrade from <12 to 12 creates toast table for\n partitioned tables" } ]
[ { "msg_contents": "Hello hackers,\n\nIn some cases if PostgreSQL encounters with wraparound PostgreSQL might \nleave created temporary tables even after shutdown.\n\nThis orphan temporary tables prevent VACUUM to fix wraparound. It is \nbecause in single mode VACUUM considers orphan temp tables as temp \ntables of other backends.\n\nGrigory reported that one of our clients got stuck fixing wraparound \nbecause he didn't know that he had orphaned temp tables left by a \nbackend after wraparound.\n\nThis patch fixes the issue. With it VACUUM deletes orphaned tables in \nsingle mode.\n\nSee also thread in general (I'm not sure that orphan temp tables were \ncause here though):\nhttps://www.postgresql.org/message-id/CADU5SwN6u4radqQgUY2VjEyqXF0KJ6A09PYuJjT%3Do9d7vzM%3DCg%40mail.gmail.com\n\nIf the patch is interesting I'll add it to the next commitfest and label \nit as 'v13'.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Thu, 7 Mar 2019 12:46:06 +0300", "msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "Hi!\n\nOn Thu, Mar 7, 2019 at 12:46 PM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> In some cases if PostgreSQL encounters with wraparound PostgreSQL might\n> leave created temporary tables even after shutdown.\n>\n> This orphan temporary tables prevent VACUUM to fix wraparound. It is\n> because in single mode VACUUM considers orphan temp tables as temp\n> tables of other backends.\n>\n> Grigory reported that one of our clients got stuck fixing wraparound\n> because he didn't know that he had orphaned temp tables left by a\n> backend after wraparound.\n>\n> This patch fixes the issue. 
With it VACUUM deletes orphaned tables in\n> single mode.\n>\n> See also thread in general (I'm not sure that orphan temp tables were\n> cause here though):\n> https://www.postgresql.org/message-id/CADU5SwN6u4radqQgUY2VjEyqXF0KJ6A09PYuJjT%3Do9d7vzM%3DCg%40mail.gmail.com\n>\n> If the patch is interesting I'll add it to the next commitfest and label\n> it as 'v13'.\n\nAs far as I understand, it's intended that user should be able to fix\nwraparound in single mode. Assuming this issue may prevent user from\ndoing this and fix is quite trivial, should we consider backpatching\nthis?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n", "msg_date": "Thu, 7 Mar 2019 12:57:40 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "Arthur Zakirov <a.zakirov@postgrespro.ru> writes:\n> In some cases if PostgreSQL encounters with wraparound PostgreSQL might \n> leave created temporary tables even after shutdown.\n> This orphan temporary tables prevent VACUUM to fix wraparound. It is \n> because in single mode VACUUM considers orphan temp tables as temp \n> tables of other backends.\n\nHm.\n\n> This patch fixes the issue. With it VACUUM deletes orphaned tables in \n> single mode.\n\nThis seems like an astonishingly bad idea. Nobody would expect DROP TABLE\nto be spelled \"VACUUM\", and the last thing we need when someone has been\nforced to use single-user mode is to put additional land mines under their\nfeet. They might, for example, wish to do forensic investigation on such\ntables to discover the reason for a crash.\n\nI wonder if a better response would be, in single-user mode, to allow temp\ntables to be processed as local temp tables regardless of their backend\nnumber. 
(Everywhere, not just in VACUUM.)\n\nAlso, if what someone actually wants is to drop such a temp table from\nsingle-user mode, we should make sure that they are allowed to do so.\nBut the command for doing that should be \"DROP TABLE\", not \"VACUUM\".\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 07 Mar 2019 09:39:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "On Thu, Mar 7, 2019 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder if a better response would be, in single-user mode, to allow temp\n> tables to be processed as local temp tables regardless of their backend\n> number. (Everywhere, not just in VACUUM.)\n\nSince commit debcec7dc31a992703911a9953e299c8d730c778 there is nothing\nto prevent two different backends from using the same relfilenode\nnumber, so I don't think this will work.\n\n> Also, if what someone actually wants is to drop such a temp table from\n> single-user mode, we should make sure that they are allowed to do so.\n> But the command for doing that should be \"DROP TABLE\", not \"VACUUM\".\n\nIn a way I agree, but I think the reality is that some very large\npercentage of people who enter single user mode do so because of a\nwraparound-induced shutdown, and what they need is an easy way to get\nthe system back on line. Running a catalog query to look for\nundropped temp tables and then dropping them one by one using DROP\nTABLE is not what they want. 
They want to be able to run one or two\ncommands and get their database back on line.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 7 Mar 2019 09:53:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In a way I agree, but I think the reality is that some very large\n> percentage of people who enter single user mode do so because of a\n> wraparound-induced shutdown, and what they need is an easy way to get\n> the system back on line. Running a catalog query to look for\n> undropped temp tables and then dropping them one by one using DROP\n> TABLE is not what they want. They want to be able to run one or two\n> commands and get their database back on line.\n\nSo if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\nlet's do that. But please let's not turn a well defined command\nlike VACUUM into something that you don't quite know what it will do.\nI'm especially down on having squishy DWIM logic in a last-resort\noperating mode; the fact that some people only use it in a certain way\nis a poor excuse for setting booby traps for everybody.\n\nSomething I could get behind as a less dangerous way of addressing\nthis issue is to define DISCARD TEMP, in single-user mode, as dropping\nthe contents of all temp schemas not just one.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 07 Mar 2019 10:24:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "On Thu, Mar 7, 2019 at 10:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\n> let's do that. 
But please let's not turn a well defined command\n> like VACUUM into something that you don't quite know what it will do.\n\nI am on the fence about that. I see your point, but on the other\nhand, autovacuum drops temp tables all the time in multi-user mode and\nI think it's pretty clear that, with the possible exception of you,\nusers find that an improvement. So it could be argued that we're\nmerely proposing to make the single-user mode behavior of vacuum\nconsistent with the behavior people are already expecting it to do.\n\nThe underlying and slightly more general problem here is that users\nfind it really hard to know what to do when vacuum fails to advance\nrelfrozenxid. Of course, temp tables are only one reason why that can\nhappen: logical decoding slots and prepared transactions are others,\nand I don't think we can automatically drop that stuff because\nsomebody may still be expecting them to accomplish whatever their\nintended purpose is. The difference with temp tables is that users\nimagine -- quite naturally I think -- that they are in fact temporary,\nand that they will in fact go away when the session ends. The user\nwould tend to view their continued existence as an unwanted\nimplementation artifact, not something that they should be responsible\nfor removing.\n\nReally, I'd like to redesign the way this whole system works. Instead\nof forcing a full-system shutdown, we should just refuse to assign any\nmore XIDs, disable the vacuum cost delay machinery, and let autovacuum\ngo nuts until the problem is corrected. Forcing people to run vacuum\nto run one vacuum at a time is not nice, and not having background\nprocesses like the bgwriter or checkpointer while you're doing it\nisn't good either, and there's no good reason to disallow SELECT\nqueries while we're recovering the system either. 
Actually, even\nbefore we get to the point where we currently force a shutdown, we\nought to just give up on vacuum cost delay, either all at once or\nperhaps incrementally, when we see that we're getting into trouble.\nBut all of that is work for another time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 7 Mar 2019 10:49:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "> On Thu, Mar 7, 2019 at 10:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > So if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\n> > let's do that. But please let's not turn a well defined command\n> > like VACUUM into something that you don't quite know what it will do.\n\nI see your point. Another approach would be to let user know what prevents\nVACUUM to fix the wraparound in single mode, mainly that there are orphan\ntemp tables . But it might be too verbose if PostgreSQL will print warning for\nevery orphan temp table.\n\n> Really, I'd like to redesign the way this whole system works. Instead\n> of forcing a full-system shutdown, we should just refuse to assign any\n> more XIDs, disable the vacuum cost delay machinery, and let autovacuum\n> go nuts until the problem is corrected. Forcing people to run vacuum\n> to run one vacuum at a time is not nice, and not having background\n> processes like the bgwriter or checkpointer while you're doing it\n> isn't good either, and there's no good reason to disallow SELECT\n> queries while we're recovering the system either. 
Actually, even\n> before we get to the point where we currently force a shutdown, we\n> ought to just give up on vacuum cost delay, either all at once or\n> perhaps incrementally, when we see that we're getting into trouble.\n> But all of that is work for another time.\n\nI think it would be very neat feature!\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n", "msg_date": "Thu, 7 Mar 2019 21:49:33 +0300", "msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "\n\nOn 03/07/2019 06:49 PM, Robert Haas wrote:\n> On Thu, Mar 7, 2019 at 10:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\n>> let's do that. But please let's not turn a well defined command\n>> like VACUUM into something that you don't quite know what it will do.\n> I am on the fence about that. I see your point, but on the other\n> hand, autovacuum drops temp tables all the time in multi-user mode and\n> I think it's pretty clear that, with the possible exception of you,\n> users find that an improvement. So it could be argued that we're\n> merely proposing to make the single-user mode behavior of vacuum\n> consistent with the behavior people are already expecting it to do.\n>\n> The underlying and slightly more general problem here is that users\n> find it really hard to know what to do when vacuum fails to advance\n> relfrozenxid. Of course, temp tables are only one reason why that can\n> happen: logical decoding slots and prepared transactions are others,\n> and I don't think we can automatically drop that stuff because\n> somebody may still be expecting them to accomplish whatever their\n> intended purpose is. 
The difference with temp tables is that users\n> imagine -- quite naturally I think -- that they are in fact temporary,\n> and that they will in fact go away when the session ends. The user\n> would tend to view their continued existence as an unwanted\n> implementation artifact, not something that they should be responsible\n> for removing.\n\nI'm no hacker, but I would like to express my humble opinion on the matter.\n1. The proposed patch is fairly conservative; to be fully consistent with \nautovacuum behaviour, VACUUM should be able to drop orphaned temp tables \neven in multi-user mode.\n\n2. There is indeed a problem of expected behavior from the user perspective. \nEvery PostgreSQL user knows that if you hit wraparound, you go \nsingle-mode, run VACUUM and the problem goes away. Exactly because of \nthis I've got involved with this problem:\nhttps://www.postgresql.org/message-id/0c7c2f84-74f5-2cd9-767e-9b2566065d71%40postgrespro.ru\nThe poor guy repeatedly ran VACUUM after VACUUM and had no clue what to do. \nHe even considered just restoring from backup and being done with it. It \ntook some time to figure out the true culprit, and time = money.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 8 Mar 2019 01:38:29 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "On Thu, Mar 07, 2019 at 10:49:29AM -0500, Robert Haas wrote:\n> On Thu, Mar 7, 2019 at 10:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > So if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\n> > let's do that. But please let's not turn a well defined command\n> > like VACUUM into something that you don't quite know what it will do.\n> \n> I am on the fence about that. 
I see your point, but on the other\n> hand, autovacuum drops temp tables all the time in multi-user mode and\n> I think it's pretty clear that, with the possible exception of you,\n> users find that an improvement. So it could be argued that we're\n> merely proposing to make the single-user mode behavior of vacuum\n> consistent with the behavior people are already expecting it to do.\n\nIt is possible for a session to drop temporary tables of other\nsessions. Wouldn't it work as well in this case for single-user mode\nwhen seeing an orphan temp table still defined? Like Tom, I don't\nthink that it is a good idea to play with the heuristics of VACUUM in\nthe way the patch proposes.\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 15:27:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "On Fri, Mar 8, 2019 at 9:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Mar 07, 2019 at 10:49:29AM -0500, Robert Haas wrote:\n> > On Thu, Mar 7, 2019 at 10:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > So if we think we can invent a \"MAGICALLY FIX MY DATABASE\" command,\n> > > let's do that. But please let's not turn a well defined command\n> > > like VACUUM into something that you don't quite know what it will do.\n> >\n> > I am on the fence about that. I see your point, but on the other\n> > hand, autovacuum drops temp tables all the time in multi-user mode and\n> > I think it's pretty clear that, with the possible exception of you,\n> > users find that an improvement. So it could be argued that we're\n> > merely proposing to make the single-user mode behavior of vacuum\n> > consistent with the behavior people are already expecting it to do.\n>\n> It is possible for a session to drop temporary tables of other\n> sessions. Wouldn't it work as well in this case for single-user mode\n> when seeing an orphan temp table still defined? 
Like Tom, I don't\n> think that it is a good idea to play with the heuristics of VACUUM in\n> the way the patch proposes.\n\nI think we have a kind of agreement, that having simple way to get rid\nof all orphan tables in single-user mode is good. The question is\nwhether it's good for VACUUM command to do this. Naturally, user may\nenter single-user mode for different reasons, not only for xid\nwraparound fixing. For example, one may enter this mode for examining\ntemporary tables present in the database. Then it would be\ndisappointing surprise that all of them gone after VACUUM command.\n\nSo, what about special option, which would make VACUUM to drop orphan\ntables in single-user mode? Do we need it in multi-user mode too?\n\nBTW, does this patch checks that temporary table is really orphan?\nAFAICS, user may define some temporary tables in single-user mode\nbefore running VACUUM.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 7 Jun 2019 21:07:12 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" }, { "msg_contents": "Hello Alexander,\n\nOn Friday, June 7, 2019, Alexander Korotkov <a.korotkov@postgrespro.ru>\nwrote:\n> BTW, does this patch checks that temporary table is really orphan?\n> AFAICS, user may define some temporary tables in single-user mode\n> before running VACUUM.\n\nAs far as I remember, the patch checks it.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n-- \nArtur", "msg_date": "Sat, 8 Jun 2019 07:16:49 +0300", "msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] Drop orphan temp tables in single-mode" } ]
[ { "msg_contents": "Currently building postgresql for Win32 with a mingw toolchain produces \nimport libraries with *.a extension, whereas the extension should be \n*.dll.a. There are various downstream workarounds for this, see i.e. [1] \nand [2]. The attached patch 0001-Fix-import-library-extension.patch \naddresses this.\n\nRelated, no actual static libraries are produced alongside the \nrespective dlls. The attached patch 0002-Build-static-libraries.patch \naddresses this, in a similar fashion as is already done for the AIX case \nin Makefile.shlib.\n\nThanks Sandro\n\n[1] \nhttps://src.fedoraproject.org/rpms/mingw-postgresql/blob/master/f/mingw-postgresql.spec#_144 \n[2] \nhttps://aur.archlinux.org/cgit/aur.git/tree/0001-Use-.dll.a-as-extension-for-import-libraries.patch?h=mingw-w64-postgresql", "msg_date": "Thu, 7 Mar 2019 15:23:50 +0100", "msg_from": "Sandro Mani <manisandro@gmail.com>", "msg_from_op": true, "msg_subject": "[Patch] Mingw: Fix import library extension, build actual static\n libraries" }, { "msg_contents": "Any chance these patches could be considered?\n\nThanks\nSandro\n\nOn Thu, Mar 7, 2019 at 3:23 PM Sandro Mani <manisandro@gmail.com> wrote:\n\n> Currently building postgresql for Win32 with a mingw toolchain produces\n> import libraries with *.a extension, whereas the extension should be\n> *.dll.a. There are various downstream workarounds for this, see i.e. [1]\n> and [2]. The attached patch 0001-Fix-import-library-extension.patch\n> addresses this.\n>\n> Related, no actual static libraries are produced alongside the respective\n> dlls. 
The attached patch 0002-Build-static-libraries.patch addresses this,\n> in a similar fashion as is already done for the AIX case in Makefile.shlib.\n>\n> Thanks Sandro\n>\n> [1]\n> https://src.fedoraproject.org/rpms/mingw-postgresql/blob/master/f/mingw-postgresql.spec#_144\n> [2]\n> https://aur.archlinux.org/cgit/aur.git/tree/0001-Use-.dll.a-as-extension-for-import-libraries.patch?h=mingw-w64-postgresql\n>\n", "msg_date": "Tue, 2 Apr 2019 11:32:26 +0200", "msg_from": "Sandro Mani <manisandro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Patch] Mingw: Fix import library extension,\n build actual static libraries" }, { "msg_contents": "On Tue, Apr 2, 2019 at 5:32 AM Sandro Mani <manisandro@gmail.com> wrote:\n> Any chance these patches could be considered?\n\nYou should add them to the Open CommitFest, currently\nhttps://commitfest.postgresql.org/23/\n\nThat CommitFest is scheduled to start in July; we are about to go to\nfeature freeze for v12, and your patch was submitted after the last\nCommitFest for v12 had already started. 
That doesn't *necessarily*\nmean that someone won't make an argument for including your patch in\nv12 rather than making it wait another year, if for example it is\nviewed to be a bug fix, but the odds are against you, because a lot of\nother people's patches have been in line for much longer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Apr 2019 10:02:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Mingw: Fix import library extension,\n build actual static libraries" }, { "msg_contents": "On Thu, Mar 07, 2019 at 03:23:50PM +0100, Sandro Mani wrote:\n> Related, no actual static libraries are produced alongside the respective\n> dlls. The attached patch 0002-Build-static-libraries.patch addresses this,\n> in a similar fashion as is already done for the AIX case in Makefile.shlib.\n\nWe don't build static libraries on AIX, though Makefile.shlib uses the\n$(stlib) variable to get a name for the *.a shared library it makes. 
Here's\nan example of one AIX Makefile.shlib build sequence, from\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hornet&dt=2019-04-15%2022%3A35%3A52&stg=make\n\nrm -f libpq.a\nar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o fe-secure.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\ntouch libpq.a\n../../../src/backend/port/aix/mkldexport.sh libpq.a libpq.so.5 >libpq.exp\nxlc_r -qmaxmem=33554432 -D_LARGE_FILES=1 -qnoansialias -g -O2 -qmaxmem=16384 -qsrcmsg -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -o libpq.so.5 libpq.a -Wl,-bE:libpq.exp -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -L/home/nm/sw/nopath/libxml2-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/uuid-64/lib -L/home/nm/sw/nopath/openldap-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/libxml2-64/lib -Wl,-blibpath:'/home/nm/farm/xlc64/HEAD/inst/lib:/home/nm/sw/nopath/libxml2-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/uuid-64/lib:/home/nm/sw/nopath/openldap-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/libxml2-64/lib:/usr/lib:/lib' -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -lintl -lssl -lcrypto -lm -lldap_r -llber -lpthreads\nrm -f libpq.a\nar crs libpq.a libpq.so.5\n\n\n", "msg_date": "Mon, 15 Apr 2019 22:22:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Mingw: Fix import library extension, build actual static\n libraries" }, { "msg_contents": "\nOn 4/16/19 1:22 AM, Noah Misch wrote:\n> On Thu, Mar 07, 2019 at 03:23:50PM +0100, Sandro Mani wrote:\n>> Related, no actual static libraries are produced alongside the respective\n>> dlls. 
The attached patch 0002-Build-static-libraries.patch addresses this,\n>> in a similar fashion as is already done for the AIX case in Makefile.shlib.\n> We don't build static libraries on AIX, though Makefile.shlib uses the\n> $(stlib) variable to get a name for the *.a shared library it makes. Here's\n> an example of one AIX Makefile.shlib build sequence, from\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hornet&dt=2019-04-15%2022%3A35%3A52&stg=make\n>\n> rm -f libpq.a\n> ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o fe-secure.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\n> touch libpq.a\n> ../../../src/backend/port/aix/mkldexport.sh libpq.a libpq.so.5 >libpq.exp\n> xlc_r -qmaxmem=33554432 -D_LARGE_FILES=1 -qnoansialias -g -O2 -qmaxmem=16384 -qsrcmsg -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -o libpq.so.5 libpq.a -Wl,-bE:libpq.exp -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -L/home/nm/sw/nopath/libxml2-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/uuid-64/lib -L/home/nm/sw/nopath/openldap-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/libxml2-64/lib -Wl,-blibpath:'/home/nm/farm/xlc64/HEAD/inst/lib:/home/nm/sw/nopath/libxml2-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/uuid-64/lib:/home/nm/sw/nopath/openldap-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/libxml2-64/lib:/usr/lib:/lib' -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -lintl -lssl -lcrypto -lm -lldap_r -llber -lpthreads\n> rm -f libpq.a\n> ar crs libpq.a libpq.so.5\n>\n>\n\nI'm wondering if it wouldn't be better to set the value of stlib for\nwindows, instead of changes like this:\n\n\n\n-    $(CC) $(CFLAGS)  -shared -static-libgcc -o $@  $(OBJS) $(LDFLAGS)\n$(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols\n-Wl,--out-implib=$(stlib)\n+    $(CC) $(CFLAGS)  
-shared -static-libgcc -o $@  $(OBJS) $(LDFLAGS)\n$(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols\n-Wl,--out-implib=lib$(NAME).dll.a\n\n\n\nI'm also wondering if changing this will upset third party authors.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 9 Jul 2019 08:26:52 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Mingw: Fix import library extension, build actual static\n libraries" }, { "msg_contents": "On Tue, Jul 09, 2019 at 08:26:52AM -0400, Andrew Dunstan wrote:\n> On 4/16/19 1:22 AM, Noah Misch wrote:\n> > On Thu, Mar 07, 2019 at 03:23:50PM +0100, Sandro Mani wrote:\n> >> Related, no actual static libraries are produced alongside the respective\n> >> dlls. The attached patch 0002-Build-static-libraries.patch addresses this,\n> >> in a similar fashion as is already done for the AIX case in Makefile.shlib.\n> > We don't build static libraries on AIX, though Makefile.shlib uses the\n> > $(stlib) variable to get a name for the *.a shared library it makes. 
Here's\n> > an example of one AIX Makefile.shlib build sequence, from\n> > https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hornet&dt=2019-04-15%2022%3A35%3A52&stg=make\n> >\n> > rm -f libpq.a\n> > ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o fe-secure.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\n> > touch libpq.a\n> > ../../../src/backend/port/aix/mkldexport.sh libpq.a libpq.so.5 >libpq.exp\n> > xlc_r -qmaxmem=33554432 -D_LARGE_FILES=1 -qnoansialias -g -O2 -qmaxmem=16384 -qsrcmsg -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -o libpq.so.5 libpq.a -Wl,-bE:libpq.exp -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -L/home/nm/sw/nopath/libxml2-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/uuid-64/lib -L/home/nm/sw/nopath/openldap-64/lib -L/home/nm/sw/nopath/icu58.2-64/lib -L/home/nm/sw/nopath/libxml2-64/lib -Wl,-blibpath:'/home/nm/farm/xlc64/HEAD/inst/lib:/home/nm/sw/nopath/libxml2-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/uuid-64/lib:/home/nm/sw/nopath/openldap-64/lib:/home/nm/sw/nopath/icu58.2-64/lib:/home/nm/sw/nopath/libxml2-64/lib:/usr/lib:/lib' -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -lintl -lssl -lcrypto -lm -lldap_r -llber -lpthreads\n> > rm -f libpq.a\n> > ar crs libpq.a libpq.so.5\n> \n> I'm wondering if it wouldn't be better to set the value of stlib for\n> windows, instead of changes like this:\n> \n> -    $(CC) $(CFLAGS)  -shared -static-libgcc -o $@  $(OBJS) $(LDFLAGS)\n> $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols\n> -Wl,--out-implib=$(stlib)\n> +    $(CC) $(CFLAGS)  -shared -static-libgcc -o $@  $(OBJS) $(LDFLAGS)\n> $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols\n> -Wl,--out-implib=lib$(NAME).dll.a\n\nIf we're going to have both static libs and import libs, they need different\nnames. 
But it may be an improvement to set implib = lib$(NAME).dll.a and use\n$(implib) in Make recipes like this one.\n\n> I'm also wondering if changing this will upset third party authors.\n\nI'd make it master-only, but I don't expect big trouble. GNU ld does pick\nlibFOO.dll.a when both that and libFOO.a are available. Folks probably have\nscripts that copy libpq.a to libpq.lib for the benefit of MSVC, and those\nwould need to change.\n\n\n", "msg_date": "Thu, 11 Jul 2019 18:34:29 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Mingw: Fix import library extension, build actual static\n libraries" }, { "msg_contents": "On 2019-03-07 15:23, Sandro Mani wrote:\n> Currently building postgresql for Win32 with a mingw toolchain produces\n> import libraries with *.a extension, whereas the extension should be\n> *.dll.a.\n\nI have read the MinGW documentation starting from\n<http://www.mingw.org/wiki/DLL> and have found no information supporting\nthis claim. All their examples match our existing naming. What is your\nsource for this information -- other than ...\n\n> There are various downstream workarounds for this, see i.e. [1]\n> and [2]. The attached patch 0001-Fix-import-library-extension.patch\n> addresses this.\n\nI'm confused what Fedora and Arch Linux have to do with this. Are they\ndistributing Windows binaries of third-party software?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Sep 2019 08:47:16 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Mingw: Fix import library extension, build actual static\n libraries" } ]
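An editorial aside on the naming scheme debated in the thread above: on MinGW the shared library, the import library consumed by the linker, and a true static archive conventionally carry three distinct names, which is why Noah suggests introducing a separate `$(implib)` variable alongside `$(stlib)`. A minimal shell sketch of that convention follows — the `NAME=pq` value and the commented link commands are illustrative, not taken from Makefile.shlib:

```shell
# Derive the three library names a MinGW build would conventionally use.
NAME=pq
shlib="lib${NAME}.dll"      # the shared library itself
implib="lib${NAME}.dll.a"   # import library, e.g. emitted via -Wl,--out-implib=$implib
stlib="lib${NAME}.a"        # true static archive, e.g. built via: ar crs $stlib *.o
echo "$shlib $implib $stlib"
```

As Noah notes in the thread, GNU ld resolving `-lpq` prefers `libpq.dll.a` over `libpq.a` when both are present, so the import library and the static archive can coexist under these names.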
[ { "msg_contents": "I noticed today that the signature for two functions is wrong\nin the documentation:\n\n- \"heap_page_item_attrs\" has the argument order and return type wrong\n and is lacking types\n\n- \"tuple_data_split\" is lacking the type for \"rel_oid\"\n\nPatch attached.\n\nThis should be backpatched down to 9.6 where the functions have been added.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 07 Mar 2019 21:00:24 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Small doc fix for pageinspect" }, { "msg_contents": "On Thu, Mar 07, 2019 at 09:00:24PM +0100, Laurenz Albe wrote:\n> This should be backpatched down to 9.6 where the functions have been\n> added.\n\nThanks, applied. The second argument name of heap_page_item_attrs is\nactually \"page\", and not \"t_data\", so both your patch and the docs\nwere wrong on this point.\n--\nMichael", "msg_date": "Fri, 8 Mar 2019 15:13:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Small doc fix for pageinspect" }, { "msg_contents": "Michael Paquier wrote:\n> On Thu, Mar 07, 2019 at 09:00:24PM +0100, Laurenz Albe wrote:\n> > This should be backpatched down to 9.6 where the functions have been\n> > added.\n> \n> Thanks, applied. The second argument name of heap_page_item_attrs is\n> actually \"page\", and not \"t_data\", so both your patch and the docs\n> were wrong on this point.\n\nThanks, and pardon the sloppiness.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 08 Mar 2019 10:15:21 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Small doc fix for pageinspect" } ]
[ { "msg_contents": "Alvaro has committed two of the patches in this CF entry[1], but the\nremaining two have yet to attract review.\n\nThis message contains only those two, just as before[2] except rebased\nover Alvaro's commits of the others.\n\n<confession>\n There are two new changes. The <ulink>s added in the -docfix patch now\n have link text, as allowed according to b060e6c, and the -content patch\n now updates the definition of the 'content' type in datatypes.sgml,\n which I had overlooked before.\n</confession>\n\nxml-functions-type-docfix-3.patch adjusts the documentation of the XML type\nand related functions to present some behavior and limitations more clearly.\n\nxml-content-2006-2.patch changes the behavior of xmlparse and the\ntext-to-xml cast to allow any XML 'document' (including one with a DTD)\nto be parsed as 'content', where the former behavior was to fail in that\ncase. This is the same as changing the definition of XML 'content' from\nthat of SQL:2003 to that of SQL:2006 and later. 
The later definition is\npreferable, because it eliminates a case that can fail in, e.g., pg_restore\n(which problem has been reported in the field).\n\nThe patches apply in that order (because the -docfix one adds language\ndescribing the current 'content' behavior, then the -content one changes\nthe behavior, and the language to match it).\n\nRegards,\n-Chap\n\n[1] https://commitfest.postgresql.org/22/1872/\n[2]\nhttps://www.postgresql.org/message-id/3e8eab9e-7289-6c23-5e2c-153cccea2257@anastigmatix.net", "msg_date": "Fri, 8 Mar 2019 00:08:16 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "The two \"XML Fixes\" patches still in need of review" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Alvaro has committed two of the patches in this CF entry[1], but the\n> remaining two have yet to attract review.\n> This message contains only those two, just as before[2] except rebased\n> over Alvaro's commits of the others.\n\nJust to update this thread --- per the other thread at\nhttps://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com\nI've now pushed a somewhat-adjusted version of the XML-content fix\npatch. The documentation patch needs some small rebasing to apply\nafter that one instead of before it.\n\nI'm not going to touch the documentation patch myself; I don't know\nenough about XML to review it competently. 
I do have a small suggestion\nthough, which is that the large \"Limits and Compatibility\" section\nyou added doesn't really seem to me to belong where you put it.\nPerhaps it'd make sense under the XML section in datatype.sgml,\nbut I think I might lean to making it a new section in Appendix D\n(SQL Conformance).\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 23 Mar 2019 17:05:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The two \"XML Fixes\" patches still in need of review" }, { "msg_contents": "On 03/23/19 17:05, Tom Lane wrote:\n\n> Just to update this thread --- per the other thread at\n> https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com\n> I've now pushed a somewhat-adjusted version of the XML-content fix\n> patch. The documentation patch needs some small rebasing to apply\n> after that one instead of before it.\n\nWill do.\n\n> Perhaps it'd make sense under the XML section in datatype.sgml,\n> but I think I might lean to making it a new section in Appendix D\n> (SQL Conformance).\n\nSounds like the option (4) I proposed back in [1]. 
I suppose it won't\nbe much trouble to move.\n\n> I'm not going to touch the documentation patch myself; I don't know\n> enough about XML to review it competently.\n\nI would be happy even to see it reviewed for style, and if anybody's\nwilling to look at the technical content even if not feeling ideally\nprepared, I'd be happy to respond or give references for any points\nin question.\n\nIt does fix a number of actual bugs in the existing documentation, so\nwhile it would be ideal to apply a higher standard of technical review\nto this patch, it'd be less ideal to have existing buggy doc lingering on\nbecause nobody feels up to reviewing at that ideal higher standard.\n\nRegards,\n-Chap\n\n[1] https://www.postgresql.org/message-id/5C298BCB.4060704%40anastigmatix.net\n\n", "msg_date": "Sat, 23 Mar 2019 18:20:49 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: The two \"XML Fixes\" patches still in need of review" }, { "msg_contents": "On 03/23/19 18:20, Chapman Flack wrote:\n>> Perhaps it'd make sense under the XML section in datatype.sgml,\n>> but I think I might lean to making it a new section in Appendix D\n>> (SQL Conformance).\n> \n> Sounds like the option (4) I proposed back in [1]. I suppose it won't\n> be much trouble to move.\n\nThe current structure of that appendix is: a few introductory paragraphs\n(not wrapped in any <sect... ...>), followed by two <sect1>s that are\nboth autogenerated (the supported and unsupported features tables).\n\nThe <sect1>s become independent pages, in the HTML version.\n\n- Move the XML limits/conformance section to a new <sect1> there?\n\n- Before the autogenerated tables, or after them?\n\nThe un-<sect>ed intro paragraphs are preceded (in HTML) by a table\nof contents, so the new section should show up there.\n\n- Also add a mention (say, in the next-to-last intro paragraph) that\n there are notes on the SQL/XML conformance that have their own\n section (and link to it)? 
Or just let the generated ToC link be enough?\n\n\nRegards,\n-Chap\n\n", "msg_date": "Sat, 23 Mar 2019 19:46:23 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: The two \"XML Fixes\" patches still in need of review" }, { "msg_contents": "On 03/23/19 18:20, Chapman Flack wrote:\n> On 03/23/19 17:05, Tom Lane wrote:\n>> I've now pushed a somewhat-adjusted version of the XML-content fix\n>> patch. The documentation patch needs some small rebasing to apply\n>> after that one instead of before it.\n> \n> Will do.\n> \n>> Perhaps it'd make sense under the XML section in datatype.sgml,\n>> but I think I might lean to making it a new section in Appendix D\n>> (SQL Conformance).\n> \n> Sounds like the option (4) I proposed back in [1]. I suppose it won't\n> be much trouble to move.\n\nPFA xml-functions-type-docfix-4.patch rebased, and with the limits/\ncompatibility section moved to Appendix D.\n\nRegards,\n-Chap", "msg_date": "Sat, 23 Mar 2019 21:21:57 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: The two \"XML Fixes\" patches still in need of review" } ]
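An editorial aside on the 'document' vs 'content' distinction this thread turns on: an XML document has exactly one root element, while SQL/XML 'content' may be any sequence of top-level nodes. The sketch below illustrates that distinction using Python's stdlib parser purely as an analogy — it is not PostgreSQL's libxml2 code path, and the fragment strings are invented examples:

```python
import xml.etree.ElementTree as ET

# A well-formed document: exactly one root element.
ET.fromstring("<doc><a/></doc>")

# XML "content" may have several top-level nodes; a document-only
# parser rejects it because there is no single root.
try:
    ET.fromstring("<a/><b/>trailing text")
    parsed_as_document = True
except ET.ParseError:
    parsed_as_document = False
print(parsed_as_document)  # False

# A common client-side workaround: wrap the content in a synthetic root.
wrapped = ET.fromstring("<r>%s</r>" % "<a/><b/>trailing text")
print(len(wrapped))  # 2 top-level child elements
```

The wrapping trick is only an illustration of why 'content' needs different parser handling than 'document'; the committed server-side fix changes what xmlparse itself accepts.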
[ { "msg_contents": "Hi hackers,\n\nThere is already a databases tap tests in pg_rewind, wonder if there is a\nneed for tablespace tap tests in pg_rewind.\nAttached is a initial patch from me.\n\nHere is a patch for runing pg_rewind, it is very similar to\nsrc/bin/pg_rewind/t/002_databases.pl, but there is no master_psql(\"CREATE\nTABLESPACE beforepromotion LOCATION '$tempdir/beforepromotion'\"); after\ncreate_standby() and before promote_standby(), because pg_rewind will error\nout :\ncould not create directory\n\"/Users/sbai/work/postgres/src/bin/pg_rewind/tmp_check/t_006_tablespace_master_local_data/pgdata/pg_tblspc/24576/PG_12_201903063\":\nFile exists\nFailure, exiting\n\nThe patch is created on top of the\ncommit e1e0e8d58c5c70da92e36cb9d59c2f7ecf839e00 (origin/master, origin/HEAD)\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Fri Mar 8 15:10:14 2019 +0900\n\n Fix function signatures of pageinspect in documentation\n\n tuple_data_split() lacked the type of the first argument, and\n heap_page_item_attrs() has reversed the first and second argument,\n with the bytea argument using an incorrect name.\n\n Author: Laurenz Albe\n Discussion:\nhttps://postgr.es/m/8f9ab7b16daf623e87eeef5203a4ffc0dece8dfd.camel@cybertec.at\n Backpatch-through: 9.6", "msg_date": "Fri, 8 Mar 2019 18:14:41 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Add tablespace tap test to pg_rewind" }, { "msg_contents": "Hi hackers,\n\nThere is already a databases tap tests in pg_rewind, wonder if there is a\nneed for tablespace tap tests in pg_rewind.\nAttached is a initial patch from me.\n\nHere is a patch for runing pg_rewind, it is very similar to\nsrc/bin/pg_rewind/t/002_databases.pl, but there is no master_psql(\"CREATE\nTABLESPACE beforepromotion LOCATION '$tempdir/beforepromotion'\"); after\ncreate_standby() and before promote_standby(), because pg_rewind will error\nout :\ncould not create 
directory\n\"/Users/sbai/work/postgres/src/bin/pg_rewind/tmp_check/t_006_tablespace_master_local_data/pgdata/pg_tblspc/24576/PG_12_201903063\":\nFile exists\nFailure, exiting\n\nThe patch is created on top of the\ncommit e1e0e8d58c5c70da92e36cb9d59c2f7ecf839e00 (origin/master, origin/HEAD)\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Fri Mar 8 15:10:14 2019 +0900\n\n Fix function signatures of pageinspect in documentation\n\n tuple_data_split() lacked the type of the first argument, and\n heap_page_item_attrs() has reversed the first and second argument,\n with the bytea argument using an incorrect name.\n\n Author: Laurenz Albe\n Discussion:\nhttps://postgr.es/m/8f9ab7b16daf623e87eeef5203a4ffc0dece8dfd.camel@cybertec.at\n Backpatch-through: 9.6", "msg_date": "Fri, 8 Mar 2019 21:42:29 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Fri, Mar 08, 2019 at 09:42:29PM +0800, Shaoqi Bai wrote:\n> There is already a databases tap tests in pg_rewind, wonder if there is a\n> need for tablespace tap tests in pg_rewind. 
Attached is a initial\n> patch from me.\n\nWhen working on the first version of pg_rewind for VMware with Heikki,\ntablespace support has been added only after, so what you propose is\nsensible I think.\n\n+# Run the test in both modes.\n+run_test('local');\nSmall nit: we should test for the remote mode here as well.\n--\nMichael", "msg_date": "Sat, 9 Mar 2019 09:09:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Sat, Mar 09, 2019 at 09:09:24AM +0900, Michael Paquier wrote:\n> When working on the first version of pg_rewind for VMware with Heikki,\n> tablespace support has been added only after, so what you propose is\n> sensible I think.\n> \n> +# Run the test in both modes.\n> +run_test('local');\n> Small nit: we should test for the remote mode here as well.\n\nI got to think more about this one a bit more, and I think that it\nwould be good to also check the consistency of a tablespace created\nbefore promotion. If you copy the logic from 002_databases.pl, this\nis not going to work because the primary and the standby would try to\ncreate the tablespace in the same path, stepping on each other's\ntoes. So what you need to do is to create the tablespace on the\nprimary because creating the standby. This requires a bit more work\nthan what you propose here though as you basically need to extend\nRewindTest::create_standby so as it is possible to pass extra\narguments to $node_master->backup. 
And in this case the extra option\nto use is --tablespace-mapping to make sure that the primary and the\nstandby have the same tablespace, but defined on different paths.\n--\nMichael", "msg_date": "Mon, 11 Mar 2019 07:50:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "Thanks, will work on it as you suggested\nAdd pg_basebackup --T olddir=newdir to support check the consistency of a\ntablespace created before promotion\nAdd run_test('remote');\n\nOn Mon, Mar 11, 2019 at 6:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Mar 09, 2019 at 09:09:24AM +0900, Michael Paquier wrote:\n> > When working on the first version of pg_rewind for VMware with Heikki,\n> > tablespace support has been added only after, so what you propose is\n> > sensible I think.\n> >\n> > +# Run the test in both modes.\n> > +run_test('local');\n> > Small nit: we should test for the remote mode here as well.\n>\n> I got to think more about this one a bit more, and I think that it\n> would be good to also check the consistency of a tablespace created\n> before promotion. If you copy the logic from 002_databases.pl, this\n> is not going to work because the primary and the standby would try to\n> create the tablespace in the same path, stepping on each other's\n> toes. So what you need to do is to create the tablespace on the\n> primary because creating the standby. This requires a bit more work\n> than what you propose here though as you basically need to extend\n> RewindTest::create_standby so as it is possible to pass extra\n> arguments to $node_master->backup. 
And in this case the extra option\n> to use is --tablespace-mapping to make sure that the primary and the\n> standby have the same tablespace, but defined on different paths.\n> --\n> Michael\n>\n", "msg_date": "Mon, 11 Mar 2019 19:49:11 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Mon, Mar 11, 2019 at 07:49:11PM +0800, Shaoqi Bai wrote:\n> Thanks, will work on it as you suggested\n> Add pg_basebackup --T olddir=newdir to support check the consistency of a\n> tablespace created before promotion\n> Add run_test('remote');\n\nThanks for considering my input. Why don't you register your patch to\nthe next commit fest then so as it goes through a formal review once\nyou are able to provide a new version? The commit fest is here:\nhttps://commitfest.postgresql.org/23/\n\nWe are currently in the process of wrapping up the last commit fest\nfor v12, so this stuff will have to wait a bit :(\n\nIt could be an idea to split the patch in two pieces:\n- One patch which refactors the code for the new option in\nPostgresNode.pm\n- Second patch for the new test with integration in RewindTest.pm.\nThis should touch different parts of the code, so combining both would\nbe fine as well for me :)\n--\nMichael", "msg_date": "Tue, 12 Mar 2019 11:27:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Tue, Mar 12, 2019 at 10:27 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Mon, Mar 11, 2019 at 07:49:11PM +0800, Shaoqi Bai wrote:\n> > Thanks, will work on it as you suggested\n> > Add pg_basebackup --T olddir=newdir to support check the consistency of a\n> > tablespace created before promotion\n> > Add run_test('remote');\n>\n> Thanks for considering my input. 
Why don't you register your patch to\n> the next commit fest then so as it goes through a formal review once\n> you are able to provide a new version? The commit fest is here:\n> https://commitfest.postgresql.org/23/\n>\n> We are currently in the process of wrapping up the last commit fest\n> for v12, so this stuff will have to wait a bit :(\n>\n> It could be an idea to split the patch in two pieces:\n> - One patch which refactors the code for the new option in\n> PostgresNode.pm\n> - Second patch for the new test with integration in RewindTest.pm.\n> This should touch different parts of the code, so combining both would\n> be fine as well for me :)\n>\n\nThanks for your advice, sorry for taking so long to give update in the\nthread, because I am stuck in modifing Perl script, knowing little about\nPerl language.\n\nI tried to do the following two things in this patch.\n1. add pg_basebackup --T olddir=newdir to support check the consistency of\na tablespace created before promotion\n2. run both run_test('local') and run_test('remote');\n\nThe code can pass make installcheck under src/bin/pg_rewind, but can not\npass make installcheck under src/bin/pg_basebackup.\nBecause the patch refactor is not done well.\n\nThe patch still need refactor, to make other tests pass, like tests under\nsrc/bin/pg_basebackup.\nSending the letter is just to let you know the little progress on the\nthread.", "msg_date": "Tue, 19 Mar 2019 20:16:21 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Tue, Mar 19, 2019 at 08:16:21PM +0800, Shaoqi Bai wrote:\n> Thanks for your advice, sorry for taking so long to give update in the\n> thread, because I am stuck in modifing Perl script, knowing little about\n> Perl language.\n\nNo problem. 
It is true that using perl for the first time can be a\ncertain gap, but once you get used to it it becomes really nice to be\nable to control how tests are run in the tree. I would have avoided\nextra routines in the patch, like what you have done with\ncreate_standby_tbl_mapping(), but instead do something like\ninit_from_backup() which is able to take extra parameters in a way\nsimilar to has_streaming and has_restoring. However, the trick with\ntablespace mapping is that the caller of backup() should be able to\npass down multiple tablespace mapping references to make that a\nmaximum portable.\n--\nMichael", "msg_date": "Wed, 20 Mar 2019 08:45:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Tue, Mar 12, 2019 at 10:27 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> It could be an idea to split the patch in two pieces:\n> - One patch which refactors the code for the new option in\n> PostgresNode.pm\n> - Second patch for the new test with integration in RewindTest.pm.\n> This should touch different parts of the code, so combining both would\n> be fine as well for me :)\n> --\n> Michael\n>\n\nHave updated the patch doing as you suggested\n\n\nOn Wed, Mar 20, 2019 at 7:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> I would have avoided\n> extra routines in the patch, like what you have done with\n> create_standby_tbl_mapping(), but instead do something like\n> init_from_backup() which is able to take extra parameters in a way\n> similar to has_streaming and has_restoring. 
However, the trick with\n> tablespace mapping is that the caller of backup() should be able to\n> pass down multiple tablespace mapping references to make that\n> maximally portable.\n> --\n> Michael\n\n\nAlso updated the patch to follow your suggestion.", "msg_date": "Thu, 21 Mar 2019 23:41:01 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Thu, Mar 21, 2019 at 11:41:01PM +0800, Shaoqi Bai wrote:\n> Have updated the patch as you suggested\n\n+ RewindTest::setup_cluster($test_mode, ['-g']);\n+ RewindTest::start_master();\n\nThere is no need to test for group permissions here, 002_databases.pl\nalready looks after that.\n\n+ # Check for symlink -- needed only on source dir, only allow symlink\n+ # when under pg_tblspc\n+ # (note: this will fall through quietly if file is already gone)\n+ if (-l $srcpath)\nSo you need that, but in RecursiveCopy.pm, because of init_from_backup\nwhen creating the standby, which makes sense when it comes to\npg_tblspc. 
I am wondering about any side effects though, and if it\nwould make sense to just remove the restriction for symlinks in\n_copypath_recurse().\n--\nMichael", "msg_date": "Fri, 22 Mar 2019 14:34:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Fri, Mar 22, 2019 at 1:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 21, 2019 at 11:41:01PM +0800, Shaoqi Bai wrote:\n> > Have updated the patch as you suggested\n>\n> + RewindTest::setup_cluster($test_mode, ['-g']);\n> + RewindTest::start_master();\n>\n> There is no need to test for group permissions here, 002_databases.pl\n> already looks after that.\n>\n>\nDeleted the test for group permissions in the updated patch.\n\n\n> + # Check for symlink -- needed only on source dir, only allow symlink\n> + # when under pg_tblspc\n> + # (note: this will fall through quietly if file is already gone)\n> + if (-l $srcpath)\n> So you need that, but in RecursiveCopy.pm, because of init_from_backup\n> when creating the standby, which makes sense when it comes to\n> pg_tblspc. I am wondering about any side effects though, and if it\n> would make sense to just remove the restriction for symlinks in\n> _copypath_recurse().\n> --\n> Michael\n>\n\nChecking where RecursiveCopy::copypath is called, only _backup_fs and\ninit_from_backup call it.\nAfter running make -C src/bin check on the updated patch, I see no\nfailure.", "msg_date": "Fri, 22 Mar 2019 16:25:53 +0800", "msg_from": "Shaoqi Bai <sbai@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "I just noticed this thread. What do we think of adding this test to\npg12? 
(The patch doesn't apply verbatim, but it's a small update to get\nit to apply.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Apr 2019 10:11:24 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Fri, Apr 26, 2019 at 10:11:24AM -0400, Alvaro Herrera wrote:\n> I just noticed this thread. What do we think of adding this test to\n> pg12? (The patch doesn't apply verbatim, but it's a small update to get\n> it to apply.)\n\nCould you let me have a look at it? I have not tested on Windows, but\nI suspect that because of the symlink() part this would fail, so we\nmay need to skip the tests.\n--\nMichael", "msg_date": "Fri, 26 Apr 2019 23:38:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On 2019-Apr-26, Michael Paquier wrote:\n\n> On Fri, Apr 26, 2019 at 10:11:24AM -0400, Alvaro Herrera wrote:\n> > I just noticed this thread. What do we think of adding this test to\n> > pg12? (The patch doesn't apply verbatim, but it's a small update to get\n> > it to apply.)\n> \n> Could you let me have a look at it?\n\nAbsolutely. I was just trying to get a sense of how frozen the water is\nat this point.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Apr 2019 11:18:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-26, Michael Paquier wrote:\n>> On Fri, Apr 26, 2019 at 10:11:24AM -0400, Alvaro Herrera wrote:\n>>> I just noticed this thread. 
What do we think of adding this test to\n>>> pg12? (The patch doesn't apply verbatim, but it's a small update to get\n>>> it to apply.)\n\n>> Could you let me have a look at it?\n\n> Absolutely. I was just trying to get a sense of how frozen the water is\n> at this point.\n\nI don't think feature freeze precludes adding new test cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2019 11:21:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Fri, Apr 26, 2019 at 11:21:48AM -0400, Tom Lane wrote:\n> I don't think feature freeze precludes adding new test cases.\n\nI think as well that adding this stuff into v12 would be fine. Now if\nthere is any objection let's wait for later.\n--\nMichael", "msg_date": "Sat, 27 Apr 2019 09:35:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "At Sat, 27 Apr 2019 09:35:19 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190427003519.GC2032@paquier.xyz>\n> On Fri, Apr 26, 2019 at 11:21:48AM -0400, Tom Lane wrote:\n> > I don't think feature freeze precludes adding new test cases.\n> \n> I think as well that adding this stuff into v12 would be fine. Now if\n> there is any objection let's wait for later.\n\nThe patch seems to be using the tablespace directory in backups\ndirectly from standbys. 
In other words, multiple standbys created\nfrom a backup share the tablespace directory in the backup.\n\nAnother nitpick is that it sounds a bit strange that the parameter\n\"has_tablespace_mapping\" is not a boolean.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n", "msg_date": "Tue, 07 May 2019 10:31:59 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Fri, Mar 22, 2019 at 04:25:53PM +0800, Shaoqi Bai wrote:\n> Deleted the test for group permissions in the updated patch.\n\nWell, there are a couple of things I am not really happy about in this\npatch:\n- There is not much point in having has_tablespace_mapping as it is not\nextensible. Instead I'd rather use a single \"extra\" parameter which\ncan define a range of options. An example of that is within\nPostgresNode::init for \"extra\" and \"auth_extra\".\n- CREATE TABLESPACE is run once on the primary *before* promoting the\nstandby, which causes the tablespace paths to map between both of\nthem. This is not correct. Creating a tablespace on the primary\nbefore creating the standby, and using --tablespace-mapping, would be\nthe way to go... However per the next point...\n- standby_afterpromotion is created on the promoted standby, and then\nimmediately dropped. pg_rewind is able to handle this case when\nworking on different hosts. But with this test we finish by having\nthe same problem as pg_basebackup: the source and the target server\nfinish by eating each other. 
I think that this could actually be an\ninteresting feature for pg_rewind.\n- A comment at the end refers to databases, and not tablespaces.\n\nYou could work out the first problem with the backup by replacing the\nbackup()/init_from_backup() in RewindTest::create_standby with a pure\ncall to pg_basebackup, but you still have the second problem, which we\nshould still be able to test, and this requires more facility in\npg_rewind so as it is basically possible to hijack\ncreate_target_symlink() to create a symlink to a different path than\nthe initial one.\n\n> Checking where RecursiveCopy::copypath is called, only _backup_fs and\n> init_from_backup call it.\n> After running make -C src/bin check on the updated patch, I see no failure.\n\nYes, I can see that. The issue is that even if we do a backup with\n--tablespace-mapping then we still need a tweak to allow the copy of\nsymlinks. I am not sure that this is completely what we are looking\nfor either, as it means that any test setting a primary with a\ntablespace and two standbys initialized from the same base backup\nwould fail. That's not really portable.\n--\nMichael", "msg_date": "Thu, 9 May 2019 14:36:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Tue, May 07, 2019 at 10:31:59AM +0900, Kyotaro HORIGUCHI wrote:\n> The patch seems to be using the tablespace directory in backups\n> directly from standbys. In other words, multiple standbys created\n> from a backup share the tablespace directory in the backup.\n\nYes, I noticed that, and I am not happy about that either. I'd like\nto think that what we are looking for is an equivalent of the\ntablespace mapping of pg_basebackup, but for init_from_backup(). 
At\nleast that sounds like a plausible solution.\n--\nMichael", "msg_date": "Thu, 9 May 2019 14:51:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add tablespace tap test to pg_rewind" }, { "msg_contents": "On Thu, May 09, 2019 at 02:36:48PM +0900, Michael Paquier wrote:\n> Yes, I can see that. The issue is that even if we do a backup with\n> --tablespace-mapping then we still need a tweak to allow the copy of\n> symlinks. I am not sure that this is completely what we are looking\n> for either, as it means that any test setting a primary with a\n> tablespace and two standbys initialized from the same base backup\n> would fail. That's not really portable.\n\nSo for now I am marking the patch as returned with feedback. I can\nsee a simple way to put us through that by having an equivalent of\n--tablespace-mapping for the file-level backup routine which passes it\ndown to RecursiveCopy.pm as well as PostgresNode::init_from_backup.\nAt the end of the day, we would need to be able to point the path to\ndifferent locations for:\n- the primary\n- any backups taken\n- any standbys which are restored from the previous backups.\n\nAnd on top of that there is of course the argument of pg_rewind which\nwould need an option similar to pg_basebackup's --tablespace-mapping\nso as a target instance does not finish by using the same tablespace\npath as the source when there is a tablespace that differs during\nthe operation. And it does not prevent an overlap if a tablespace\nneeds to be created when the target instance replays WAL up to the\nconsistent point of the source. So that's a lot of work which may be\nhard to justify.\n--\nMichael", "msg_date": "Fri, 10 May 2019 20:53:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: Add tablespace tap test to pg_rewind" } ]
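The tablespace relocation discussed in the thread above follows pg_basebackup's olddir=newdir convention: each mapping rewrites a backup's pg_tblspc symlink target so that a standby restored from the backup does not reuse the primary's tablespace directory. A minimal Python sketch of those mapping semantics follows; the helper names are hypothetical illustrations, not the PostgresNode.pm or pg_basebackup implementation.

```python
# Sketch of pg_basebackup-style tablespace mapping: each mapping is an
# "olddir=newdir" pair, and a backup's pg_tblspc symlink targets are
# looked up through the map so that two standbys restored from one
# backup need not share a tablespace directory.  Helper names are
# hypothetical, for illustration only.

def parse_tablespace_mappings(specs):
    """Parse a list of 'olddir=newdir' strings into a dict."""
    mappings = {}
    for spec in specs:
        old, sep, new = spec.partition("=")
        if not sep or not old or not new:
            raise ValueError("invalid tablespace mapping: %r" % spec)
        mappings[old] = new
    return mappings

def remap_symlink_target(target, mappings):
    """Return the relocated symlink target, or the original if unmapped."""
    return mappings.get(target, target)

if __name__ == "__main__":
    maps = parse_tablespace_mappings(["/tmp/tblspc_master=/tmp/tblspc_standby"])
    print(remap_symlink_target("/tmp/tblspc_master", maps))
    print(remap_symlink_target("/tmp/other", maps))
```

With an "extra" parameter of this shape, a backup() routine could accept several such pairs, and an init_from_backup() counterpart could apply the map while re-creating each symlink under pg_tblspc, which is roughly what the thread converges on.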
[ { "msg_contents": "Is it too soon for a PG12 open items wiki page?\n\nI've got a few things I need to keep track of. Normally writing a\npatch and putting on the next open CF is a good move, but since the\nnext one is not for PG12, it seems like not the best place.\n\nAny objections to me making one now?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 8 Mar 2019 23:49:15 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Is it too soon for a PG12 open items wiki page?" }, { "msg_contents": "On Fri, Mar 8, 2019 at 11:49 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> Is it too soon for a PG12 open items wiki page?\n\nIt seems like a good timing to me.\n\n> I've got a few things I need to keep track of. Normally writing a\n> patch and putting on the next open CF is a good move, but since the\n> next one is not for PG12, it seems like not the best place.\n\nAgreed, I had the same thought a couple of days ago.\n\n", "msg_date": "Fri, 8 Mar 2019 11:51:48 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it too soon for a PG12 open items wiki page?" }, { "msg_contents": "On Fri, Mar 8, 2019 at 7:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Mar 8, 2019 at 11:49 AM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> >\n> > Is it too soon for a PG12 open items wiki page?\n>\n> It seems like a good timing to me.\n>\n> > I've got a few things I need to keep track of. 
Normally writing a\n> > patch and putting on the next open CF is a good move, but since the\n> > next one is not for PG12, it seems like not the best place.\n>\n> Agreed, I had the same thought a couple of days ago.\n\n+1\n\nThanks,\nAmit\n\n", "msg_date": "Fri, 8 Mar 2019 22:06:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it too soon for a PG12 open items wiki page?" }, { "msg_contents": "On Fri, Mar 08, 2019 at 10:06:43PM +0900, Amit Langote wrote:\n> On Fri, Mar 8, 2019 at 7:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> It seems like a good timing to me.\n\nIt would be a good timing as some issues are already showing up. And\nabracadabra:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n--\nMichael", "msg_date": "Sat, 9 Mar 2019 09:02:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it too soon for a PG12 open items wiki page?" }, { "msg_contents": "On Sat, 9 Mar 2019 at 13:03, Michael Paquier <michael@paquier.xyz> wrote:\n> It would be a good timing as some issues are already showing up. And\n> abracadabra:\n> https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\nThanks.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Sat, 9 Mar 2019 19:53:17 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Is it too soon for a PG12 open items wiki page?" }, { "msg_contents": "On Sat, Mar 9, 2019 at 1:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 08, 2019 at 10:06:43PM +0900, Amit Langote wrote:\n> > On Fri, Mar 8, 2019 at 7:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> It seems like a good timing to me.\n>\n> It would be a good timing as some issues are already showing up. 
And\n> abracadabra:\n> https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\nThank you Michael!\n\n", "msg_date": "Sat, 9 Mar 2019 09:21:07 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it too soon for a PG12 open items wiki page?" } ]
[ { "msg_contents": "Referring to this thread:\n\nhttps://dba.stackexchange.com/questions/231647/why-are-partial-postgresql-hash-indices-not-smaller-than-full-indices\n\nWhen a hash index is created on a populated table, it estimates the number\nof buckets to start out with based on the number of tuples returned\nby estimate_rel_size.  But this number ignores both the fact that NULLs are\nnot stored in hash indexes, and that partial indexes exist.  This can lead\nto much too large hash indexes.  Doing a re-index just repeats the logic,\nso doesn't fix anything.  Fill factor also can't fix it, as you are not\nallowed to increase that beyond 100.\n\nThis goes back to when the pre-sizing was implemented in 2008\n(c9a1cc694abef737548a2a).  It seems to be an oversight, rather than\nsomething that was considered.\n\nIs this a bug that should be fixed?  Or if getting a more accurate estimate\nis not possible or not worthwhile, add a code comment about that?\n\nCheers,\n\nJeff", "msg_date": "Fri, 8 Mar 2019 13:14:11 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Hash index initial size is too large given NULLs or partial indexes" }, { "msg_contents": "\n\nOn 3/8/19 7:14 PM, Jeff Janes wrote:\n> Referring to this thread:\n> \n> https://dba.stackexchange.com/questions/231647/why-are-partial-postgresql-hash-indices-not-smaller-than-full-indices\n> \n> When a hash index is created on a populated table, it estimates the\n> number of buckets to start out with based on the number of tuples\n> returned by estimate_rel_size.  But this number ignores both the fact\n> that NULLs are not stored in hash indexes, and that partial indexes\n> exist.  This can lead to much too large hash indexes.  Doing a re-index\n> just repeats the logic, so doesn't fix anything.  Fill factor also can't\n> fix it, as you are not allowed to increase that beyond 100.\n> \n\nHmmm :-(\n\n> This goes back to when the pre-sizing was implemented in 2008\n> (c9a1cc694abef737548a2a).  It seems to be an oversight, rather than\n> something that was considered.\n> \n> Is this a bug that should be fixed?  Or if getting a more accurate\n> estimate is not possible or not worthwhile, add a code comment about that?\n> \n\nI'd agree this smells like a bug (or perhaps two). The sizing probably\nshould consider both null_frac and selectivity of the index predicate.\nWhen those two are redundant (i.e. when there's IS NOT NULL condition on\nindexed column), this will result in under-estimate. 
That means the\nindex build will do an extra split, but that's probably better than\nhaving a permanently bloated index.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Fri, 8 Mar 2019 19:27:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hash index initial size is too large given NULLs or partial\n indexes" }, { "msg_contents": "On Fri, Mar 8, 2019 at 11:57 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On 3/8/19 7:14 PM, Jeff Janes wrote:\n>\n> > This goes back to when the pre-sizing was implemented in 2008\n> > (c9a1cc694abef737548a2a). It seems to be an oversight, rather than\n> > something that was considered.\n> >\n> > Is this a bug that should be fixed? Or if getting a more accurate\n> > estimate is not possible or not worthwhile, add a code comment about that?\n> >\n>\n> I'd agree this smells like a bug (or perhaps two). The sizing probably\n> should consider both null_frac and selectivity of the index predicate.\n>\n\nLike you guys, I also think this area needs improvement. I am not\nsure how easy it is to get the selectivity of the predicate in this\ncode path. If we see how we do it in set_plain_rel_size() during path\ngeneration in the planner, we can get some idea.\n\nAnother idea could be that we don't create the buckets till we know\nthe exact tuples returned by IndexBuildHeapScan. Basically, I think\nwe need to spool the tuples, create the appropriate buckets and then\ninsert the tuples. 
We might want to do this only when some index\npredicate is present.\n\nIf somebody is interested in doing the leg work, I can help in\nreviewing the patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Sat, 9 Mar 2019 09:55:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hash index initial size is too large given NULLs or partial\n indexes" } ]
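The sizing fix Tomas and Amit sketch above can be illustrated with simple arithmetic: scale the heap tuple estimate by (1 - null_frac) and by the partial-index predicate selectivity before picking the initial bucket count. The following is a hedged Python sketch, assuming a tuples-per-bucket target in the spirit of the hash ffactor; the helper names are illustrative, not the actual hashpage.c code.

```python
# Back-of-the-envelope model of the proposed fix: shrink the heap tuple
# estimate to the tuples the index will actually store, then choose a
# power-of-two initial bucket count for that figure.
import math

def estimated_index_tuples(heap_tuples, null_frac=0.0, pred_selectivity=1.0):
    """Tuples the index will hold: NULLs are not stored in hash indexes,
    and a partial index only covers rows satisfying its predicate."""
    return heap_tuples * (1.0 - null_frac) * pred_selectivity

def initial_buckets(index_tuples, tuples_per_bucket=10):
    """Pick a power-of-two bucket count sized for the expected tuples."""
    needed = max(1, math.ceil(index_tuples / tuples_per_bucket))
    return 1 << max(1, math.ceil(math.log2(needed)))

if __name__ == "__main__":
    # A 1M-row table whose indexed column is 90% NULL: the naive sizing
    # allocates many more initial buckets than the adjusted one.
    print(initial_buckets(estimated_index_tuples(1_000_000)))
    print(initial_buckets(estimated_index_tuples(1_000_000, null_frac=0.9)))
```

For the mostly-NULL column in the example above, the adjusted estimate asks for far fewer initial buckets than the naive one, which is exactly the over-sizing reported at the top of the thread.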
[ { "msg_contents": "Hi,\n\nTom introduced supported functions for calculation function's selectivity.\nStill I have similar idea to use supported function for calculation\nfunction's parameter's types and function return type.\n\nMotivation:\n\nReduce a necessity of overloading of functions. My motivation is related\nprimary to Orafce, but this feature should be helpful for anybody with\nsimilar goals. The function's overloading is great functionality but it is\nhard for maintenance.\n\nMy idea is to enhance the CREATE FUNCTION command to be able to do\n\nCREATE FUNCTION foo(\"any\")\nRETURNS \"any\" AS ...\nTYPEINFO foo_typeinfo\n\nCREATE FUNCTION decode(VARIADIC \"any\")\nRETURNS \"any\" AS ...\nTYPEINFO decode_typeinfo.\n\nThe typeinfo function returns a pointer to a structure with param types and\nresult type. Only function with \"any\" parameters or \"any\" result can use\na TYPEINFO support function. This functionality should not be allowed for\ncommon functions.\n\nThis functionality is limited just for C coders. But I expect so typical\napplication coder doesn't need it. It doesn't replace my proposal of\nintroduction other polymorphic type - now named \"commontype\" (can be named\ndifferently). The commontype is good enough solution for application\ncoders, developers.\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Sat, 9 Mar 2019 07:22:05 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: type info support functions for functions that use \"any\"\n type" }, { "msg_contents": "Hi\n\nso 9. 3. 2019 v 7:22 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi,\n>\n> Tom introduced supported functions for calculation function's selectivity.\n> Still I have similar idea to use supported function for calculation\n> function's parameter's types and function return type.\n>\n> Motivation:\n>\n> Reduce a necessity of overloading of functions. My motivation is related\n> primary to Orafce, but this feature should be helpful for anybody with\n> similar goals. 
The function's overloading is great functionality but it is\n> hard for maintenance.\n>\n> My idea is to enhance the CREATE FUNCTION command to be able to do\n>\n> CREATE FUNCTION foo(\"any\")\n> RETURNS \"any\" AS ...\n> TYPEINFO foo_typeinfo\n>\n> CREATE FUNCTION decode(VARIADIC \"any\")\n> RETURNS \"any\" AS ...\n> TYPEINFO decode_typeinfo.\n>\n> The typeinfo function returns a pointer to a structure with param types and\n> result type. Only function with \"any\" parameters or \"any\" result can use\n> a TYPEINFO support function. This functionality should not be allowed for\n> common functions.\n>\n> This functionality is limited just for C coders. But I expect so typical\n> application coder doesn't need it. It doesn't replace my proposal of\n> introduction other polymorphic type - now named \"commontype\" (can be named\n> differently). The commontype is good enough solution for application\n> coders, developers.\n>\n> Comments, notes?\n>\n\nhere is a patch\n\nI have no plan to push the decode function upstream. The patch contains it\njust as a demo.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>", "msg_date": "Tue, 2 Apr 2019 07:39:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> so 9. 3. 2019 v 7:22 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>> Tom introduced supported functions for calculation function's selectivity.\n>> Still I have similar idea to use supported function for calculation\n>> function's parameter's types and function return type.\n>> Motivation:\n>> Reduce a necessity of overloading of functions. My motivation is related\n>> primary to Orafce, but this feature should be helpful for anybody with\n>> similar goals. 
The function's overloading is great functionality but it is\n>> hard for maintenance.\n\n> here is a patch\n\nTBH, I don't like this proposal one bit. As far as I can see, the idea\nis to let a function's support function redefine the function's declared\nargument and result types on-the-fly according to no predetermined rules,\nand that seems to me like it's a recipe for disaster. How will anyone\nunderstand which function(s) are candidates to match a query, or why one\nparticular candidate got selected over others? It's already hard enough\nto understand the behavior of polymorphic functions in complex cases,\nand those are much more constrained than this would be.\n\nMoreover, I don't think you've even provided a compelling example\ncase. What's this doing that you couldn't do with existing polymorphic\ntypes or the anycompatibletype proposal?\n\nI also strongly suspect that this would break pieces of the system\nthat expect that the stored pg_proc.prorettype has something to do\nwith reality. At minimum, you'd need to fix a number of places you\nhaven't touched here that have their own knowledge of function type\nresolution, such as enforce_generic_type_consistency,\nresolve_polymorphic_argtypes, resolve_aggregate_transtype. Probably\nanyplace that treats polymorphics as being any sort of special case\nwould have to be taught to re-call the support function to find out\nwhat it should think the relevant types are.\n\n(I don't even want to think about what happens if the support function's\nbehavior changes between original parsing and these re-checking spots.)\n\nAnother thing that's very much less than compelling about your example\nis that your support function seems to be happy to throw errors\nif the argument types don't match what it's expecting. That seems\nquite unacceptable, since it would prevent the parser from moving on\nto consider other possibly-matching functions. 
Maybe that's just\nbecause it's a quick hack not a polished example, but it doesn't\nseem like a good precedent.\n\nIn short, I think the added complexity and bug potential outweigh\nany possible gain from this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2019 16:03:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "I wrote:\n> TBH, I don't like this proposal one bit. As far as I can see, the idea\n> is to let a function's support function redefine the function's declared\n> argument and result types on-the-fly according to no predetermined rules,\n> and that seems to me like it's a recipe for disaster. How will anyone\n> understand which function(s) are candidates to match a query, or why one\n> particular candidate got selected over others? It's already hard enough\n> to understand the behavior of polymorphic functions in complex cases,\n> and those are much more constrained than this would be.\n\nAfter thinking about this a bit more, it seems like you could avoid\na lot of problems if you restricted what the support function call\ndoes to be potentially replacing the result type of a function\ndeclared to return ANY with some more-specific type (computed from\nexamination of the actual arguments). That would make it act much\nmore like a traditional polymorphic function. It'd remove the issues\nabout interactions among multiple potentially-matching functions,\nsince we'd only call a single support function for an already-identified\ntarget function.\n\nYou'd still need to touch everyplace that knows about polymorphic\ntype resolution, since this would essentially be another form of\npolymorphic function. And I'm still very dubious that it's worth\nthe trouble. 
But it would be a lot more controllable than the\nproposal as it stands.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2019 16:53:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "pá 26. 7. 2019 v 22:03 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > so 9. 3. 2019 v 7:22 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> > napsal:\n> >> Tom introduced supported functions for calculation function's\n> selectivity.\n> >> Still I have similar idea to use supported function for calculation\n> >> function's parameter's types and function return type.\n> >> Motivation:\n> >> Reduce a necessity of overloading of functions. My motivation is related\n> >> primary to Orafce, but this feature should be helpful for anybody with\n> >> similar goals. The function's overloading is great functionality but it\n> is\n> >> hard for maintenance.\n>\n> > here is a patch\n>\n> TBH, I don't like this proposal one bit. As far as I can see, the idea\n> is to let a function's support function redefine the function's declared\n> argument and result types on-the-fly according to no predetermined rules,\n> and that seems to me like it's a recipe for disaster. How will anyone\n> understand which function(s) are candidates to match a query, or why one\n> particular candidate got selected over others? It's already hard enough\n> to understand the behavior of polymorphic functions in complex cases,\n> and those are much more constrained than this would be.\n>\n\nI quietly expect so this feature will be used without combination with\noverloading. But the combination of support function and overloading can be\nexplicitly disabled - (in runtime for simple implementation).\n\n\n> Moreover, I don't think you've even provided a compelling example\n> case. 
What's this doing that you couldn't do with existing polymorphic\n> types or the anycompatibletype proposal?\n>\n\nThere are two cases of usage\n\na) combination of polymorphic types - fx(t1, t1, t2, t1, t2, t1, t2, ...)\nb) forcing types fx(t1, t2) t1 force explicit cast for t2 to t1\nc) optimization of repeated call of functions like fx(\"any\", \"any\", \"any\",\n...)\n\nIt is pretty hard to create simple non-procedural language to describe\nsyntaxes like @a. But with procedural code it is easy.\n\n@c is special case, that we can do already. But we cannot to push casting\noutside function, and inside function, there is a overhead with casting.\nWith implementing type case inside function, then we increase startup time\nand it is overhead for function started by plpgsql runtime.\n\n\n> I also strongly suspect that this would break pieces of the system\n> that expect that the stored pg_proc.prorettype has something to do\n> with reality. At minimum, you'd need to fix a number of places you\n> haven't touched here that have their own knowledge of function type\n> resolution, such as enforce_generic_type_consistency,\n> resolve_polymorphic_argtypes, resolve_aggregate_transtype. Probably\n> anyplace that treats polymorphics as being any sort of special case\n> would have to be taught to re-call the support function to find out\n> what it should think the relevant types are.\n>\n> (I don't even want to think about what happens if the support function's\n> behavior changes between original parsing and these re-checking spots.)\n>\n\nThe helper function should be immutable - what I know, is not possible to\nchange data types dynamically, so repeated call should not be effective,\nbut should to produce same result, so it should not be a problem.\n\n>\n> Another thing that's very much less than compelling about your example\n> is that your support function seems to be happy to throw errors\n> if the argument types don't match what it's expecting. 
That seems\n> quite unacceptable, since it would prevent the parser from moving on\n> to consider other possibly-matching functions. Maybe that's just\n> because it's a quick hack not a polished example, but it doesn't\n> seem like a good precedent.\n>\n\nIn this case it is a deliberate decision, because I don't expect overloading.\n\nI understand your objections about mixing parser helper functions and\noverloading. Currently it is pretty hard to understand what behavior to\nexpect when somebody overloads a function with a polymorphic function.\n\nWith a parser helper function the overloading is not necessary and can be\ndisabled.\n\n\n> In short, I think the added complexity and bug potential outweigh\n> any possible gain from this.\n>\n> regards, tom lane\n>\n", "msg_date": "Sat, 27 Jul 2019 07:28:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "On Fri, Jul 26, 2019 at 10:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > TBH, I don't like this proposal one bit. 
As far as I can see, the idea\n> > is to let a function's support function redefine the function's declared\n> > argument and result types on-the-fly according to no predetermined rules,\n> > and that seems to me like it's a recipe for disaster. How will anyone\n> > understand which function(s) are candidates to match a query, or why one\n> > particular candidate got selected over others? It's already hard enough\n> > to understand the behavior of polymorphic functions in complex cases,\n> > and those are much more constrained than this would be.\n>\n> After thinking about this a bit more, it seems like you could avoid\n> a lot of problems if you restricted what the support function call\n> does to be potentially replacing the result type of a function\n> declared to return ANY with some more-specific type (computed from\n> examination of the actual arguments). That would make it act much\n> more like a traditional polymorphic function. It'd remove the issues\n> about interactions among multiple potentially-matching functions,\n> since we'd only call a single support function for an already-identified\n> target function.\n>\n\nI am not sure if I understand well - so let me repeat it in my own words.\n\nSo calculation of the result type (replacing ANY by some specific type) can be ok?\n\nI am able to do it if there is an agreement.\n\nI proposed the possibility to specify argument types as an optimization, as\nprotection against repeated type identification and casting (that can be\ndone at planning time, and should not be repeated).\n\nThis feature should be used only for functions with signatures like fx(\"any\", \"any\",\n..) returns \"any\", so it is very probable that at execution time you have\nto do some work with parameter type identification.\n\nBut if we find an agreement just on the return type, then it is a good\nenough solution. 
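For illustration, the return-type resolution I have in mind for a DECODE-like signature can be sketched like this (Python pseudologic with hypothetical names, not the patch's actual C code):

```python
def decode_result_type(arg_types):
    # arg_types: declared types of (expr, search1, result1, search2, result2, ..., [default])
    rest = arg_types[1:]
    # result types sit at positions 1, 3, 5, ... after the expr argument
    results = [rest[i] for i in range(1, len(rest), 2)]
    if len(rest) % 2 == 1:
        # an odd-length tail means a trailing DEFAULT argument
        results.append(rest[-1])
    # naive 'common type' rule: all result types must agree; a real
    # support function would consult the cast rules to find a common type
    if len(set(results)) != 1:
        raise TypeError('cannot determine a common result type')
    return results[0]

# decode_result_type(['integer', 'integer', 'text', 'integer', 'text', 'text'])  # -> 'text'
```

The real support function would of course work with type OIDs and the cast machinery instead of requiring exact matches.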
The practical overhead of the type cache inside the function\nshould not be dramatic.\n\n\n\n> You'd still need to touch everyplace that knows about polymorphic\n> type resolution, since this would essentially be another form of\n> polymorphic function. And I'm still very dubious that it's worth\n> the trouble. But it would be a lot more controllable than the\n> proposal as it stands.\n>\n\nok\n\n\n> regards, tom lane\n>\n", "msg_date": "Sat, 27 Jul 2019 07:44:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "On Sat, Jul 27, 2019 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 26. 7. 2019 v 22:53 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> I wrote:\n>> > TBH, I don't like this proposal one bit. As far as I can see, the idea\n>> > is to let a function's support function redefine the function's declared\n>> > argument and result types on-the-fly according to no predetermined rules,\n>> > and that seems to me like it's a recipe for disaster. How will anyone\n>> > understand which function(s) are candidates to match a query, or why one\n>> > particular candidate got selected over others? 
It's already hard enough\n>> > to understand the behavior of polymorphic functions in complex cases,\n>> > and those are much more constrained than this would be.\n>>\n>> After thinking about this a bit more, it seems like you could avoid\n>> a lot of problems if you restricted what the support function call\n>> does to be potentially replacing the result type of a function\n>> declared to return ANY with some more-specific type (computed from\n>> examination of the actual arguments). That would make it act much\n>> more like a traditional polymorphic function. It'd remove the issues\n>> about interactions among multiple potentially-matching functions,\n>> since we'd only call a single support function for an already-identified\n>> target function.\n>\n>\n> I am not sure if I understand well - so I repeat it with my words.\n>\n> So calculation of result type (replace ANY by some specific) can be ok?\n>\n> I am able to do it if there will be a agreement.\n...\n\nHi Pavel,\n\nI see that this is an active project with an ongoing discussion, but\nwe have run out of July so I have moved this to the September CF and\nset it to \"Waiting on Author\".\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 21:01:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "čt 1. 8. 2019 v 11:01 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sat, Jul 27, 2019 at 5:45 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > pá 26. 7. 2019 v 22:53 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> I wrote:\n> >> > TBH, I don't like this proposal one bit. 
As far as I can see, the\n> idea\n> >> > is to let a function's support function redefine the function's\n> declared\n> >> > argument and result types on-the-fly according to no predetermined\n> rules,\n> >> > and that seems to me like it's a recipe for disaster. How will anyone\n> >> > understand which function(s) are candidates to match a query, or why\n> one\n> >> > particular candidate got selected over others? It's already hard\n> enough\n> >> > to understand the behavior of polymorphic functions in complex cases,\n> >> > and those are much more constrained than this would be.\n> >>\n> >> After thinking about this a bit more, it seems like you could avoid\n> >> a lot of problems if you restricted what the support function call\n> >> does to be potentially replacing the result type of a function\n> >> declared to return ANY with some more-specific type (computed from\n> >> examination of the actual arguments). That would make it act much\n> >> more like a traditional polymorphic function. It'd remove the issues\n> >> about interactions among multiple potentially-matching functions,\n> >> since we'd only call a single support function for an already-identified\n> >> target function.\n> >\n> >\n> > I am not sure if I understand well - so I repeat it with my words.\n> >\n> > So calculation of result type (replace ANY by some specific) can be ok?\n> >\n> > I am able to do it if there will be a agreement.\n> ...\n>\n> Hi Pavel,\n>\n> I see that this is an active project with an ongoing discussion, but\n> we have run out of July so I have moved this to the September CF and\n> set it to \"Waiting on Author\".\n>\n\nsure\n\nPavel\n\n\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n", "msg_date": "Wed, 7 Aug 2019 08:01:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Hi\n\npá 26. 7. 
2019 v 22:53 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I wrote:\n> > TBH, I don't like this proposal one bit. As far as I can see, the idea\n> > is to let a function's support function redefine the function's declared\n> > argument and result types on-the-fly according to no predetermined rules,\n> > and that seems to me like it's a recipe for disaster. How will anyone\n> > understand which function(s) are candidates to match a query, or why one\n> > particular candidate got selected over others? It's already hard enough\n> > to understand the behavior of polymorphic functions in complex cases,\n> > and those are much more constrained than this would be.\n>\n> After thinking about this a bit more, it seems like you could avoid\n> a lot of problems if you restricted what the support function call\n> does to be potentially replacing the result type of a function\n> declared to return ANY with some more-specific type (computed from\n> examination of the actual arguments). That would make it act much\n> more like a traditional polymorphic function. It'd remove the issues\n> about interactions among multiple potentially-matching functions,\n> since we'd only call a single support function for an already-identified\n> target function.\n>\n\nI am sending a reduced version of the previous patch. Now, the support function is\nused just for replacement of the returned type \"any\" by some other type.\n\nThere are two patches - a shorter one with only the support function, and a larger\none with a demo \"decode\" function. I don't expect the \"decode\" extension to be\npushed to master. 
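For context, the demo targets Oracle-style DECODE semantics, which can be sketched like this (an illustrative Python model, not the extension's C code):

```python
def decode(expr, *args):
    # Oracle-style DECODE: pairs of (search, result), optional trailing default.
    # An odd number of arguments after expr means the last one is the default.
    if len(args) % 2 == 1:
        pairs, default = args[:-1], args[-1]
    else:
        pairs, default = args, None
    for i in range(0, len(pairs), 2):
        if pairs[i] == expr:
            return pairs[i + 1]
    return default

# decode(2, 1, 'open', 2, 'closed', 'unknown')  # -> 'closed'
```

The hard part the patch addresses is computing the SQL result type for such a variadic \"any\" signature at parse time, without declaring dozens of overloads.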
It is just a demo of usage.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Wed, 7 Aug 2019 08:11:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Hi\n\nrebase\n\nPavel", "msg_date": "Fri, 16 Aug 2019 08:41:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Hi\n\nOn Fri, Aug 16, 2019 at 8:41 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> rebase\n>\n\nanother rebase\n\nRegards\n\nPavel\n\n\n> Pavel\n>", "msg_date": "Thu, 28 Nov 2019 06:29:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> [ parser-support-function-with-demo-20191128.patch ]\n\nTBH, I'm still not convinced that this is a good idea. Restricting\nthe support function to only change the function's return type is\nsafer than the original proposal, but it's still not terribly safe.\nIf you change the support function's algorithm in any way, how do\nyou know whether you've broken existing stored queries? If the\nsupport function consults external resources to make its choice\n(perhaps checking the existence of a cast), where could we record\nthat the query depends on the existence of that cast? There'd be\nno visible trace of that in the query parsetree.\n\nI'm also still not convinced that this idea allows doing anything\nthat can't be done just as well with polymorphism. 
It would be a\nreally bad idea for the support function to be examining the values\nof the arguments (else what happens when they're not constants?).\nSo all you can do is look at their types, and then it seems like\nthe things you can usefully do are pretty much like polymorphism,\ni.e. select some one of the input types, or a related type such\nas an array type or element type. If there are gaps in what you\ncan express with polymorphism, I'd much rather spend effort on\nimproving that facility than in adding something that is only\naccessible to advanced C coders. (Yes, I know I've been slacking\non reviewing [1].)\n\nLastly, I still think that this patch doesn't begin to address\nall the places that would have to know about the feature. There's\na lot of places that know about polymorphism --- if this is\npolymorphism on steroids, which it is, then why don't all of those\nplaces need to be touched?\n\nOn the whole I think we should reject this idea.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/26/1911/\n\n\n", "msg_date": "Tue, 14 Jan 2020 16:09:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "út 14. 1. 2020 v 22:09 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ parser-support-function-with-demo-20191128.patch ]\n>\n> TBH, I'm still not convinced that this is a good idea. Restricting\n> the support function to only change the function's return type is\n> safer than the original proposal, but it's still not terribly safe.\n> If you change the support function's algorithm in any way, how do\n> you know whether you've broken existing stored queries? 
If the\n> support function consults external resources to make its choice\n> (perhaps checking the existence of a cast), where could we record\n> that the query depends on the existence of that cast? There'd be\n> no visible trace of that in the query parsetree.\n>\n\nThis risk is real and cannot be simply solved without more complications.\n\nCould a solution be to limit and enforce this functionality only for extensions\nthat are initialized from shared_preload_libraries or\nlocal_preload_libraries?\n\n\n> I'm also still not convinced that this idea allows doing anything\n> that can't be done just as well with polymorphism. It would be a\n> really bad idea for the support function to be examining the values\n> of the arguments (else what happens when they're not constants?).\n> So all you can do is look at their types, and then it seems like\n> the things you can usefully do are pretty much like polymorphism,\n> i.e. select some one of the input types, or a related type such\n> as an array type or element type. If there are gaps in what you\n> can express with polymorphism, I'd much rather spend effort on\n> improving that facility than in adding something that is only\n> accessible to advanced C coders. (Yes, I know I've been slacking\n> on reviewing [1].)\n>\n\nFor my purpose the critical information is the type. I don't need to work with\nconstants, but I can imagine that some API to work with constant values\ncould be nice.\nYes, I can solve a lot of things with patch [1], but not all, and this patch is\nshorter, and almost trivial.\n\n\n> Lastly, I still think that this patch doesn't begin to address\n> all the places that would have to know about the feature. 
There's\n> a lot of places that know about polymorphism --- if this is\n> polymorphism on steroids, which it is, then why don't all of those\n> places need to be touched?\n>\n\nI am sorry, I don't understand the last sentence.\n\n\n> On the whole I think we should reject this idea.\n>\n> regards, tom lane\n>\n> [1] https://commitfest.postgresql.org/26/1911/\n>\n", "msg_date": "Wed, 15 Jan 2020 11:04:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "On Wed, Jan 15, 2020 at 11:04 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 14. 1. 2020 v 22:09 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > [ parser-support-function-with-demo-20191128.patch ]\n>>\n>> TBH, I'm still not convinced that this is a good idea. Restricting\n>> the support function to only change the function's return type is\n>> safer than the original proposal, but it's still not terribly safe.\n>> If you change the support function's algorithm in any way, how do\n>> you know whether you've broken existing stored queries? 
If the\n>> support function consults external resources to make its choice\n>> (perhaps checking the existence of a cast), where could we record\n>> that the query depends on the existence of that cast? There'd be\n>> no visible trace of that in the query parsetree.\n>>\n>\n> This risk is real and cannot be simply solved without more complications.\n>\n> Can be solution to limit and enforce this functionality only for\n> extensions that be initialized from shared_preload_libraries or\n> local_preload_libraries?\n>\n\nWhen we check that the used function comes from a dynamically loaded extension,\nwe can raise an error. It's not too great for upgrades, but I expect that an\nupgrade of this kind of extension is very similar to upgrading Postgres itself -\nand the restarts can be done together.\n\n\n>\n>> I'm also still not convinced that this idea allows doing anything\n>> that can't be done just as well with polymorphism. It would be a\n>> really bad idea for the support function to be examining the values\n>> of the arguments (else what happens when they're not constants?).\n>> So all you can do is look at their types, and then it seems like\n>> the things you can usefully do are pretty much like polymorphism,\n>> i.e. select some one of the input types, or a related type such\n>> as an array type or element type. If there are gaps in what you\n>> can express with polymorphism, I'd much rather spend effort on\n>> improving that facility than in adding something that is only\n>> accessible to advanced C coders. (Yes, I know I've been slacking\n>> on reviewing [1].)\n>>\n>\n> For my purpose critical information is type. 
I don't need to work with\n> constant, but I can imagine, so some API can be nice to work with constant\n> value.\n> Yes, I can solve lot of things by patch [1], but not all, and this patch\n> shorter, and almost trivial.\n>\n\nAll this discussion is motivated by my work on the Orafce extension -\nhttps://github.com/orafce/orafce\n\nUnfortunately the implementation of the \"decode\" functions is not possible with\npatch [1]. Now I have 55 instances of the \"decode\" function and I am sure I\ndon't cover all of them.\n\nWith this patch (polymorphism on steroids :)), I can do it very simply\nand quickly - this function and other similar ones.\n\nThe patch was very simple, so I think, maybe wrongly, that it is an acceptable\nway.\n\nOur polymorphism is strong, and if I design code natively for Postgres,\nthen it is perfect. But it doesn't allow implementing some simple functions\nthat are used in other databases. With this small patch I can cover almost\nall situations - and very simply.\n\nI don't want to increase the complexity of the polymorphism rules any more -\n[1] is the maximum that we can implement with acceptable costs, but this generic\nsystem is sometimes not enough.\n\nBut I invite any design for how this problem can be solved.\n\nAny ideas?\n\n\n>\n>\n>> Lastly, I still think that this patch doesn't begin to address\n>> all the places that would have to know about the feature. There's\n>> a lot of places that know about polymorphism --- if this is\n>> polymorphism on steroids, which it is, then why don't all of those\n>> places need to be touched?\n>>\n>\n> I am sorry, I don't understand last sentence?\n>\n>\n>> On the whole I think we should reject this idea.\n>>\n>> regards, tom lane\n>>\n>> [1] https://commitfest.postgresql.org/26/1911/\n>>\n>\n", "msg_date": "Thu, 16 Jan 2020 08:57:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Hi\n\nOn Tue, Jan 14, 2020 at 10:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ parser-support-function-with-demo-20191128.patch ]\n>\n> TBH, I'm still not convinced that this is a good idea. Restricting\n> the support function to only change the function's return type is\n> safer than the original proposal, but it's still not terribly safe.\n> If you change the support function's algorithm in any way, how do\n> you know whether you've broken existing stored queries? If the\n> support function consults external resources to make its choice\n> (perhaps checking the existence of a cast), where could we record\n> that the query depends on the existence of that cast? 
There'd be\n> no visible trace of that in the query parsetree.\n>\n>\nI reread all related mails and I think so it should be safe - or there is\nsame risk like using any C extensions for functions or hooks.\n\nI use a example from demo\n\n+CREATE FUNCTION decode_support(internal)\n+RETURNS internal\n+AS 'MODULE_PATHNAME'\n+LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;\n+\n+--\n+-- decode function - example of function that returns \"any\" type\n+--\n+CREATE FUNCTION decode(variadic \"any\")\n+RETURNS \"any\"\n+AS 'MODULE_PATHNAME'\n+LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE SUPPORT decode_support;\n\nThe support function (and implementation) is joined with \"decode\" function.\nSo I cannot to change the behave of support function without reloading a\nextension, and it needs typically session reconnect.\n\n\n\n> I'm also still not convinced that this idea allows doing anything\n> that can't be done just as well with polymorphism. It would be a\n> really bad idea for the support function to be examining the values\n> of the arguments (else what happens when they're not constants?).\n> So all you can do is look at their types, and then it seems like\n> the things you can usefully do are pretty much like polymorphism,\n> i.e. select some one of the input types, or a related type such\n> as an array type or element type. If there are gaps in what you\n> can express with polymorphism, I'd much rather spend effort on\n> improving that facility than in adding something that is only\n> accessible to advanced C coders. (Yes, I know I've been slacking\n> on reviewing [1].)\n>\n\nThe design is based not on values, just on types. I don't need to know a\nvalue, I need to know a type.\n\nCurrently our polymorphism is not enough - and is necessary to use \"any\"\ndatatype. This patch just add a possibility to use \"any\" as return type.\n\nI spent on this topic lot of time and one result is patch [1]. This patch\nincrease situation lot of, but cannot to cover all. 
There are strong limits\nfor variadic usage.\n\nThis patch is really not about values, it is about types - and about more\npossibility (and more elasticity) to control result type.\n\n\n> Lastly, I still think that this patch doesn't begin to address\n> all the places that would have to know about the feature. There's\n> a lot of places that know about polymorphism --- if this is\n> polymorphism on steroids, which it is, then why don't all of those\n> places need to be touched?\n>\n\nIt is working with \"any\" type, and then it can be very small, because the\nall work with this type is moved to extension.\n\n\n> On the whole I think we should reject this idea.\n>\n\nI will accept any your opinion. Please, try to understand to me as Orafce\ndeveloper, maintainer. I would to clean this extension, and current state\nof polymorphism (with patch [1]) doesn't allow it.\n\nI am open to any proposals, ideas.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n> [1] https://commitfest.postgresql.org/26/1911/\n>\n
", "msg_date": "Sun, 26 Jan 2020 16:33:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "> On 26 Jan 2020, at 16:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I reread all related mails and I think so it should be safe - or there is same risk like using any C extensions for functions or hooks.\n\nThis patch has been bumped in CFs for the past year, with the thread stalled\nand the last review comment being in support of rejection. 
Tom, do you still\nfeel it should be rejected in light of Pavel's latest posts?\n\ncheers ./daniel\n\n", "msg_date": "Fri, 31 Jul 2020 00:15:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> This patch has been bumped in CFs for the past year, with the thread stalled\n> and the last review comment being in support of rejection. Tom, do you still\n> feel it should be rejected in light of Pavel's latest posts?\n\nI have seen no convincing response to the concerns I raised in my\nlast message in the thread [1], to wit that\n\n1. I think the \"flexibility\" of letting a support function resolve the\noutput type in some unspecified way is mostly illusory, because if it\ndoesn't do it in a way that's morally equivalent to polymorphism, it's\ndoing it wrong. Also, I'm not that excited about improving polymorphism\nin a way that is only accessible with specialized C code. The example of\nOracle-like DECODE() could be handled about as well if we had a second set\nof anycompatible-style polymorphic types, that is something like\n\ndecode(expr anycompatible,\n search1 anycompatible, result1 anycompatible2,\n search2 anycompatible, result2 anycompatible2,\n search3 anycompatible, result3 anycompatible2,\n ...\n) returns anycompatible2;\n\nAdmittedly, you'd need to write a separate declaration for each number of\narguments you wanted to support, but they could all point at the same C\nfunction --- which'd be a lot simpler than in this patch, since it would\nnot need to deal with any type coercions, only comparisons.\n\nI also argue that to the extent that the support function is reinventing\npolymorphism internally, it's going to be inferior to the parser's\nversion. 
As an example, with Pavel's sample implementation, if a\nparticular query needs a coercion from type X to type Y, that's nowhere\nvisible in the parse tree. So you could drop the cast without being told\nthat view so-and-so depends on it, leading to a run-time failure next time\nyou try to use that view. Doing the same thing with normal polymorphism,\nthe X-to-Y cast function would be used in the parse tree and so we'd know\nabout the dependency.\n\n2. I have no faith that the proposed implementation is correct or\ncomplete. As I complained earlier, a lot of places have special-case\nhandling for polymorphism, and it seems like every one of them would\nneed to know about this feature too. That is, to the extent that\nthis patch's footprint is smaller than commit 24e2885ee -- which it\nis, by a lot -- I think those are bugs of omission. It will not work\nto have a situation where some parts of the backend resolve a function's\nresult type as one thing and others resolve it as something else thanks to\nfailure to account for this new feature. As a concrete example, it looks\nlike we'd fail pretty hard if someone tried to use this facility in an\naggregate support function.\n\nSo my opinion is still what it was in January.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/31501.1579036195%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 30 Jul 2020 20:32:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "pá 31. 7. 2020 v 2:32 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > This patch has been bumped in CFs for the past year, with the thread\n> stalled\n> > and the last review comment being in support of rejection. 
Tom, do you\n> still\n> > feel it should be rejected in light of Pavel's latest posts?\n>\n> I have seen no convincing response to the concerns I raised in my\n> last message in the thread [1], to wit that\n>\n> 1. I think the \"flexibility\" of letting a support function resolve the\n> output type in some unspecified way is mostly illusory, because if it\n> doesn't do it in a way that's morally equivalent to polymorphism, it's\n> doing it wrong. Also, I'm not that excited about improving polymorphism\n> in a way that is only accessible with specialized C code. The example of\n> Oracle-like DECODE() could be handled about as well if we had a second set\n> of anycompatible-style polymorphic types, that is something like\n>\n> decode(expr anycompatible,\n> search1 anycompatible, result1 anycompatible2,\n> search2 anycompatible, result2 anycompatible2,\n> search3 anycompatible, result3 anycompatible2,\n> ...\n> ) returns anycompatible2;\n>\n\nWith this proposal I can write a good enough implementation of the \"decode\"\nfunction, although it cannot be 100% compatible. It can cover probably\nalmost all use cases.\n\nBut this design doesn't help with ANSI compatible LEAD, LAG functions.\nThere is a different strategy - optional argument is implicitly casted to\ntype of first argument.\n\n\n> Admittedly, you'd need to write a separate declaration for each number of\n> arguments you wanted to support, but they could all point at the same C\n> function --- which'd be a lot simpler than in this patch, since it would\n> not need to deal with any type coercions, only comparisons.\n>\n\nThis patch is reduced - first version allowed similar argument list\ntransformations like parser does with COALESCE or CASE expressions.\n\nWhen arguments are transformed early, then the body of function can be thin.\n\n\n> I also argue that to the extent that the support function is reinventing\n> polymorphism internally, it's going to be inferior to the parser's\n> version. 
As an example, with Pavel's sample implementation, if a\n> particular query needs a coercion from type X to type Y, that's nowhere\n> visible in the parse tree. So you could drop the cast without being told\n> that view so-and-so depends on it, leading to a run-time failure next time\n> you try to use that view. Doing the same thing with normal polymorphism,\n> the X-to-Y cast function would be used in the parse tree and so we'd know\n> about the dependency.\n>\n\nIt is by reduced design. First implementation did a transformation of the\nargument list too. Then the cast was visible in the argument list.\n\nIt is true, so this patch implements an alternative way to polymorphic\ntypes. I don't think it is necessarily bad (and this functionality is\navailable only for C language). We do it for COALESCE, CASE, GREATEST,\nLEAST functions and minimally due lazy evaluation we don't try to rewrite\nthese functionality to usual functions. I would not increase the complexity\nof Postgres type systems or introduce some specific features used just by\nme. When people start to write an application on Postgres, then the current\nsystem is almost good enough. But a different situation is when a\nsignificant factor is compatibility - this is a topic that I have to solve\nin Orafce or issue with LAG, LEAD functions. Introducing a special\npolymorphic type for some specific behavior is hard and maybe unacceptable\nwork. For me (as extension author) it can be nice to have some possibility\nto modify a parse tree - without useless overhead. With this possibility,\nsome functions can be lighter and faster - because casting will be outside\nthe function.\n\nRegards\n\nPavel\n\n\n\n> 2. I have no faith that the proposed implementation is correct or\n> complete. As I complained earlier, a lot of places have special-case\n> handling for polymorphism, and it seems like every one of them would\n> need to know about this feature too. 
That is, to the extent that\n> this patch's footprint is smaller than commit 24e2885ee -- which it\n> is, by a lot -- I think those are bugs of omission. It will not work\n> to have a situation where some parts of the backend resolve a function's\n> result type as one thing and others resolve it as something else thanks to\n> failure to account for this new feature. As a concrete example, it looks\n> like we'd fail pretty hard if someone tried to use this facility in an\n> aggregate support function.\n>\n\n\n\n>\n> So my opinion is still what it was in January.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/31501.1579036195%40sss.pgh.pa.us\n>\n
", "msg_date": "Fri, 31 Jul 2020 18:28:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "> pá 31. 7. 2020 v 2:32 odesílatel Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> napsal:\n> \n> So my opinion is still what it was in January.\n\nSince there does not appear to be support for this patch and it has not \nattracted any new review or comment in the last year I'm planning to \nclose it on MAR 8 unless there are arguments to the contrary.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 3 Mar 2021 12:12:31 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" }, { "msg_contents": "st 3. 3. 2021 v 18:12 odesílatel David Steele <david@pgmasters.net> napsal:\n\n> > pá 31. 7. 2020 v 2:32 odesílatel Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> napsal:\n> >\n> > So my opinion is still what it was in January.\n>\n> Since there does not appear to be support for this patch and it has not\n> attracted any new review or comment in the last year I'm planning to\n> close it on MAR 8 unless there are arguments to the contrary.\n>\n\nThis feature is very specific. The benefit is mostly for authors of\nextensions that try to emulate some other RDBMS. For me - it can reduce\nthousands of lines of Orafce source code.\n\nBut Tom has a strong negative option on this feature. But on second hand, I\nam accepting so this feature is specific and an benefit is specific subset\nof users. So there are no strong reasons for hard pushing from me. 
There\nare other similar proposals, so we will see.\n\nRegards\n\nPavel\n\n\n\n\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n\n", "msg_date": "Wed, 3 Mar 2021 18:25:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: type info support functions for functions that use\n \"any\" type" } ]
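The fixed-arity DECODE() alternative that Tom sketches in this thread can be approximated with the polymorphic types that exist today. The following is a hypothetical illustration only, not code from the patch under discussion: it uses `anycompatible` (available in PostgreSQL 13 and later) for the expression/search family, and substitutes `anyelement` for the proposed-but-nonexistent `anycompatible2` result family, which means all result arguments must share one exact type.

```sql
-- Hypothetical sketch of the ordinary-polymorphism alternative from the
-- thread.  "anycompatible2" does not exist in PostgreSQL, so "anyelement"
-- stands in for the result family here (no coercion among results).
CREATE FUNCTION decode(expr anycompatible,
                       search1 anycompatible, result1 anyelement,
                       search2 anycompatible, result2 anyelement,
                       dflt anyelement)
RETURNS anyelement
LANGUAGE sql
AS $$
  SELECT CASE
           WHEN expr IS NOT DISTINCT FROM search1 THEN result1
           WHEN expr IS NOT DISTINCT FROM search2 THEN result2
           ELSE dflt
         END
$$;

SELECT decode('b', 'a', 1, 'b', 2, 0);  -- returns 2
```

As Tom notes, one such declaration would be needed per argument count; `IS NOT DISTINCT FROM` is used because Oracle's DECODE treats NULL as matching NULL.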
[ { "msg_contents": "for example:\nbegin;\ndeclare cur cursor for select * from t;\ninsert into t2 values(...);\nfetch next cur;\ncommit;\n\n// after this, I can't fetch cur any more.\n\nMy question are:\n1. Is this must in principle? or it is easy to implement as this in PG?\n2. Any bad thing would happen if I keep the named portal (for the cursor)\navailable even the transaction is commit, so that I can fetch the cursor\nafter the transaction is committed?\n\nThanks\n\n", "msg_date": "Sun, 10 Mar 2019 16:14:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "what makes the PL cursor life-cycle must be in the same transaction?" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n\n> for example:\n> begin;\n> declare cur cursor for select * from t;\n> insert into t2 values(...);\n> fetch next cur;\n> commit;\n>\n> // after this, I can't fetch cur any more.\n>\n> My question are:\n> 1. Is this must in principle? or it is easy to implement as this in PG?\n\nIt is already implemented. If you declare the cursor WITH HOLD, you can\nkeep using it after the transaction commits.\n\n> 2. 
Any bad thing would happen if I keep the named portal (for the cursor)\n> available even the transaction is commit, so that I can fetch the cursor\n> after the transaction is committed?\n\nAccording to the documentation\n(https://www.postgresql.org/docs/current/sql-declare.html):\n\n| In the current implementation, the rows represented by a held cursor\n| are copied into a temporary file or memory area so that they remain\n| available for subsequent transactions. \n\n> Thanks\n\n- ilmari\n-- \n\"I use RMS as a guide in the same way that a boat captain would use\n a lighthouse. It's good to know where it is, but you generally\n don't want to find yourself in the same spot.\" - Tollef Fog Heen\n\n", "msg_date": "Sun, 10 Mar 2019 23:07:53 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: what makes the PL cursor life-cycle must be in the same\n transaction?" }, { "msg_contents": "DECLARE cur CURSOR with hold FOR SELECT * FROM t;\n\nthe \"with hold\" is designed for this purpose. sorry for this\ninterruption.\n\nOn Sun, Mar 10, 2019 at 4:14 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> for example:\n> begin;\n> declare cur cursor for select * from t;\n> insert into t2 values(...);\n> fetch next cur;\n> commit;\n>\n> // after this, I can't fetch cur any more.\n>\n> My question are:\n> 1. Is this must in principle? or it is easy to implement as this in PG?\n> 2. Any bad thing would happen if I keep the named portal (for the cursor)\n> available even the transaction is commit, so that I can fetch the cursor\n> after the transaction is committed?\n>\n> Thanks\n>\n
", "msg_date": "Mon, 11 Mar 2019 08:54:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: what makes the PL cursor life-cycle must be in the same\n transaction?" } ]
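The WITH HOLD behaviour that resolves this thread can be put together into one minimal sketch. It reuses the hypothetical tables `t` and `t2` from the question, and relies on the DECLARE documentation quoted in the thread: the held cursor's remaining rows are materialized at COMMIT, so the portal outlives the transaction.

```sql
-- Minimal sketch, assuming tables t and t2 exist as in the original example.
BEGIN;
DECLARE cur CURSOR WITH HOLD FOR SELECT * FROM t;
INSERT INTO t2 VALUES (1);
FETCH NEXT FROM cur;
COMMIT;

FETCH NEXT FROM cur;  -- still valid: the result set was materialized at COMMIT
CLOSE cur;            -- a held cursor persists until closed or session end
```

Note that a plain cursor (without WITH HOLD) would raise an error on the post-COMMIT FETCH, which is exactly the behaviour the original question describes.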
[ { "msg_contents": "Include GUC's unit, if it has one, in out-of-range error messages.\n\nThis should reduce confusion in cases where we've applied a units\nconversion, so that the number being reported (and the quoted range\nlimits) are in some other units than what the user gave in the\nsetting we're rejecting.\n\nSome of the changes here assume that float GUCs can have units,\nwhich isn't true just yet, but will be shortly.\n\nDiscussion: https://postgr.es/m/3811.1552169665@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/28a65fc3607a0f45c39a9418f747459bb4f1592a\n\nModified Files\n--------------\nsrc/backend/utils/misc/guc.c | 113 ++++++++++++++++++++++----------------\nsrc/test/regress/expected/guc.out | 2 +-\n2 files changed, 66 insertions(+), 49 deletions(-)\n\n", "msg_date": "Sun, 10 Mar 2019 19:18:20 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Include GUC's unit, if it has one,\n in out-of-range error message" }, { "msg_contents": "On Sun, Mar 10, 2019 at 07:18:20PM +0000, Tom Lane wrote:\n> Include GUC's unit, if it has one, in out-of-range error messages.\n> \n> This should reduce confusion in cases where we've applied a units\n> conversion, so that the number being reported (and the quoted range\n> limits) are in some other units than what the user gave in the\n> setting we're rejecting.\n> \n> Some of the changes here assume that float GUCs can have units,\n> which isn't true just yet, but will be shortly.\n\nIt does not seem to have cooled down all animals yet, whelk is\nstill complaining:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2019-03-11%2000%3A41%3A13\n\n-ERROR: -Infinity is outside the valid range for parameter \"geqo_selection_bias\" (1.5 .. 
2)\n+ERROR: invalid value for parameter \"geqo_selection_bias\": \"-infinity\"\n\nIt would be nice if we could avoid an alternate output.\n--\nMichael", "msg_date": "Mon, 11 Mar 2019 10:11:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Include GUC's unit, if it has one, in out-of-range error\n message" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Mar 10, 2019 at 07:18:20PM +0000, Tom Lane wrote:\n>> Include GUC's unit, if it has one, in out-of-range error messages.\n\n> It does not seem to have cooled down all animals yet, whelk is\n> still complaining:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=whelk&dt=2019-03-11%2000%3A41%3A13\n\nYeah, also the HPUX animals (gaur isn't booted up at the moment, but\nI bet it'd fail too).\n\nI think what's going on here is what's mentioned in the comments in\nfloat8in_internal:\n\n * C99 requires that strtod() accept NaN, [+-]Infinity, and [+-]Inf,\n * but not all platforms support all of these (and some accept them\n * but set ERANGE anyway...)\n\nSpecifically, these symptoms would be explained if these platforms'\nstrtod() sets ERANGE for infinity.\n\nI can think of three plausible responses. In decreasing order of\namount of work:\n\n1. Decide that we'd better wrap strtod() with something that ensures\nplatform-independent behavior for all our uses of strtod (and strtof?)\nrather than only float8in_internal.\n\n2. Put in a hack in guc.c to make it ignore ERANGE as long as the result\nsatisfies isinf(). This would ensure GUC cases would go through the\nvalue-out-of-range path rather than the syntax-error path. We've got\na bunch of other strtod() calls that are potentially subject to similar\nplatform dependencies though ...\n\n3. Decide this isn't worth avoiding platform dependencies for, and just\ntake out the new regression test case. 
I'd only put in that test on\nthe spur of the moment anyway, so it's hard to argue that it's worth\nmuch.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 10 Mar 2019 23:15:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Portability of strtod (was Re: pgsql: Include GUC's unit,\n if it has one, in out-of-range error message)" }, { "msg_contents": "On Mon, Mar 11, 2019 at 8:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Sun, Mar 10, 2019 at 07:18:20PM +0000, Tom Lane wrote:\n> I think what's going on here is what's mentioned in the comments in\n> float8in_internal:\n>\n> * C99 requires that strtod() accept NaN, [+-]Infinity, and [+-]Inf,\n> * but not all platforms support all of these (and some accept them\n> * but set ERANGE anyway...)\n>\n> Specifically, these symptoms would be explained if these platforms'\n> strtod() sets ERANGE for infinity.\n>\n> I can think of three plausible responses. In decreasing order of\n> amount of work:\n>\n> 1. Decide that we'd better wrap strtod() with something that ensures\n> platform-independent behavior for all our uses of strtod (and strtof?)\n> rather than only float8in_internal.\n>\n\nThis sounds like a good approach, but won't it has the risk of change\nin behaviour?\n\n> 2. Put in a hack in guc.c to make it ignore ERANGE as long as the result\n> satisfies isinf(). This would ensure GUC cases would go through the\n> value-out-of-range path rather than the syntax-error path. We've got\n> a bunch of other strtod() calls that are potentially subject to similar\n> platform dependencies though ...\n>\n\nYeah, this won't completely fix the symptom.\n\n> 3. Decide this isn't worth avoiding platform dependencies for, and just\n> take out the new regression test case. 
I'd only put in that test on\n> the spur of the moment anyway, so it's hard to argue that it's worth\n> much.\n>\n\nFor the time being option-3 sounds like a reasonable approach to fix\nbuildfarm failures and then later if we want to do some bigger surgery\nbased on option-1 or some other option, we can anyways do it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 11 Mar 2019 09:24:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Portability of strtod (was Re: pgsql: Include GUC's unit, if it\n has one, in out-of-range error message)" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, Mar 11, 2019 at 8:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I can think of three plausible responses. In decreasing order of\n>> amount of work:\n>> \n>> 1. Decide that we'd better wrap strtod() with something that ensures\n>> platform-independent behavior for all our uses of strtod (and strtof?)\n>> rather than only float8in_internal.\n\n> This sounds like a good approach, but won't it has the risk of change\n> in behaviour?\n\nWell, the point would be to produce consistent behavior across platforms\nwhere it's not consistent now. So yeah, some platforms would necessarily\nsee a behavior change. But I think your point is that changing this\neverywhere is solving a problem that hasn't been complained about,\nand that's a valid concern.\n\n>> 2. Put in a hack in guc.c to make it ignore ERANGE as long as the result\n>> satisfies isinf(). This would ensure GUC cases would go through the\n>> value-out-of-range path rather than the syntax-error path. We've got\n>> a bunch of other strtod() calls that are potentially subject to similar\n>> platform dependencies though ...\n\n> Yeah, this won't completely fix the symptom.\n\nIt would fix things in an area where we're changing the behavior anyway,\nso maybe that's the right scope to work at. 
After thinking about this\na little, it seems like simply ignoring ERANGE from strtod might get the\nbehavior we want: per POSIX strtod's result should be infinity for overflow\nor zero for underflow, and proceeding with either of those should give\nbetter behavior than treating the case as a syntax error. Anyway\nI think I'll try that and see what the buildfarm says.\n\n>> 3. Decide this isn't worth avoiding platform dependencies for, and just\n>> take out the new regression test case. I'd only put in that test on\n>> the spur of the moment anyway, so it's hard to argue that it's worth\n>> much.\n\n> For the time being option-3 sounds like a reasonable approach to fix\n> buildfarm failures and then later if we want to do some bigger surgery\n> based on option-1 or some other option, we can anyways do it.\n\nYeah, if I can't fix it pretty easily then I'll just remove the test\ncase. But the behavior shown in the expected result is a bit nicer than\nwhat we're actually getting from these buildfarm animals, so ideally\nwe'd fix it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 11 Mar 2019 11:06:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Portability of strtod (was Re: pgsql: Include GUC's unit,\n if it has one, in out-of-range error message)" } ]
[ { "msg_contents": "Hi, hackers.\n\nAttached patches add missing distance operator <->(box, point).\n\nWe already have reverse operator <->(point, box), but it can't be used for kNN\nsearch in GiST and SP-GiST. GiST and SP-GiST now support kNN searches over more\ncomplex polygons and circles, but do not support more simple boxes, which seems\nto be inconsistent.\n\nDescription of the patches:\n1. Add function dist_pb(box, point) and operator <->.\n\n2. Add <-> to GiST box_ops.\n Extracted gist_box_distance_helper() common for gist_box_distance() and\n gist_bbox_distance().\n\n3. Add <-> to SP-GiST.\n Changed only catalog and tests. Box case is already checked in\n spg_box_quad_leaf_consistent():\n out->recheckDistances = distfnoid == F_DIST_POLYP;\n\n\n--\nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 11 Mar 2019 02:49:07 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Add missing operator <->(box, point)" }, { "msg_contents": "\nHello Nikita,\n\n> Attached patches add missing distance operator <->(box, point).\n\nIndeed.\n\n> We already have reverse operator <->(point, box), but it can't be used \n> for kNN search in GiST and SP-GiST. GiST and SP-GiST now support kNN \n> searches over more complex polygons and circles, but do not support more \n> simple boxes, which seems to be inconsistent.\n>\n> Description of the patches:\n> 1. Add function dist_pb(box, point) and operator <->.\n\nAbout this first patch: applies cleanly, compiles, \"make check\" ok.\n\nNo doc changes, but this was expected to work in the first place, \naccording to the documention.\n\nAbout the test, I'd suggest to name the result columns, eg \"pt to box \ndist\" and \"box to pt dist\", otherwise why all is repeated is unclear.\n\nI notice that other distance tests do not test for commutativity. Are they \nalso not implemented, or just not tested? 
If not implemented, I'd suggest \nto add them in the same batch. If not tested, maybe the patch should do as \nothers, or maybe given the trivial implementation there should just be one \ntest per commutted operator for coverage.\n\nISTM that the committer would need to \"bump the catalog revision number\" \nbecause it adds new functions & operators.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 20 Apr 2019 15:41:11 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "[ warning, drive-by comment ahead ]\n\nFabien COELHO <coelho@cri.ensmp.fr> writes:\n> I notice that other distance tests do not test for commutativity. Are they \n> also not implemented, or just not tested? If not implemented, I'd suggest \n> to add them in the same batch.\n\nYeah ... just looking at operators named <->, I see\n\nregression=# select oid, oid::regoperator, oprcom, oprcode from pg_operator where oprname = '<->';\n oid | oid | oprcom | oprcode \n------+----------------------+--------+---------------------------\n 517 | <->(point,point) | 517 | point_distance\n 613 | <->(point,line) | 0 | dist_pl\n 614 | <->(point,lseg) | 0 | dist_ps\n 615 | <->(point,box) | 0 | dist_pb\n 616 | <->(lseg,line) | 0 | dist_sl\n 617 | <->(lseg,box) | 0 | dist_sb\n 618 | <->(point,path) | 0 | dist_ppath\n 706 | <->(box,box) | 706 | box_distance\n 707 | <->(path,path) | 707 | path_distance\n 708 | <->(line,line) | 708 | line_distance\n 709 | <->(lseg,lseg) | 709 | lseg_distance\n 712 | <->(polygon,polygon) | 712 | poly_distance\n 1520 | <->(circle,circle) | 1520 | circle_distance\n 1522 | <->(point,circle) | 3291 | dist_pc\n 3291 | <->(circle,point) | 1522 | dist_cpoint\n 3276 | <->(point,polygon) | 3289 | dist_ppoly\n 3289 | <->(polygon,point) | 3276 | dist_polyp\n 1523 | <->(circle,polygon) | 0 | dist_cpoly\n 1524 | <->(line,box) | 0 | dist_lb\n 5005 | <->(tsquery,tsquery) | 0 | 
pg_catalog.tsquery_phrase\n(20 rows)\n\nIt's not clear to me why to be particularly more excited about\n<->(box, point) than about the other missing cases here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2019 00:01:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "Attached 2nd version of the patches.\n\nOn 20.04.2019 16:41, Fabien COELHO wrote:\n\n> About the test, I'd suggest to name the result columns, eg \"pt to box\n> dist\" and \"box to pt dist\", otherwise why all is repeated is unclear.\n\nFixed.\n\nOn 02.07.2019 7:01, Tom Lane wrote:\n\n> [ warning, drive-by comment ahead ]\n>\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I notice that other distance tests do not test for commutativity. Are they\n>> also not implemented, or just not tested? If not implemented, I'd suggest\n>> to add them in the same batch.\n> Yeah ... just looking at operators named <->, I see\n>\n> regression=# select oid, oid::regoperator, oprcom, oprcode from pg_operator where oprname = '<->';\n> oid | oid | oprcom | oprcode\n> ------+----------------------+--------+---------------------------\n> 517 | <->(point,point) | 517 | point_distance\n> 613 | <->(point,line) | 0 | dist_pl\n> 614 | <->(point,lseg) | 0 | dist_ps\n> 615 | <->(point,box) | 0 | dist_pb\n> 616 | <->(lseg,line) | 0 | dist_sl\n> 617 | <->(lseg,box) | 0 | dist_sb\n> 618 | <->(point,path) | 0 | dist_ppath\n> 706 | <->(box,box) | 706 | box_distance\n> 707 | <->(path,path) | 707 | path_distance\n> 708 | <->(line,line) | 708 | line_distance\n> 709 | <->(lseg,lseg) | 709 | lseg_distance\n> 712 | <->(polygon,polygon) | 712 | poly_distance\n> 1520 | <->(circle,circle) | 1520 | circle_distance\n> 1522 | <->(point,circle) | 3291 | dist_pc\n> 3291 | <->(circle,point) | 1522 | dist_cpoint\n> 3276 | <->(point,polygon) | 3289 | dist_ppoly\n> 3289 | <->(polygon,point) | 3276 | dist_polyp\n> 1523 | 
<->(circle,polygon) | 0 | dist_cpoly\n> 1524 | <->(line,box) | 0 | dist_lb\n> 5005 | <->(tsquery,tsquery) | 0 | pg_catalog.tsquery_phrase\n> (20 rows)\n>\n> It's not clear to me why to be particularly more excited about\n> <->(box, point) than about the other missing cases here.\n>\n> \t\t\tregards, tom lane\n\nThe original goal was to add support of ordering by distance to point to\nall geometric opclasses. As you can see, GiST and SP-GiST box_ops has no\ndistance operator while more complex circle_ops and poly_ops have it:\n\nSELECT\n amname,\n opcname,\n amopopr::regoperator AS dist_opr\nFROM\n pg_opclass LEFT JOIN\n pg_amop ON amopfamily = opcfamily AND amoppurpose = 'o',\n pg_am,\n pg_type\nWHERE\n opcmethod = pg_am.oid AND\n opcintype = pg_type.oid AND\n typcategory = 'G'\nORDER BY 1, 2;\n\n\n amname | opcname | dist_opr\n--------+-------------------+--------------------\n brin | box_inclusion_ops |\n gist | box_ops |\n gist | circle_ops | <->(circle,point)\n gist | point_ops | <->(point,point)\n gist | poly_ops | <->(polygon,point)\n spgist | box_ops |\n spgist | kd_point_ops | <->(point,point)\n spgist | poly_ops | <->(polygon,point)\n spgist | quad_point_ops | <->(point,point)\n(9 rows)\n\nWe could use commuted \"const <-> var\" operators for kNN searches, but the\ncurrent implementation requires the existence of \"var <-> const\" operators, and\norder-by-op clauses are rebuilt using them (see match_clause_to_ordering_op()\nat /src/backend/optimizer/path/indxpath.c).\n\n\n\n--\nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 2 Jul 2019 21:17:52 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "On Tue, Jul 2, 2019 at 9:19 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> We could use commuted \"const <-> var\" operators for kNN searches, but the\n> current implementation 
requires the existence of \"var <-> const\" operators, and\n> order-by-op clauses are rebuilt using them (see match_clause_to_ordering_op()\n> at /src/backend/optimizer/path/indxpath.c).\n\nBut probably it's still worth to just add commutator for every <->\noperator and close this question. Otherwise, it may arise again once\nwe want to add some more kNN support to opclasses or something. On\nthe other hand, are we already going to limit oid consumption?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 2 Jul 2019 21:55:40 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Tue, Jul 2, 2019 at 9:19 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n>> We could use commuted \"const <-> var\" operators for kNN searches, but the\n>> current implementation requires the existence of \"var <-> const\" operators, and\n>> order-by-op clauses are rebuilt using them (see match_clause_to_ordering_op()\n>> at /src/backend/optimizer/path/indxpath.c).\n\n> But probably it's still worth to just add commutator for every <->\n> operator and close this question.\n\nYeah, I was just thinking that it was weird not to have the commutator\noperators, independently of indexing considerations. Seems like a\nusability thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2019 15:13:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "On Mon, Mar 11, 2019 at 2:49 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> 2. 
Add <-> to GiST box_ops.\n> Extracted gist_box_distance_helper() common for gist_box_distance() and\n> gist_bbox_distance().\n\nFor me it doesn't look worth having two distinct functions\ngist_box_distance_helper() and gist_bbox_distance(). What about\nhaving just one and leave responsibility for recheck flag to the\ncaller?\n\n> 3. Add <-> to SP-GiST.\n> Changed only catalog and tests. Box case is already checked in\n> spg_box_quad_leaf_consistent():\n> out->recheckDistances = distfnoid == F_DIST_POLYP;\n\nSo, it seems to be fix of oversight in 2a6368343ff4. But assuming\nfixing this requires catalog changes, we shouldn't backpatch this.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 8 Jul 2019 18:22:21 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "Attached 3rd version of the patches.\n\nOn 02.07.2019 21:55, Alexander Korotkov wrote:\n\n> On Tue, Jul 2, 2019 at 9:19 PM Nikita Glukhov<n.gluhov@postgrespro.ru> wrote:\n>> We could use commuted \"const <-> var\" operators for kNN searches, but the\n>> current implementation requires the existence of \"var <-> const\" operators, and\n>> order-by-op clauses are rebuilt using them (see match_clause_to_ordering_op()\n>> at /src/backend/optimizer/path/indxpath.c).\n> But probably it's still worth to just add commutator for every <->\n> operator and close this question. Otherwise, it may arise again once\n> we want to add some more kNN support to opclasses or something. On\n> the other hand, are we already going to limit oid consumption?\n\nAll missing distance operators were added to the first patch.\n\n\n\nOn 08.07.2019 18:22, Alexander Korotkov wrote:\n\n> On Mon, Mar 11, 2019 at 2:49 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n>> 2. 
Add <-> to GiST box_ops.\n>> Extracted gist_box_distance_helper() common for gist_box_distance() and\n>> gist_bbox_distance().\n> For me it doesn't look worth having two distinct functions\n> gist_box_distance_helper() and gist_bbox_distance(). What about\n> having just one and leave responsibility for recheck flag to the\n> caller?\n\ngist_bbox_distance() was removed.\n\nBut maybe it would be better to replace two identical functions\ngist_circle_distance() and gist_poly_distance() with the single\ngist_bbox_distance()?\n\n\n>> 3. Add <-> to SP-GiST.\n>> Changed only catalog and tests. Box case is already checked in\n>> spg_box_quad_leaf_consistent():\n>> out->recheckDistances = distfnoid == F_DIST_POLYP;\n> So, it seems to be fix of oversight in 2a6368343ff4. But assuming\n> fixing this requires catalog changes, we shouldn't backpatch this.\n>\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 8 Jul 2019 23:38:07 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "On Mon, Jul 8, 2019 at 11:39 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> On 08.07.2019 18:22, Alexander Korotkov wrote:\n> For me it doesn't look worth having two distinct functions\n> gist_box_distance_helper() and gist_bbox_distance(). 
What about\n> having just one and leave responsibility for recheck flag to the\n> caller?\n>\n> gist_bbox_distance() was removed.\n\nOK.\n\n> But maybe it would be better to replace two identical functions\n> gist_circle_distance() and gist_poly_distance() with the single\n> gist_bbox_distance()?\n\nSounds reasonable to me.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 9 Jul 2019 00:03:54 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" }, { "msg_contents": "On Tue, Jul 9, 2019 at 12:03 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Mon, Jul 8, 2019 at 11:39 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> > On 08.07.2019 18:22, Alexander Korotkov wrote:\n> > For me it doesn't look worth having two distinct functions\n> > gist_box_distance_helper() and gist_bbox_distance(). What about\n> > having just one and leave responsibility for recheck flag to the\n> > caller?\n> >\n> > gist_bbox_distance() was removed.\n>\n> OK.\n>\n> > But maybe it would be better to replace two identical functions\n> > gist_circle_distance() and gist_poly_distance() with the single\n> > gist_bbox_distance()?\n>\n> Sounds reasonable to me.\n\nHowever, gist_poly_distance() and gist_circle_distance() have\ndifferent signatures. Having same internal function to be\ncorresponding to more than one catalog function cause troubles in\nsanity checks. So, let's leave it as it is.\n\nRevised patchset is attached. 
It contains commit messages as well as\nminor editorialization.\n\nI'm going to push this if no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 11 Jul 2019 19:13:49 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add missing operator <->(box, point)" } ]
[ { "msg_contents": "Variables used after a longjmp() need to be declared volatile. In\ncase of a pointer, it's the pointer itself that needs to be declared\nvolatile, not the pointed-to value. So we need\n\n PyObject *volatile items;\n\ninstead of\n\n volatile PyObject *items; /* wrong */\n\nAttached patch fixes a couple of cases of that. Most instances were\nalready correct.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 11 Mar 2019 08:23:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix volatile vs. pointer confusion" }, { "msg_contents": "On Mon, Mar 11, 2019 at 08:23:39AM +0100, Peter Eisentraut wrote:\n> Attached patch fixes a couple of cases of that. Most instances were\n> already correct.\n\nIt seems to me that you should look at that:\nhttps://www.postgresql.org/message-id/20190308055911.GG4099@paquier.xyz\nThey treat about the same subject, and a patch has been sent for this\nCF.\n--\nMichael", "msg_date": "Mon, 11 Mar 2019 17:31:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix volatile vs. pointer confusion" }, { "msg_contents": "On 2019-Mar-11, Peter Eisentraut wrote:\n\n> Variables used after a longjmp() need to be declared volatile. In\n> case of a pointer, it's the pointer itself that needs to be declared\n> volatile, not the pointed-to value. So we need\n> \n> PyObject *volatile items;\n> \n> instead of\n> \n> volatile PyObject *items; /* wrong */\n\nLooking at recently committed 2e616dee9e60, we have introduced this:\n\n+ volatile xmlBufferPtr buf = NULL;\n+ volatile xmlNodePtr cur_copy = NULL;\n\nwhere the pointer-ness nature of the object is inside the typedef. I\n*suppose* that this is correct as written. There are a few occurrences\nof this pattern in eg. 
contrib/xml2.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Mon, 11 Mar 2019 08:57:39 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix volatile vs. pointer confusion" }, { "msg_contents": "On 2019-03-11 09:31, Michael Paquier wrote:\n> On Mon, Mar 11, 2019 at 08:23:39AM +0100, Peter Eisentraut wrote:\n>> Attached patch fixes a couple of cases of that. Most instances were\n>> already correct.\n> \n> It seems to me that you should look at that:\n> https://www.postgresql.org/message-id/20190308055911.GG4099@paquier.xyz\n> They treat about the same subject, and a patch has been sent for this\n> CF.\n\nI'm aware of that patch and have been looking at it. But it's not\ndirectly related to this issue.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Mar 2019 10:13:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix volatile vs. pointer confusion" }, { "msg_contents": "On 2019-03-11 12:57, Alvaro Herrera wrote:\n> Looking at recently committed 2e616dee9e60, we have introduced this:\n> \n> + volatile xmlBufferPtr buf = NULL;\n> + volatile xmlNodePtr cur_copy = NULL;\n> \n> where the pointer-ness nature of the object is inside the typedef. I\n> *suppose* that this is correct as written. There are a few occurrences\n> of this pattern in eg. 
contrib/xml2.\n\nI think this is correct, but I don't want to wreck my sanity trying to\nunderstand the syntax-level details of why.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Mar 2019 10:14:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix volatile vs. pointer confusion" }, { "msg_contents": "On 2019-03-11 08:23, Peter Eisentraut wrote:\n> Variables used after a longjmp() need to be declared volatile. In\n> case of a pointer, it's the pointer itself that needs to be declared\n> volatile, not the pointed-to value. So we need\n> \n> PyObject *volatile items;\n> \n> instead of\n> \n> volatile PyObject *items; /* wrong */\n> \n> Attached patch fixes a couple of cases of that. Most instances were\n> already correct.\n\nCommitted.\n\nI'll wait for the build farm to see if there are any new compiler\nwarnings because of this, then backpatch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 14 Mar 2019 08:48:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix volatile vs. pointer confusion" } ]
[ { "msg_contents": "My code is based on commit\n\nzhifan@zhifandeMacBook-Pro ~/g/polardb_clean> git log\ncommit d06fe6ce2c79420fd19ac89ace81b66579f08493\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Nov 6 18:56:26 2018 -0500\n\nwhat I did includes:\n1. ./configure --enable-debug\n2. make world // doesn't see the test_shm_mq on the output\n3. make install-world // doesn't see the test_shm_mq.control under the\ninstall directory.\n4. CREATE EXTENSION test_shm_mq; ==> . could not open extension control\nfile \"/.../share/postgresql/extension/test_shm_mq.control\": No such file or\ndirectory\n\nhow can I get it work? Thanks", "msg_date": "Mon, 11 Mar 2019 15:59:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "I have some troubles to run test_shm_mq;" }, { "msg_contents": "On Mon, Mar 11, 2019 at 8:59 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> 4. CREATE EXTENSION test_shm_mq; ==> . could not open extension control file \"/.../share/postgresql/extension/test_shm_mq.control\": No such file or directory\n>\n> how can I get it work? 
Thanks\n\nHi Andy,\n\nTry this first:\n\ncd src/test/modules/test_shm_mq\nmake install\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Mon, 11 Mar 2019 21:04:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: I have some troubles to run test_shm_mq;" }, { "msg_contents": "Works, thank you Thomas! I have spent more than 2 hours on this. do you\nknow which document I miss for this question?\n\nThanks\n\nOn Mon, Mar 11, 2019 at 4:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Mar 11, 2019 at 8:59 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > 4. CREATE EXTENSION test_shm_mq; ==> . could not open extension\n> control file \"/.../share/postgresql/extension/test_shm_mq.control\": No such\n> file or directory\n> >\n> > how can I get it work? Thanks\n>\n> Hi Andy,\n>\n> Try this first:\n>\n> cd src/test/modules/test_shm_mq\n> make install\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>", "msg_date": "Mon, 11 Mar 2019 16:30:06 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: I have some troubles to run test_shm_mq;" }, { "msg_contents": "and whenever I run a simple query \"SELECT test_shm_mq(1024, 'a');\"\n\nI see the following log\n\n2019-03-11 16:33:17.800 CST [65021] LOG: background worker \"test_shm_mq\"\n(PID 65052) exited with exit code 1\n\n\ndoes it indicates something wrong?\n\nOn Mon, Mar 11, 2019 at 4:30 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Works, thank you Thomas! I have spent more than 2 hours on this. do you\n> know which document I miss for this question?\n>\n> Thanks\n>\n> On Mon, Mar 11, 2019 at 4:05 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n>\n>> On Mon, Mar 11, 2019 at 8:59 PM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>> > 4. CREATE EXTENSION test_shm_mq; ==> . could not open extension\n>> control file \"/.../share/postgresql/extension/test_shm_mq.control\": No such\n>> file or directory\n>> >\n>> > how can I get it work? Thanks\n>>\n>> Hi Andy,\n>>\n>> Try this first:\n>>\n>> cd src/test/modules/test_shm_mq\n>> make install\n>>\n>> --\n>> Thomas Munro\n>> https://enterprisedb.com\n>>", "msg_date": "Mon, 11 Mar 2019 16:34:54 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: I have some troubles to run test_shm_mq;" }, { "msg_contents": "On Mon, Mar 11, 2019 at 9:35 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> and whenever I run a simple query \"SELECT test_shm_mq(1024, 'a');\"\n>\n> I see the following log\n>\n> 2019-03-11 16:33:17.800 CST [65021] LOG: background worker \"test_shm_mq\" (PID 65052) exited with exit code 1\n\nHmm, I don't know actually know why test_shm_mq_main() ends with\nproc_exit(1) instead of 0. It's possible that it was written when the\nmeaning of bgworker exit codes was still being figured out, but I'm\nnot sure...\n\n>> Works, thank you Thomas! I have spent more than 2 hours on this. 
do you know which document I miss for this question?\n\nThere is probably only src/test/modules/README, which explains that\nthese modules are tests and examples and not part of a server\ninstallation.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Mon, 11 Mar 2019 22:01:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: I have some troubles to run test_shm_mq;" }, { "msg_contents": "Thanks for the clarification!\n\nOn Mon, Mar 11, 2019 at 5:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Mar 11, 2019 at 9:35 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > and whenever I run a simple query \"SELECT test_shm_mq(1024, 'a');\"\n> >\n> > I see the following log\n> >\n> > 2019-03-11 16:33:17.800 CST [65021] LOG: background worker\n> \"test_shm_mq\" (PID 65052) exited with exit code 1\n>\n> Hmm, I don't know actually know why test_shm_mq_main() ends with\n> proc_exit(1) instead of 0. It's possible that it was written when the\n> meaning of bgworker exit codes was still being figured out, but I'm\n> not sure...\n>\n> >> Works, thank you Thomas! I have spent more than 2 hours on this. do\n> you know which document I miss for this question?\n>\n> There is probably only src/test/modules/README, which explains that\n> these modules are tests and examples and not part of a server\n> installation.\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n\n", "msg_date": "Mon, 11 Mar 2019 19:43:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: I have some troubles to run test_shm_mq;" } ]
[ { "msg_contents": "Hi,\n\nI've been reading about TOASTing and would like to modify how the slicing works by taking into consideration the type of the varlena field. These changes would support future implementations of type specific optimized TOAST'ing functions. The first step would be to add information to the TOAST so we know if it is sliced or not and by which function it was sliced and TOASTed. This information should not break the current on disk format of TOASTs. I had the idea of putting this information on the varattrib struct va_header, perhaps adding more bit layouts to represent sliced TOASTs. This idea, however, was pointed to me to be a rather naive approach. What would be the best way to do this?\n\nBruno Hass\n", "msg_date": "Mon, 11 Mar 2019 13:27:14 +0000", "msg_from": "Bruno Hass <bruno_hass@LIVE.COM>", "msg_from_op": true, "msg_subject": "Best way to keep track of a sliced TOAST" }, { "msg_contents": "On Mon, Mar 11, 2019 at 9:27 AM Bruno Hass <bruno_hass@live.com> wrote:\n> I've been reading about TOASTing and would like to modify how the slicing works by taking into consideration the type of the varlena field. These changes would support future implementations of type specific optimized TOAST'ing functions. 
The first step would be to add information to the TOAST so we know if it is sliced or not and by which function it was sliced and TOASTed. This information should not break the current on disk format of TOASTs. I had the idea of putting this information on the varattrib struct va_header, perhaps adding more bit layouts to represent sliced TOASTs. This idea, however, was pointed to me to be a rather naive approach. What would be the best way to do this?\n\nWell, you can't really use va_header, because every possible bit\npattern for va_header means something already. The first byte tells\nus what kind of varlena we have:\n\n * Bit layouts for varlena headers on big-endian machines:\n *\n * 00xxxxxx 4-byte length word, aligned, uncompressed data (up to 1G)\n * 01xxxxxx 4-byte length word, aligned, *compressed* data (up to 1G)\n * 10000000 1-byte length word, unaligned, TOAST pointer\n * 1xxxxxxx 1-byte length word, unaligned, uncompressed data (up to 126b)\n *\n * Bit layouts for varlena headers on little-endian machines:\n *\n * xxxxxx00 4-byte length word, aligned, uncompressed data (up to 1G)\n * xxxxxx10 4-byte length word, aligned, *compressed* data (up to 1G)\n * 00000001 1-byte length word, unaligned, TOAST pointer\n * xxxxxxx1 1-byte length word, unaligned, uncompressed data (up to 126b)\n\nAll of the bits other than the ones that tell us what kind of varlena\nwe've got are part of the length word itself; you couldn't use any bit\npattern for some other purpose without breaking on-disk compatibility\nwith existing releases. What you could possibly do is add a new\npossible value of vartag_external, which tells us what \"kind\" of\ntoasted datum we've got. 
Currently, toasted datums stored on disk are\nalways type 18, but there's no reason that I know of why we couldn't\nhave more than one possibility there.\n\nHowever, I think you might want to discuss on this mailing list a bit\nmore about what you are hoping to achieve before you do too much\ndevelopment, at least if you aspire to get something committed. A\nproject like the one you are proposing sounds like something not for\nthe faint of heart, and it's not really clear what benefits you\nanticipate. I think there has been previous discussion of this topic\nat least for jsonb, so you might also want to search the archives for\nthose discussions. I wouldn't go so far as to say that this idea\ncan't work or wouldn't have any value, but it does seem like the kind\nof thing where you could spend a lot of time going down a dead end,\nand discussion on the list might help you avoid some of those dead\nends.\n\nIt seems to me that making this overly pluggable is likely to be a net\nnegative, because there probably aren't really that many different\nways of doing this that are useful, and because having to store more\nidentifying information will make the toasted datum larger. One idea\nis to let the datatype divide the datum up into variable-sized chunks\nand then have the on-disk format store a list of chunk lengths in\nchunk 0 (and following, if there are lots of chunks?) followed by the\nchunks themselves. The data would all go into the TOAST table as it\ndoes today, and the TOASTed data could be read without knowing\nanything about the data type. However, code that knows how the data\nwas chunked at TOAST time could try to speed things up by operating\ndirectly on the compressed data if it can figure out which chunk it\nneeds without fetching everything.\n\nBut that is just an idea, and it might turn out to suck.\n\nNice name, by the way, if an inferior spelling. 
:-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 12:34:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Best way to keep track of a sliced TOAST" }, { "msg_contents": "> It seems to me that making this overly pluggable is likely to be a net\n> negative, because there probably aren't really that many different\n> ways of doing this that are useful, and because having to store more\n> identifying information will make the toasted datum larger. One idea\n> is to let the datatype divide the datum up into variable-sized chunks\n> and then have the on-disk format store a list of chunk lengths in\n> chunk 0 (and following, if there are lots of chunks?) followed by the\n> chunks themselves. The data would all go into the TOAST table as it\n> does today, and the TOASTed data could be read without knowing\n> anything about the data type. However, code that knows how the data\n> was chunked at TOAST time could try to speed things up by operating\n> directly on the compressed data if it can figure out which chunk it\n> needs without fetching everything.\n\nThis idea is what I was hoping to achieve. Would we be able to make optimizations on deTOASTing just by storing the chunk lengths in chunk 0? Also, wouldn't it break existing functions by dedicating a whole chunk (possibly more) to such metadata?\n\n________________________________\nDe: Robert Haas <robertmhaas@gmail.com>\nEnviado: terça-feira, 12 de março de 2019 14:34\nPara: Bruno Hass\nCc: pgsql-hackers\nAssunto: Re: Best way to keep track of a sliced TOAST\n\nOn Mon, Mar 11, 2019 at 9:27 AM Bruno Hass <bruno_hass@live.com> wrote:\n> I've been reading about TOASTing and would like to modify how the slicing works by taking into consideration the type of the varlena field. These changes would support future implementations of type specific optimized TOAST'ing functions. 
The first step would be to add information to the TOAST so we know if it is sliced or not and by which function it was sliced and TOASTed. This information should not break the current on disk format of TOASTs. I had the idea of putting this information on the varattrib struct va_header, perhaps adding more bit layouts to represent sliced TOASTs. This idea, however, was pointed to me to be a rather naive approach. What would be the best way to do this?\n\nWell, you can't really use va_header, because every possible bit\npattern for va_header means something already. The first byte tells\nus what kind of varlena we have:\n\n * Bit layouts for varlena headers on big-endian machines:\n *\n * 00xxxxxx 4-byte length word, aligned, uncompressed data (up to 1G)\n * 01xxxxxx 4-byte length word, aligned, *compressed* data (up to 1G)\n * 10000000 1-byte length word, unaligned, TOAST pointer\n * 1xxxxxxx 1-byte length word, unaligned, uncompressed data (up to 126b)\n *\n * Bit layouts for varlena headers on little-endian machines:\n *\n * xxxxxx00 4-byte length word, aligned, uncompressed data (up to 1G)\n * xxxxxx10 4-byte length word, aligned, *compressed* data (up to 1G)\n * 00000001 1-byte length word, unaligned, TOAST pointer\n * xxxxxxx1 1-byte length word, unaligned, uncompressed data (up to 126b)\n\nAll of the bits other than the ones that tell us what kind of varlena\nwe've got are part of the length word itself; you couldn't use any bit\npattern for some other purpose without breaking on-disk compatibility\nwith existing releases. What you could possibly do is add a new\npossible value of vartag_external, which tells us what \"kind\" of\ntoasted datum we've got. 
Currently, toasted datums stored on disk are\nalways type 18, but there's no reason that I know of why we couldn't\nhave more than one possibility there.\n\nHowever, I think you might want to discuss on this mailing list a bit\nmore about what you are hoping to achieve before you do too much\ndevelopment, at least if you aspire to get something committed. A\nproject like the one you are proposing sounds like something not for\nthe faint of heart, and it's not really clear what benefits you\nanticipate. I think there has been previous discussion of this topic\nat least for jsonb, so you might also want to search the archives for\nthose discussions. I wouldn't go so far as to say that this idea\ncan't work or wouldn't have any value, but it does seem like the kind\nof thing where you could spend a lot of time going down a dead end,\nand discussion on the list might help you avoid some of those dead\nends.\n\nIt seems to me that making this overly pluggable is likely to be a net\nnegative, because there probably aren't really that many different\nways of doing this that are useful, and because having to store more\nidentifying information will make the toasted datum larger. One idea\nis to let the datatype divide the datum up into variable-sized chunks\nand then have the on-disk format store a list of chunk lengths in\nchunk 0 (and following, if there are lots of chunks?) followed by the\nchunks themselves. The data would all go into the TOAST table as it\ndoes today, and the TOASTed data could be read without knowing\nanything about the data type. However, code that knows how the data\nwas chunked at TOAST time could try to speed things up by operating\ndirectly on the compressed data if it can figure out which chunk it\nneeds without fetching everything.\n\nBut that is just an idea, and it might turn out to suck.\n\nNice name, by the way, if an inferior spelling. 
:-)\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 15 Mar 2019 11:37:40 +0000", "msg_from": "Bruno Hass <bruno_hass@live.com>", "msg_from_op": false, "msg_subject": "RE: Best way to keep track of a sliced TOAST" }, { "msg_contents": "On Fri, Mar 15, 2019 at 7:37 AM Bruno Hass <bruno_hass@live.com> wrote:\n> This idea is what I was hoping to achieve. 
Would we be able to make optimizations on deTOASTing just by storing the chunk lengths in chunk 0?\n\nI don't know. I guess we could also NOT store the chunk lengths and\njust say that if you don't know which chunk you want by chunk number,\nyour only other alternative is to read the chunks in order. The\nproblem with that is that it you can no longer index by byte-position\nwithout fetching every chunk prior to that byte position, but maybe\nthat's not important enough to justify the overhead of a list of chunk\nlengths. Or maybe it depends on what you want to do with it.\n\nAgain, stuff like what you are suggesting here has been suggested\nbefore. I think the problem is if someone did the work to invent such\nan infrastructure, that wouldn't actually do anything by itself. We'd\nthen need to find an application of it where it brought us some clear\nadvantage. As I said in my previous email, jsonb seems like a\npromising candidate, but I don't think it's a slam dunk. What would\nthe design look like, exactly? Which operations would get faster, and\ncould we really make them work? The existing format is, I think,\ndesigned with a byte-oriented format in mind, and a chunk-oriented\nformat might have different design constraints. It seems like an idea\nwith potential, but there's a lot of daylight between a directional\nidea with potential and a specific idea accompanied by a high-quality\nimplementation thereof.\n\n> Also, wouldn't it break existing functions by dedicating a whole chunk (possibly more) to such metadata?\n\nAnybody writing such a patch would have to be prepared to fix any such\nbreakage that occurred, at least as regards core code. 
I would guess\nthat this could be done without breaking too much third-party code,\nbut I guess it depends on exactly what the author of this hypothetical\npatch ends up changing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 15 Mar 2019 10:22:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Best way to keep track of a sliced TOAST" }, { "msg_contents": "I would like to optimize the jsonb key access operations. I could not find the discussion you've mentioned, but I am giving some thought to the idea.\n\nInstead of storing lengths, could we dedicate the first chunk of the TOASTed jsonb to store where each key is located? Would it be a good idea?\n\nYou've mentioned that the current jsonb format is byte-oriented. Does that imply that a single jsonb key value might be split between multiple chunks?\n\n\nBruno Hass\n\n________________________________\nDe: Robert Haas <robertmhaas@gmail.com>\nEnviado: sexta-feira, 15 de março de 2019 12:22\nPara: Bruno Hass\nCc: pgsql-hackers\nAssunto: Re: Best way to keep track of a sliced TOAST\n\nOn Fri, Mar 15, 2019 at 7:37 AM Bruno Hass <bruno_hass@live.com> wrote:\n> This idea is what I was hoping to achieve. Would we be able to make optimizations on deTOASTing just by storing the chunk lengths in chunk 0?\n\nI don't know. I guess we could also NOT store the chunk lengths and\njust say that if you don't know which chunk you want by chunk number,\nyour only other alternative is to read the chunks in order. The\nproblem with that is that it you can no longer index by byte-position\nwithout fetching every chunk prior to that byte position, but maybe\nthat's not important enough to justify the overhead of a list of chunk\nlengths. Or maybe it depends on what you want to do with it.\n\nAgain, stuff like what you are suggesting here has been suggested\nbefore. 
I think the problem is if someone did the work to invent such\nan infrastructure, that wouldn't actually do anything by itself. We'd\nthen need to find an application of it where it brought us some clear\nadvantage. As I said in my previous email, jsonb seems like a\npromising candidate, but I don't think it's a slam dunk. What would\nthe design look like, exactly? Which operations would get faster, and\ncould we really make them work? The existing format is, I think,\ndesigned with a byte-oriented format in mind, and a chunk-oriented\nformat might have different design constraints. It seems like an idea\nwith potential, but there's a lot of daylight between a directional\nidea with potential and a specific idea accompanied by a high-quality\nimplementation thereof.\n\n> Also, wouldn't it break existing functions by dedicating a whole chunk (possibly more) to such metadata?\n\nAnybody writing such a patch would have to be prepared to fix any such\nbreakage that occurred, at least as regards core code. I would guess\nthat this could be done without breaking too much third-party code,\nbut I guess it depends on exactly what the author of this hypothetical\npatch ends up changing.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 21 Mar 2019 01:20:16 +0000", "msg_from": "Bruno Hass <bruno_hass@live.com>", "msg_from_op": false, "msg_subject": "RE: Best way to keep track of a sliced TOAST" }, { "msg_contents": "On Wed, Mar 20, 2019 at 9:20 PM Bruno Hass <bruno_hass@live.com> wrote:\n> I would like to optimize the jsonb key access operations. I could not find the discussion you've mentioned, but I am giving some thought to the idea.\n>\n> Instead of storing lengths, could we dedicate the first chunk of the TOASTed jsonb to store where each key is located? Would it be a good idea?\n\nI don't see how that would work. To know which key is which, you'd\nhave to store all the keys. They might not fit in the first chunk.\nThat's the whole reason this has to be TOASTed to begin with.\n\n> You've mentioned that the current jsonb format is byte-oriented. Does that imply that a single jsonb key value might be split between multiple chunks?\n\nYes.\n\nYou're going to need to look at the code yourself to get anywhere\nhere... 
I don't have unlimited time to answer questions about it, and\neven if I did, you're not really going to understand how it works\nwithout studying it yourself.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 21 Mar 2019 14:45:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Best way to keep track of a sliced TOAST" } ]
[ { "msg_contents": "Hi:\n I need some function which requires some message exchange among different\nback-ends (connections).\nspecially I need a shared hash map and a message queue.\n\nMessage queue: it should be many writers, 1 reader. Looks POSIX message\nqueue should be OK, but postgre doesn't use it. is there any equivalent in\nPG?\n\nshared hash map: the number of items can be fixed and the value can be\nfixed as well.\n\nany keywords or explanation will be extremely helpful.\n\nThanks\n", "msg_date": "Mon, 11 Mar 2019 21:36:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Suggestions on message transfer among backends" }, { "msg_contents": "notes on the shared hash map: it needs multi writers and multi readers.\n\nOn Mon, Mar 11, 2019 at 9:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n> I need some function which requires some message exchange among\n> different back-ends (connections).\n> specially I need a shared hash map and a message queue.\n>\n> Message queue: it should be many writers, 1 reader. Looks POSIX\n> message queue should be OK, but postgre doesn't use it. is there any\n> equivalent in PG?\n>\n> shared hash map: the number of items can be fixed and the value can be\n> fixed as well.\n>\n> any keywords or explanation will be extremely helpful.\n>\n> Thanks\n>\n", "msg_date": "Mon, 11 Mar 2019 21:37:32 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "Em seg, 11 de mar de 2019 às 10:36, Andy Fan\n<zhihui.fan1213@gmail.com> escreveu:\n>\n> I need some function which requires some message exchange among different back-ends (connections).\n> specially I need a shared hash map and a message queue.\n>\nIt seems you are looking for LISTEN/NOTIFY. However, if it is part of\na complex solution, a background worker with shared memory access is\nthe way to go.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n", "msg_date": "Mon, 11 Mar 2019 20:53:28 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "On 03/11/19 19:53, Euler Taveira wrote:\n> Em seg, 11 de mar de 2019 às 10:36, Andy Fan\n> <zhihui.fan1213@gmail.com> escreveu:\n>>\n>> I need some function which requires some message exchange among different back-ends (connections).\n>> specially I need a shared hash map and a message queue.\n>>\n> It seems you are looking for LISTEN/NOTIFY. 
However, if it is part of\n\nMy own recollection from looking at LISTEN/NOTIFY is that, yes, it\noffers a mechanism for message passing among sessions, but the message\n/reception/ part is very closely bound to the frontend/backend protocol.\n\nThat is, a message sent in session B can be received in session A, but\nit pretty much goes flying straight out the network connection to\n/the connected client associated with session A/.\n\nIf you're actually working /in the backend/ of session A (say, in a\nserver-side PL), it seemed to be unexpectedly difficult to find a way\nto hook those notifications. But I looked at it only briefly, and\nsome time ago.\n\nRegards,\n-Chap\n\n", "msg_date": "Mon, 11 Mar 2019 20:07:48 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "Hello.\n\nAt Mon, 11 Mar 2019 21:37:32 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in <CAKU4AWqhZn1v5CR85J74AAVXnTijWTzy6y-3pbYxqmpL5ETEig@mail.gmail.com>\n> notes on the shared hash map: it needs multi writers and multi readers.\n> \n> On Mon, Mar 11, 2019 at 9:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> \n> > Hi:\n> > I need some function which requires some message exchange among\n> > different back-ends (connections).\n> > specially I need a shared hash map and a message queue.\n> >\n> > Message queue: it should be many writers, 1 reader. Looks POSIX\n> > message queue should be OK, but postgre doesn't use it. is there any\n> > equivalent in PG?\n> >\n> > shared hash map: the number of items can be fixed and the value can be\n> > fixed as well.\n> >\n> > any keywords or explanation will be extremely helpful.\n\nI suppose that you are writing an extension or tweaking the core\ncode in C source. 
dshash (dynamic shared hash) would work for you\nas shared hash map, and is shm_mq usable as the message queue?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 Mar 2019 14:01:26 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "On 11/03/2019 18:36, Andy Fan wrote:\n> Hi:\n>   I need some function which requires some message exchange among \n> different back-ends (connections).\n> specially I need a shared hash map and a message queue.\n> \n> Message queue:  it should be many writers,  1 reader.   Looks POSIX \n> message queue should be OK, but postgre doesn't use it.  is there any \n> equivalent in PG?\n> \n> shared hash map:  the number of items can be fixed and the value can be \n> fixed as well.\n> \n> any keywords or explanation will be extremely helpful.\nYou may use shm_mq (shared memory queue) and hash tables (dynahash.c) in \nshared memory (see ShmemInitHash() + shmem_startup_hook)\n> \n> Thanks\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n", "msg_date": "Tue, 12 Mar 2019 10:59:41 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "On Tue, Mar 12, 2019 at 1:59 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 11/03/2019 18:36, Andy Fan wrote:\n> > Hi:\n> > I need some function which requires some message exchange among\n> > different back-ends (connections).\n> > specially I need a shared hash map and a message queue.\n> >\n> > Message queue: it should be many writers, 1 reader. Looks POSIX\n> > message queue should be OK, but postgre doesn't use it. 
is there any\n> > equivalent in PG?\n> >\n> > shared hash map: the number of items can be fixed and the value can be\n> > fixed as well.\n> >\n> > any keywords or explanation will be extremely helpful.\n> You may use shm_mq (shared memory queue) and hash tables (dynahash.c) in\n> shared memory (see ShmemInitHash() + shmem_startup_hook)\n> >\n> > Thanks\n>\n> --\n> Andrey Lepikhov\n> Postgres Professional\n> https://postgrespro.com\n> The Russian Postgres Company\n>\n\nThanks Andrey and all people replied this! dynahash/ShmemInitHash is the\none I'm using and it is ok for my purposes.\nI planned to use posix/system v message queue, since they are able to\nsupport multi readers/multi writer.\nI just don't know why shm_mq is designed to single-reader & single-writer.", "msg_date": "Tue, 12 Mar 2019 14:36:39 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suggestions on message transfer among backends" },
{ "msg_contents": "On Tue, Mar 12, 2019 at 2:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> On Tue, Mar 12, 2019 at 1:59 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\n> wrote:\n>\n>> On 11/03/2019 18:36, Andy Fan wrote:\n>> > Hi:\n>> > I need some function which requires some message exchange among\n>> > different back-ends (connections).\n>> > specially I need a shared hash map and a message queue.\n>> >\n>> > Message queue: it should be many writers, 1 reader. Looks POSIX\n>> > message queue should be OK, but postgre doesn't use it. 
and I just hack for fun, so posix mq\ncan be a solution for me.\n\n\n> I just don't know why shm_mq is designed to single-reader & single-writer.\n>\n\nProbably this will be simpler and enough for PostgreSQL.\n\nThat is just the thoughts per my current knowledge.\n\nOn Tue, Mar 12, 2019 at 2:36 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Tue, Mar 12, 2019 at 1:59 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:On 11/03/2019 18:36, Andy Fan wrote:\n> Hi:\n>    I need some function which requires some message exchange among \n> different back-ends (connections).\n> specially I need a shared hash map and a message queue.\n> \n> Message queue:  it should be many writers,  1 reader.   Looks POSIX \n> message queue should be OK, but postgre doesn't use it.  is there any \n> equivalent in PG?\n> \n> shared hash map:  the number of items can be fixed and the value can be \n> fixed as well.\n> \n> any keywords or explanation will be extremely helpful.\nYou may use shm_mq (shared memory queue) and hash tables (dynahash.c) in \nshared memory (see ShmemInitHash() + shmem_startup_hook)\n> \n> Thanks\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres CompanyThanks Andrey and all people replied this!   dynahash/ShmemInitHash is the one I'm using and it is ok for my purposes. I planned to use posix/system v  message queue,   since they are able to support multi readers/multi writer.    Posix/System v  message queue is not a portable way for postgres  since they are not widely support on all the os, like Darwin.  I think that may be a reason why pg didn't use it.  and I just hack for fun,  so posix mq  can be a solution for me.   I just don't know why shm_mq is designed to single-reader & single-writer. Probably  this will be simpler and enough for PostgreSQL. 
That is just the thoughts per my current knowledge.", "msg_date": "Tue, 12 Mar 2019 16:09:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> I just don't know why shm_mq is designed to single-reader & single-writer. \n\nshm_mq was implemented as a part of infrastructure for parallel query\nprocessing. The leader backend launches multiple parallel workers and sets up\na few queues to communicate with each. One queue is used to send request\n(query plan) to the worker, one queue is there to receive data from it, and I\nthink there's one more queue to receive error messages.\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n", "msg_date": "Tue, 12 Mar 2019 09:36:30 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" }, { "msg_contents": "On Tue, Mar 12, 2019 at 4:34 AM Antonin Houska <ah@cybertec.at> wrote:\n> Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > I just don't know why shm_mq is designed to single-reader & single-writer.\n>\n> shm_mq was implemented as a part of infrastructure for parallel query\n> processing. The leader backend launches multiple parallel workers and sets up\n> a few queues to communicate with each. One queue is used to send request\n> (query plan) to the worker, one queue is there to receive data from it, and I\n> think there's one more queue to receive error messages.\n\nNo, the queues aren't used to send anything to the worker. We know\nthe size of the query plan before we create the DSM, so we can just\nallocate enough space to store the whole thing. We don't know the\nsize of the result set, though, so we use a queue to retrieve that\nfrom the worker. 
And we also don't know the size of any warnings or\nerrors or other such things that the worker might generate, so we use\na queue to retrieve that stuff, too. It turned out to be better to\nhave a separate queue for each of those things rather than a single\nqueue for both.\n\nI admit that I could have designed a system that supported multiple\nreaders and writers and that it would have been useful, but it also\nwould have been more work, and there's something to be said for\nfinishing the feature before your boss fires you. Also, such a system\nwould probably have more overhead; shm_mq can do a lot of things\nwithout locks that would need locks if you had multiple readers and\nwriters.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 16:19:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" },
{ "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Mar 12, 2019 at 4:34 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > > I just don't know why shm_mq is designed to single-reader & single-writer.\n> >\n> > shm_mq was implemented as a part of infrastructure for parallel query\n> > processing. The leader backend launches multiple parallel workers and sets up\n> > a few queues to communicate with each. One queue is used to send request\n> > (query plan) to the worker, one queue is there to receive data from it, and I\n> > think there's one more queue to receive error messages.\n> \n> No, the queues aren't used to send anything to the worker. We know\n> the size of the query plan before we create the DSM, so we can just\n> allocate enough space to store the whole thing.\n\nok, I forgot that. 
(Last time I saw this part was when reading the parallel\nsequential scan patch a few years ago.)\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n", "msg_date": "Thu, 14 Mar 2019 14:37:04 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Suggestions on message transfer among backends" } ]
[ { "msg_contents": "Hi Hackers,\n\nIn master branch, unaccent extension is having issue with the below python\nscript.This issue is only in windows 10 and python 3.\n\npython generate_unaccent_rules.py --unicode-data-file UnicodeData.txt\n--latin-ascii-file Latin-ASCII.xml > unaccent.rules\n\nI am getting the following error\n\nUnicodeEncodeError: 'charmap' codec can't encode character '\\u0100' in\nposition 0: character maps to <undefined>\n\nI went through the python script and found that the stdout encoding is set\nto utf-8 only if python version is <=2. The same needs to be done for\npython 3\n-- \nCheers\nRam 4.0\n\nHi Hackers,In master branch,  unaccent extension is having issue with the below python script.This issue is only in windows 10 and python 3. python generate_unaccent_rules.py --unicode-data-file UnicodeData.txt --latin-ascii-file Latin-ASCII.xml > unaccent.rules  I am getting the following errorUnicodeEncodeError: 'charmap' codec can't encode character '\\u0100' inposition 0: character maps to <undefined>   I went through the python script and found that the stdout encoding is setto utf-8 only if python version is <=2. The same needs to be done for python 3 -- CheersRam 4.0", "msg_date": "Mon, 11 Mar 2019 21:54:45 +0530", "msg_from": "Ramanarayana <raam.soft@gmail.com>", "msg_from_op": true, "msg_subject": "Unaccent extension python script Issue in Windows" }, { "msg_contents": "On Mon, Mar 11, 2019 at 09:54:45PM +0530, Ramanarayana wrote:\n> I went through the python script and found that the stdout encoding is set\n> to utf-8 only if python version is <=2. The same needs to be done for\n> python 3\n\nIf you send a patch for that, how would it look like? Could you also\nregister any patch produced to the future commit fest? 
It is here:\nhttps://commitfest.postgresql.org/23/\n--\nMichael", "msg_date": "Tue, 12 Mar 2019 11:28:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" },
{ "msg_contents": "On Mon, 11 Mar 2019 at 22:29, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Mar 11, 2019 at 09:54:45PM +0530, Ramanarayana wrote:\n> > I went through the python script and found that the stdout encoding is\n> set\n> > to utf-8 only if python version is <=2. The same needs to be done for\n> > python 3\n>\n> If you send a patch for that, how would it look like? Could you also\n> register any patch produced to the future commit fest? It is here:\n> https://commitfest.postgresql.org/23/\n\n\nWe had integrated that into a patch on Bug#15548\n(generate_unaccent_rules-remove-combining-diacritical-accents-04.patch),\nbut there had been issues as overlapping patches had already been\ncommitted. I can try to abstract out these changes in the few days.\nHugh", "msg_date": "Tue, 12 Mar 2019 08:30:51 -0400", "msg_from": "Hugh Ranalli <hugh@whtc.ca>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" },
{ "msg_contents": "Hi Hugh,\n\nI have abstracted out the windows compatibility changes from your patch to\na new patch and tested it. Added the patch to\nhttps://commitfest.postgresql.org/23/\n\nPlease feel free to change it if it requires any changes.\n\nCheers\nRam 4.0", "msg_date": "Sun, 17 Mar 2019 08:28:17 +0530", "msg_from": "Ramanarayana <raam.soft@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unaccent extension python script Issue in Windows" },
{ "msg_contents": "Hi Ram,\nThanks for doing this; I've been overestimating my ability to get to things\nover the last couple of weeks.\n\nI've looked at the patch and have made one minor change. I had moved all\nthe imports up to the top, to keep them in one place (and I think some had\noriginally been used only by the Python 2 code. You added them there, but\ndidn't remove them from their original positions. So I've incorporated that\ninto your patch, attached as v2. I've tested this under Python 2 and 3 on\nLinux, not Windows.\n\nEverything else looks correct. I apologise for not having replied to your\nquestion in the original bug report. I had intended to, but as I said,\nthere's been an increase in the things I need to juggle at the moment.\n\nBest wishes,\nHugh\n\n\n\nOn Sat, 16 Mar 2019 at 22:58, Ramanarayana <raam.soft@gmail.com> wrote:\n\n> Hi Hugh,\n>\n> I have abstracted out the windows compatibility changes from your patch to\n> a new patch and tested it. 
Added the patch to\n> https://commitfest.postgresql.org/23/\n>\n> Please feel free to change it if it requires any changes.\n>\n> Cheers\n> Ram 4.0\n>", "msg_date": "Sun, 17 Mar 2019 20:23:05 -0400", "msg_from": "Hugh Ranalli <hugh@whtc.ca>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" }, { "msg_contents": "Hello.\n\nAt Sun, 17 Mar 2019 20:23:05 -0400, Hugh Ranalli <hugh@whtc.ca> wrote in <CAAhbUMNoBLu7jAbyK5MK0LXEyt03PzNQt_Apkg0z9bsAjcLV4g@mail.gmail.com>\n> Hi Ram,\n> Thanks for doing this; I've been overestimating my ability to get to things\n> over the last couple of weeks.\n> \n> I've looked at the patch and have made one minor change. I had moved all\n> the imports up to the top, to keep them in one place (and I think some had\n> originally been used only by the Python 2 code. You added them there, but\n> didn't remove them from their original positions. So I've incorporated that\n> into your patch, attached as v2. I've tested this under Python 2 and 3 on\n> Linux, not Windows.\n\nThough I'm not sure the necessity of running the script on\nWindows, the problem is not specific for Windows, but general one\nthat haven't accidentially found on non-Windows environment.\n\nOn CentOS7:\n> export LANG=\"ja_JP.EUCJP\"\n> python <..snipped..>\n..\n> UnicodeEncodeError: 'euc_jp' codec can't encode character '\\xab' in position 0: illegal multibyte sequence\n\nSo this is not an issue with Windows but with python3.\n\nThe script generates identical files with the both versions of\npython with the pach on Linux and Windows 7. Python3 on Windows\nemits CRLF as a new line but it doesn't seem to harm. (I didn't\nconfirmed that due to extreme slowness of build from uncertain\nreasons now..)\n\nThis patch contains irrelevant changes. The minimal required\nchange would be the attached. 
If you want refacotor the\nUnicodeData reader or rearrange import sutff, it should be\nseparate patches.\n\nIt would be better use IOBase for Python3 especially for stdout\nreplacement but I didin't since it *is* working.\n\n> Everything else looks correct. I apologise for not having replied to your\n> question in the original bug report. I had intended to, but as I said,\n> there's been an increase in the things I need to juggle at the moment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 18 Mar 2019 14:13:34 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" }, { "msg_contents": "Hello.\n\nAt Mon, 18 Mar 2019 14:13:34 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190318.141334.186469242.horiguchi.kyotaro@lab.ntt.co.jp>\n> Hello.\n> \n> At Sun, 17 Mar 2019 20:23:05 -0400, Hugh Ranalli <hugh@whtc.ca> wrote in <CAAhbUMNoBLu7jAbyK5MK0LXEyt03PzNQt_Apkg0z9bsAjcLV4g@mail.gmail.com>\n> > Hi Ram,\n> > Thanks for doing this; I've been overestimating my ability to get to things\n> > over the last couple of weeks.\n> > \n> > I've looked at the patch and have made one minor change. I had moved all\n> > the imports up to the top, to keep them in one place (and I think some had\n> > originally been used only by the Python 2 code. You added them there, but\n> > didn't remove them from their original positions. So I've incorporated that\n> > into your patch, attached as v2. 
I've tested this under Python 2 and 3 on\n> > Linux, not Windows.\n> \n> Though I'm not sure the necessity of running the script on\n> Windows, the problem is not specific for Windows, but general one\n> that haven't accidentially found on non-Windows environment.\n> \n> On CentOS7:\n> > export LANG=\"ja_JP.EUCJP\"\n> > python <..snipped..>\n> ..\n> > UnicodeEncodeError: 'euc_jp' codec can't encode character '\\xab' in position 0: illegal multibyte sequence\n> \n> So this is not an issue with Windows but with python3.\n> \n> The script generates identical files with the both versions of\n> python with the pach on Linux and Windows 7. Python3 on Windows\n> emits CRLF as a new line but it doesn't seem to harm. (I didn't\n> confirmed that due to extreme slowness of build from uncertain\n> reasons now..)\n\nI confirmed that CRLF actually doesn't harm and unaccent works\ncorrectly. (t_isspace() excludes them as white space).\n\n> This patch contains irrelevant changes. The minimal required\n> change would be the attached. If you want refacotor the\n> UnicodeData reader or rearrange import sutff, it should be\n> separate patches.\n> \n> It would be better use IOBase for Python3 especially for stdout\n> replacement but I didin't since it *is* working.\n> \n> > Everything else looks correct. I apologise for not having replied to your\n> > question in the original bug report. I had intended to, but as I said,\n> > there's been an increase in the things I need to juggle at the moment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 18 Mar 2019 15:27:55 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" }, { "msg_contents": "On Mon, 18 Mar 2019 at 01:14, Kyotaro HORIGUCHI <\nhoriguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> This patch contains irrelevant changes. 
The minimal required\n> change would be the attached. If you want refacotor the\n> UnicodeData reader or rearrange import sutff, it should be\n> separate patches.\n>\nI'm not sure I'd classify the second change as \"irrelevant.\" Using \"with\"\nis the standard and recommended practice for working with files in Python.\nAt the moment the script does nothing to close the open data file, whether\nthrough regular processing or in the case of an exception. I would argue\nthat's a bug and should be fixed. Creating a separate patch for that seems\nto be adding work for no reason.\n\nHugh", "msg_date": "Mon, 18 Mar 2019 09:06:09 -0400", "msg_from": "Hugh Ranalli <hugh@whtc.ca>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" },
{ "msg_contents": "On Mon, Mar 18, 2019 at 09:06:09AM -0400, Hugh Ranalli wrote:\n> I'm not sure I'd classify the second change as \"irrelevant.\" Using \"with\"\n> is the standard and recommended practice for working with files in Python.\n\nI honestly don't know about any standard way to do anythings in\nPython, but it is true that using \"with\" saves from a forgotten\nclose() call.\n\n> At the moment the script does nothing to close the open data file, whether\n> through regular processing or in the case of an exception. I would argue\n> that's a bug and should be fixed. Creating a separate patch for that seems\n> to be adding work for no reason.\n\nThis script runs in a short-lived context, so it is really not a big\ndeal to not close the opened UnicodeData.txt. I agree that it is bad\npractice though, so I think it's fine to fix the problem if there is\nanother patch touching the same area of the code while on it.\n--\nMichael", "msg_date": "Tue, 19 Mar 2019 15:25:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" },
{ "msg_contents": "Thanks! I have pushed this patch. 
I didn't test on Windows, but I did\nverify that it works with python2 and 3 on my Linux machine.\n\nCLDR has made release 35 already, upon download of which the script\ngenerates a few more lines in the unaccent.rules file, as attached.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 10 Sep 2019 18:19:55 -0300", "msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Unaccent extension python script Issue in Windows" } ]
[ { "msg_contents": "In the thread about vacuum_cost_delay vs vacuum_cost_limit,\nI wondered whether nanosleep(2) would provide any better\ntiming resolution than select(2). Some experimentation\nsuggests that it doesn't, but nonetheless I see a good reason\nwhy we should consider making pg_usleep use nanosleep() if\npossible: it's got much better-defined semantics for what\nhappens if a signal arrives. The comments for pg_usleep\nlay out the problems with relying on select():\n\n * CAUTION: the behavior when a signal arrives during the sleep is platform\n * dependent. On most Unix-ish platforms, a signal does not terminate the\n * sleep; but on some, it will (the Windows implementation also allows signals\n * to terminate pg_usleep). And there are platforms where not only does a\n * signal not terminate the sleep, but it actually resets the timeout counter\n * so that the sleep effectively starts over! It is therefore rather hazardous\n * to use this for long sleeps; a continuing stream of signal events could\n * prevent the sleep from ever terminating. Better practice for long sleeps\n * is to use WaitLatch() with a timeout.\n\nWhile the WaitLatch alternative avoids the problem, I doubt\nwe're ever going to remove pg_usleep entirely, so it'd be\ngood if it had fewer sharp edges. nanosleep() has the\nsame behavior as Windows, ie, the sleep is guaranteed to be\nterminated by a signal. So if we used nanosleep() where available\nwe'd have that behavior on just about every interesting platform.\n\nnanosleep() does exist pretty far back: it's in SUSv2, though\nthat version of the standard allows it to fail with ENOSYS.\nNot sure if we'd need to teach configure to check for that\npossibility.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 11 Mar 2019 20:03:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Use nanosleep(2) in pg_usleep, if available?" 
}, { "msg_contents": "On Mon, Mar 11, 2019 at 8:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While the WaitLatch alternative avoids the problem, I doubt\n> we're ever going to remove pg_usleep entirely, so it'd be\n> good if it had fewer sharp edges. nanosleep() has the\n> same behavior as Windows, ie, the sleep is guaranteed to be\n> terminated by a signal. So if we used nanosleep() where available\n> we'd have that behavior on just about every interesting platform.\n\nIs there any feasible way to go the other way, and make pg_usleep()\nactually always sleep for the requested time, rather than terminating\nearly?\n\n(Probably not, but I'm just asking.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 12:59:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Mar 11, 2019 at 8:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While the WaitLatch alternative avoids the problem, I doubt\n>> we're ever going to remove pg_usleep entirely, so it'd be\n>> good if it had fewer sharp edges. nanosleep() has the\n>> same behavior as Windows, ie, the sleep is guaranteed to be\n>> terminated by a signal. So if we used nanosleep() where available\n>> we'd have that behavior on just about every interesting platform.\n\n> Is there any feasible way to go the other way, and make pg_usleep()\n> actually always sleep for the requested time, rather than terminating\n> early?\n\n> (Probably not, but I'm just asking.)\n\nYes, nanosleep would support that; it returns the remaining time after\nan interrupt, so we could just loop till done. 
The select-based\nimplementation would have a hard time supporting it, though, and\nI have no idea about Windows.\n\nNow, this proposal is predicated on the idea that we won't need\nto care too much about the select case because few if any\nplatforms would end up using it. So really the question boils\ndown to whether we can provide the continue-to-wait behavior on\nWindows. Anyone?\n\n(I'm not sure what I think about which behavior is really more\ndesirable. We can debate that if there's actually a plausible\nchoice to be made, which seems to depend on Windows.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 13:13:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" }, { "msg_contents": "On Tue, Mar 12, 2019 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (I'm not sure what I think about which behavior is really more\n> desirable. We can debate that if there's actually a plausible\n> choice to be made, which seems to depend on Windows.)\n\nYeah, that's a fair question. My motivation for asking was that I\nsometimes try to insert sleeps when debugging things, and they don't\nactually sleep, because they get interrupted. That's not dispositive,\nthough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Tue, 12 Mar 2019 13:17:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" }, { "msg_contents": "Hi,\n\nOn March 12, 2019 10:17:19 AM PDT, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Tue, Mar 12, 2019 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (I'm not sure what I think about which behavior is really more\n>> desirable. We can debate that if there's actually a plausible\n>> choice to be made, which seems to depend on Windows.)\n>\n>Yeah, that's a fair question. 
My motivation for asking was that I\n>sometimes try to insert sleeps when debugging things, and they don't\n>actually sleep, because they get interrupted. That's not dispositive,\n>though.\n\nHad that happen annoyingly often too. OTOH, we have some sleep loops where it's probably mildly helpful to react faster when an interrupt happens. But those probably should be rewritten to use latches.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n", "msg_date": "Tue, 12 Mar 2019 10:20:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 12, 2019 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (I'm not sure what I think about which behavior is really more\n>> desirable. We can debate that if there's actually a plausible\n>> choice to be made, which seems to depend on Windows.)\n\n> Yeah, that's a fair question. My motivation for asking was that I\n> sometimes try to insert sleeps when debugging things, and they don't\n> actually sleep, because they get interrupted. That's not dispositive,\n> though.\n\nThere never has been a guarantee that we won't sleep for *more*\nthan the requested time; the kernel might not give us back the\nCPU right away. But re-sleeping would at least ensure that\nwe won't sleep for *less* than the requested time. So my opinion\nafter five minutes' thought is that re-sleeping is better, because\nit gives you at least some kind of promise related to the function's\nnominal semantics.\n\nIt still depends on whether we can make Windows do it, though.\nI suppose a brute-force way would be like what WaitLatch does:\ndo our own timekeeping using instr_time.h. 
(That'd work for\nselect as well, I guess.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 13:36:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" }, { "msg_contents": "On Tue, Mar 12, 2019 at 6:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Mar 11, 2019 at 8:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> While the WaitLatch alternative avoids the problem, I doubt\n> >> we're ever going to remove pg_usleep entirely, so it'd be\n> >> good if it had fewer sharp edges. nanosleep() has the\n> >> same behavior as Windows, ie, the sleep is guaranteed to be\n> >> terminated by a signal. So if we used nanosleep() where available\n> >> we'd have that behavior on just about every interesting platform.\n>\n> > Is there any feasible way to go the other way, and make pg_usleep()\n> > actually always sleep for the requested time, rather than terminating\n> > early?\n>\n> > (Probably not, but I'm just asking.)\n>\n> Yes, nanosleep would support that; it returns the remaining time after\n> an interrupt, so we could just loop till done. The select-based\n> implementation would have a hard time supporting it, though, and\n> I have no idea about Windows.\n>\n> Now, this proposal is predicated on the idea that we won't need\n> to care too much about the select case because few if any\n> platforms would end up using it. So really the question boils\n> down to whether we can provide the continue-to-wait behavior on\n> Windows. Anyone?\n>\n\npg_usleep() on Windows uses WaitForSingleObject with a timeout, which\ncannot do that.\n\nIt seems we could fairly easily reimplement that on top of a waitable\ntimer (CreateWaitableTimer/SetWaitableTimer) which should handle this\nsituation.
As long as it's only in pg_usleep() we need to change things,\nthe change should be trivial.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 13 Mar 2019 13:02:04 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Use nanosleep(2) in pg_usleep, if available?" } ]
[ { "msg_contents": "Hi all,\r\n\r\nHere is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\r\n\r\nPage: https://github.com/postgres/postgres/blob/master/doc/src/sgml/fdwhandler.sgml\r\n\r\n---------------------------------------------------------------------------\r\n <para>\r\n This function should store the tuple into the provided, or clear it if if★ The characters [if] is repeated.\r\n the row lock couldn't be obtained. The row lock type to acquire is\r\n defined by <literal>erm-&gt;markType</literal>, which is the value\r\n previously returned by <function>GetForeignRowMarkType</function>.\r\n (<literal>ROW_MARK_REFERENCE</literal> means to just re-fetch the tuple\r\n without acquiring any lock, and <literal>ROW_MARK_COPY</literal> will\r\n never be seen by this routine.)\r\n </para>\r\n----------------------------------------------------------------------------\r\n\r\nBest Regards!", "msg_date": "Tue, 12 Mar 2019 01:37:04 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[PATCH] remove repetitive characters in fdwhandler.sgml" }, { "msg_contents": "On Tue, Mar 12, 2019 at 01:37:04AM +0000, Zhang, Jie wrote:\n> Here is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\n\n <para>\n- This function should store the tuple into the provided, or clear it if if\n+ This function should store the tuple into the provided, or clear it if\n the row lock couldn't be obtained. The row lock type to acquire is\n\nThe typo is clear, however the formulation of the full sentence is\nconfusing. 
This function should store the tuple into the provided\nslot, no?\n--\nMichael", "msg_date": "Wed, 13 Mar 2019 14:02:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove repetitive characters in fdwhandler.sgml" }, { "msg_contents": "> This function should store the tuple into the provided slot, no?\r\n\r\nYes, this modification is easier to understand.\r\n\r\n-----Original Message-----\r\nFrom: Michael Paquier [mailto:michael@paquier.xyz] \r\nSent: Wednesday, March 13, 2019 1:02 PM\r\nTo: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: [PATCH] remove repetitive characters in fdwhandler.sgml\r\n\r\nOn Tue, Mar 12, 2019 at 01:37:04AM +0000, Zhang, Jie wrote:\r\n> Here is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\r\n\r\n <para>\r\n- This function should store the tuple into the provided, or clear it if if\r\n+ This function should store the tuple into the provided, or clear \r\n+ it if\r\n the row lock couldn't be obtained. The row lock type to acquire is\r\n\r\nThe typo is clear, however the formulation of the full sentence is confusing. This function should store the tuple into the provided slot, no?\r\n--\r\nMichael\r\n\n\n", "msg_date": "Wed, 13 Mar 2019 05:43:04 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [PATCH] remove repetitive characters in fdwhandler.sgml" }, { "msg_contents": "(2019/03/13 14:02), Michael Paquier wrote:\n> On Tue, Mar 12, 2019 at 01:37:04AM +0000, Zhang, Jie wrote:\n>> Here is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\n>\n> <para>\n> - This function should store the tuple into the provided, or clear it if if\n> + This function should store the tuple into the provided, or clear it if\n> the row lock couldn't be obtained. 
The row lock type to acquire is\n>\n> The typo is clear, however the formulation of the full sentence is\n> confusing. This function should store the tuple into the provided\n> slot, no?\n\nYeah, I think so too.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 13 Mar 2019 14:55:59 +0900", "msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove repetitive characters in fdwhandler.sgml" }, { "msg_contents": "On 2019-03-13 14:55:59 +0900, Etsuro Fujita wrote:\n> (2019/03/13 14:02), Michael Paquier wrote:\n> > On Tue, Mar 12, 2019 at 01:37:04AM +0000, Zhang, Jie wrote:\n> > > Here is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\n> > \n> > <para>\n> > - This function should store the tuple into the provided, or clear it if if\n> > + This function should store the tuple into the provided, or clear it if\n> > the row lock couldn't be obtained. The row lock type to acquire is\n> > \n> > The typo is clear, however the formulation of the full sentence is\n> > confusing. This function should store the tuple into the provided\n> > slot, no?\n> \n> Yeah, I think so too.\n\nSorry for that, I'll fix the sentence tomorrow. 
Andres vs Grammar: 3 :\n3305.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Tue, 12 Mar 2019 23:19:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove repetitive characters in fdwhandler.sgml" }, { "msg_contents": "On 2019-03-12 23:19:23 -0700, Andres Freund wrote:\n> On 2019-03-13 14:55:59 +0900, Etsuro Fujita wrote:\n> > (2019/03/13 14:02), Michael Paquier wrote:\n> > > On Tue, Mar 12, 2019 at 01:37:04AM +0000, Zhang, Jie wrote:\n> > > > Here is a tiny patch removing repetitive characters [if] in fdwhandler.sgml.\n> > > \n> > > <para>\n> > > - This function should store the tuple into the provided, or clear it if if\n> > > + This function should store the tuple into the provided, or clear it if\n> > > the row lock couldn't be obtained. The row lock type to acquire is\n> > > \n> > > The typo is clear, however the formulation of the full sentence is\n> > > confusing. This function should store the tuple into the provided\n> > > slot, no?\n> > \n> > Yeah, I think so too.\n> \n> Sorry for that, I'll fix the sentence tomorrow. Andres vs Grammar: 3 :\n> 3305.\n\nAnd pushed. Thanks for the report!\n\n", "msg_date": "Mon, 18 Mar 2019 13:41:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove repetitive characters in fdwhandler.sgml" } ]
[ { "msg_contents": "Sir/Madam\n\nI am Pavan_Gudivada.I have good knowledge in HTML, CSS,JAVASCRIPT,PYTHON\nand SQL.After i know about PostgreSQL and its contributions through open\nsource .i am also intersted to take part in Read/write transaction-level\nrouting in Odyssey (2019).\nLooking forward for quick response from your side and i want to join your\nteam and contribute to development of PostgreSQL.\n\nThanking you,\n\nYour's Sincerely\n\npavan\n\nSir/MadamI am Pavan_Gudivada.I have good knowledge in  HTML, CSS,JAVASCRIPT,PYTHON and SQL.After i know about  PostgreSQL and its contributions through open source .i am also intersted to take part in Read/write transaction-level routing in Odyssey (2019).Looking forward for quick response from your side and i want to join your team and contribute to development of PostgreSQL.Thanking you,Your's Sincerelypavan", "msg_date": "Tue, 12 Mar 2019 12:31:00 +0530", "msg_from": "pavan gudivada <pavangudivada163202@gmail.com>", "msg_from_op": true, "msg_subject": "GSOC Application" }, { "msg_contents": "Hi, Pavan!\n\n> 12 марта 2019 г., в 12:01, pavan gudivada <pavangudivada163202@gmail.com> написал(а):\n> \n> I am Pavan_Gudivada.I have good knowledge in HTML, CSS,JAVASCRIPT,PYTHON and SQL.After i know about PostgreSQL and its contributions through open source .i am also intersted to take part in Read/write transaction-level routing in Odyssey (2019).\n> Looking forward for quick response from your side and i want to join your team and contribute to development of PostgreSQL.\n\nThat's great that you want to contribute to PostgreSQL! And development of Odyssey is one of many cool project ides proposed by PostgreSQL for GSoC.\n\nI'm one of the mentors for any project that would relate to Odyssey. And I'll be happy to answer any question about Odyssey.\n\nBut I should note one important things: Odyssey is written completely in C. There is no HTML, CSS, JAVASCRIPT, PYTHON and even SQL. 
Though, discussed project relates to SQL to some degree.\nThere are some ideas requiring Python, I'd like to encourage you look closer to them.\n\nBest regards, Andrey Borodin.\n", "msg_date": "Wed, 13 Mar 2019 17:32:23 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: GSOC Application" }, { "msg_contents": "Thank you for your response.\n\nOn Wed, Mar 13, 2019 at 6:02 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> Hi, Pavan!\n>\n> > 12 марта 2019 г., в 12:01, pavan gudivada <pavangudivada163202@gmail.com>\n> написал(а):\n> >\n> > I am Pavan_Gudivada.I have good knowledge in HTML,\n> CSS,JAVASCRIPT,PYTHON and SQL.After i know about PostgreSQL and its\n> contributions through open source .i am also intersted to take part in\n> Read/write transaction-level routing in Odyssey (2019).\n> > Looking forward for quick response from your side and i want to join\n> your team and contribute to development of PostgreSQL.\n>\n> That's great that you want to contribute to PostgreSQL! And development of\n> Odyssey is one of many cool project ides proposed by PostgreSQL for GSoC.\n>\n> I'm one of the mentors for any project that would relate to Odyssey. And\n> I'll be happy to answer any question about Odyssey.\n>\n> But I should note one important things: Odyssey is written completely in\n> C. There is no HTML, CSS, JAVASCRIPT, PYTHON and even SQL. Though,\n> discussed project relates to SQL to some degree.\n> There are some ideas requiring Python, I'd like to encourage you look\n> closer to them.\n>\n> Best regards, Andrey Borodin.\n\nThank you for your response. 
", "msg_date": "Tue, 19 Mar 2019 11:16:30 +0530", "msg_from": "pavan gudivada <pavangudivada163202@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSOC Application" } ]
[ { "msg_contents": "I think pg_rewind's feature to rewind the promoted standby as a new\nstandby is broken in 11\n\nSTEPS:\n1. create master standby setup.\nUse below script for same.\n\n2. Promote the standby\n[mithuncy@localhost pgrewmasterbin]$ ./bin/pg_ctl -D standby promote\nwaiting for server to promote.... done\nserver promoted\n\n3. In promoted standby create a database and a table in the new database.\n[mithuncy@localhost pgrewmasterbin]$ ./bin/psql -p 5433 postgres\npostgres=# create database db1;\nCREATE DATABASE\npostgres=# \\c db1\nYou are now connected to database \"db1\" as user \"mithuncy\".\ndb1=# create table t1 (t int);\nCREATE TABLE\n\n4. try to rewind the newly promoted standby (with old master as source)\n[mithuncy@localhost pgrewmasterbin]$ ./bin/pg_ctl -D standby stop\nwaiting for server to shut down....... done\nserver stopped\n[mithuncy@localhost pgrewmasterbin]$ ./bin/pg_rewind -D standby\n--source-server=\"host=127.0.0.1 port=5432 user=mithuncy\ndbname=postgres\"\nservers diverged at WAL location 0/3000060 on timeline 1\nrewinding from last common checkpoint at 0/2000060 on timeline 1\ncould not remove directory \"standby/base/16384\": Directory not empty\nFailure, exiting\n\nNote: dry run was successful!\n[mithuncy@localhost pgrewmasterbin]$ ./bin/pg_rewind -D standby\n--source-server=\"host=127.0.0.1 port=5432 user=mithuncy\ndbname=postgres\" -n\nservers diverged at WAL location 0/3000060 on timeline 1\nrewinding from last common checkpoint at 0/2000060 on timeline 1\nDone!\n\nAlso I have tested same in version 10 it works fine there.\n\nDid below commit has broken this feature? 
(Thanks to kuntal for\nidentifying same)\ncommit 266b6acb312fc440c1c1a2036aa9da94916beac6\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: Thu Mar 29 04:56:52 2018 +0900\nMake pg_rewind skip files and directories that are removed during server start.\n\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 12 Mar 2019 13:23:51 +0530", "msg_from": "Mithun Cy <mithun.cy@gmail.com>", "msg_from_op": true, "msg_subject": "pg_rewind : feature to rewind promoted standby is broken!" }, { "msg_contents": "On Tue, Mar 12, 2019 at 01:23:51PM +0530, Mithun Cy wrote:\n> I think pg_rewind's feature to rewind the promoted standby as a new\n> standby is broken in 11\n\nConfirmed, it is.\n\n> Also I have tested same in version 10 it works fine there.\n> \n> Did below commit has broken this feature? (Thanks to kuntal for\n> identifying same)\n> commit 266b6acb312fc440c1c1a2036aa9da94916beac6\n> Author: Fujii Masao <fujii@postgresql.org>\n> Date: Thu Mar 29 04:56:52 2018 +0900\n> Make pg_rewind skip files and directories that are removed during server start.\n\nAnd you are pointing out to the correct commit. The issue is that\nprocess_target_file() has added a call to check_file_excluded(), and\nthis skips all the folders which it thinks can be skipped. One\nproblem though is that we also filter out pg_internal.init, which is\npresent in each database folder, and remains in the target directory\nmarked for deletion. Then, when the deletion happens, the failure\nhappens as the directory is not fully empty.\n\nWe could consider using rmtree() instead, but it is a nice sanity\ncheck to make sure that all the entries in a path have been marked for\ndeletion. Just removing the filter check on the target is fine I\nthink, as we only should have the filters anyway to avoid copying\nunnecessary files from the source. Attached is a patch. 
What do you\nthink?\n\n(check_file_excluded() could be simplified further but it's also nice\nto keep some mirroring in this API if we finish by using it for target\nfiles at some point.)\n--\nMichael", "msg_date": "Tue, 12 Mar 2019 18:23:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_rewind : feature to rewind promoted standby is broken!" }, { "msg_contents": "On Tue, Mar 12, 2019 at 06:23:01PM +0900, Michael Paquier wrote:\n> And you are pointing out to the correct commit. The issue is that\n> process_target_file() has added a call to check_file_excluded(), and\n> this skips all the folders which it thinks can be skipped. One\n> problem though is that we also filter out pg_internal.init, which is\n> present in each database folder, and remains in the target directory\n> marked for deletion. Then, when the deletion happens, the failure\n> happens as the directory is not fully empty.\n\nOkay, here is a refined patch with better comments, the addition of a\ntest case (creating tables in the new databases in 002_databases.pl is\nenough to trigger the problem).\n\nCould you check that it fixes the issue on your side?\n--\nMichael", "msg_date": "Wed, 13 Mar 2019 17:08:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_rewind : feature to rewind promoted standby is broken!" }, { "msg_contents": "On Wed, Mar 13, 2019 at 1:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 12, 2019 at 06:23:01PM +0900, Michael Paquier wrote:\n> > And you are pointing out to the correct commit. The issue is that\n> > process_target_file() has added a call to check_file_excluded(), and\n> > this skips all the folders which it thinks can be skipped. One\n> > problem though is that we also filter out pg_internal.init, which is\n> > present in each database folder, and remains in the target directory\n> > marked for deletion. 
Then, when the deletion happens, the failure\n> > happens as the directory is not fully empty.\n>\n> Okay, here is a refined patch with better comments, the addition of a\n> test case (creating tables in the new databases in 002_databases.pl is\n> enough to trigger the problem).\n>\n\nI have not looked into the patch but quick test show it has fixed the above\nissue.\n\n[mithuncy@localhost pgrewindbin]$ ./bin/pg_rewind -D standby\n--source-server=\"host=127.0.0.1 port=5432 user=mithuncy dbname=postgres\" -n\nservers diverged at WAL location 0/3000000 on timeline 1\nrewinding from last common checkpoint at 0/2000060 on timeline 1\nDone!\n[mithuncy@localhost pgrewindbin]$ ./bin/pg_rewind -D standby\n--source-server=\"host=127.0.0.1 port=5432 user=mithuncy dbname=postgres\"\nservers diverged at WAL location 0/3000000 on timeline 1\nrewinding from last common checkpoint at 0/2000060 on timeline 1\nDone!\n\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 14 Mar 2019 00:15:57 +0530", "msg_from": "Mithun Cy <mithun.cy@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_rewind : feature to rewind promoted standby is broken!" }, { "msg_contents": "On Thu, Mar 14, 2019 at 12:15:57AM +0530, Mithun Cy wrote:\n> I have not looked into the patch but quick test show it has fixed the above\n> issue.\n\nThanks for confirming, Mythun.
I'll think about the logic of this\n> patch for a couple of days in the background, then I'll try to commit\n> it likely at the beginning of next week.\n\nCommitted. I have spent extra time polishing the comments to make the\nfiltering rules clearer when processing the source and target files,\nparticularly why they are useful the way they are.\n--\nMichael", "msg_date": "Mon, 18 Mar 2019 10:36:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_rewind : feature to rewind promoted standby is broken!" } ]
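The "could not remove directory ... Directory not empty" symptom in this thread is ordinary rmdir() behavior: a directory can be removed only once every entry in it has been deleted, which is why a pg_internal.init file skipped by the filter made the cleanup fail. A standalone illustration follows — demo code under invented names, not pg_rewind's actual implementation:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Illustration only (not pg_rewind code): create a directory holding a
 * single leftover file, show that rmdir() refuses to remove it, then
 * delete the leftover and retry.  Returns 0 when the expected sequence
 * of failures and successes is observed.
 */
static int
try_rmdir_with_leftover(const char *dir, const char *leftover)
{
    char path[1024];
    int  fd;

    if (mkdir(dir, 0700) != 0)
        return -1;
    snprintf(path, sizeof(path), "%s/%s", dir, leftover);
    fd = open(path, O_CREAT | O_WRONLY, 0600);   /* the leftover entry */
    if (fd < 0)
        return -1;
    close(fd);

    if (rmdir(dir) == 0)
        return -1;                /* should have failed: dir not empty */
    if (errno != ENOTEMPTY && errno != EEXIST)
        return -1;                /* POSIX allows either errno here */

    unlink(path);                 /* remove the leftover first ... */
    return rmdir(dir);            /* ... and now the rmdir succeeds */
}
```

POSIX permits either ENOTEMPTY or EEXIST for a non-empty directory, so the sketch accepts both. Deleting entries one by one and then calling rmdir(), rather than force-removing the whole tree, acts as the sanity check Michael describes: every entry must really have been scheduled for removal, which is exactly how this bug surfaced.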
[ { "msg_contents": "Hi,\n\nGetting \"*ERROR: bogus varno: 2*\" and below is the sample SQL.\n\n - Create table \"test_bogus\" as below.\n\n CREATE TABLE test_bogus(\n id serial PRIMARY KEY,\n display_name text NOT NULL,\n description text NOT NULL,\n object_type integer NOT NULL,\n sp_oid integer NOT NULL DEFAULT 0\n );\n\n - Create procedure as below.\n\n CREATE OR REPLACE FUNCTION update_sp_oid() RETURNS\ntrigger AS $$\n BEGIN\n EXECUTE 'SELECT CASE WHEN MAX(sp_oid) > 0 THEN\nMAX(sp_oid) + 1 ELSE 1 END FROM\ntest_bogus WHERE object_type = $1' USING\n NEW.object_type INTO NEW.sp_oid;\n RETURN NEW;\n END\n $$ LANGUAGE plpgsql;\n\n - Create trigger on table as below.\n\n CREATE TRIGGER test_bogus_sp_oid\n BEFORE UPDATE ON test_bogus\n FOR EACH ROW\n WHEN (OLD.object_type != NEW.object_type)\n EXECUTE PROCEDURE update_sp_oid();\n\n - Execute below sql to get the result and it shows error \"bogus varno:\n 2\".\n\n SELECT t.oid,t.tgname AS name, t.xmin, t.*,\nrelname,\n CASE WHEN relkind = 'r' THEN TRUE ELSE FALSE END AS parentistable,\n nspname, des.description, l.lanname, p.prosrc, p.proname AS tfunction,\n trim(pg_catalog.pg_get_expr(t.tgqual, t.tgrelid), '()') AS\nwhenclause,\n (CASE WHEN tgconstraint != 0::OID THEN true ElSE false END) AS\nis_constraint_trigger,\n (CASE WHEN tgenabled = 'O' THEN true ElSE false END) AS\nis_enable_trigger,\n tgoldtable,\n tgnewtable\n FROM pg_trigger t\n JOIN pg_class cl ON cl.oid=tgrelid\n JOIN pg_namespace na ON na.oid=relnamespace\n LEFT OUTER JOIN pg_description des ON (des.objoid=t.oid AND\ndes.classoid='pg_trigger'::regclass)\n LEFT OUTER JOIN pg_proc p ON p.oid=t.tgfoid\n LEFT OUTER JOIN pg_language l ON l.oid=p.prolang\n WHERE NOT tgisinternal\n AND tgrelid = 22584::OID\n AND t.oid = 22595::OID\n ORDER BY tgname;\n\n\nBelow is the example, i have executed above mentioned command on psql\nprompt.\n\npostgres=# select version();\n 
version\n\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 10.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-23), 64-bit\n(1 row)\n\npostgres=# CREATE TABLE test_bogus(\npostgres(# id serial PRIMARY KEY,\npostgres(# display_name text NOT NULL,\npostgres(# description text NOT NULL,\npostgres(# object_type integer NOT NULL,\npostgres(# sp_oid integer NOT NULL DEFAULT 0\npostgres(# );\nCREATE TABLE\npostgres=#\npostgres=#\npostgres=# CREATE OR REPLACE FUNCTION update_sp_oid() RETURNS trigger AS $$\npostgres$# BEGIN\npostgres$# EXECUTE 'SELECT CASE WHEN MAX(sp_oid) > 0 THEN MAX(sp_oid) +\n1 ELSE 1 END FROM test_bogus WHERE object_type = $1' USING NEW.object_type\nINTO NEW.sp_oid;\npostgres$# RETURN NEW;\npostgres$# END\npostgres$# $$ LANGUAGE plpgsql;\nCREATE FUNCTION\npostgres=#\npostgres=#\npostgres=#\npostgres=#\npostgres=# CREATE TRIGGER test_bogus_sp_oid\npostgres-# BEFORE UPDATE ON test_bogus\npostgres-# FOR EACH ROW\npostgres-# WHEN (OLD.object_type != NEW.object_type)\npostgres-# EXECUTE PROCEDURE update_sp_oid();\nCREATE TRIGGER\npostgres=#\npostgres=#\npostgres=#\npostgres=# SELECT rel.oid, rel.relname AS name\npostgres-# FROM pg_class rel\npostgres-# WHERE rel.relkind IN ('r','s','t') AND rel.relnamespace =\n2200::oid\npostgres-# ORDER BY rel.relname;\n oid | name\n-------+------------\n 22584 | test_bogus\n(1 row)\n\npostgres=# SELECT t.oid, t.tgname as name, (CASE WHEN tgenabled = 'O' THEN\ntrue ElSE false END) AS is_enable_trigger FROM pg_trigger t WHERE tgrelid =\n21723::OID ORDER BY tgname;\n oid | name | is_enable_trigger\n-----+------+-------------------\n(0 rows)\n\npostgres=# SELECT t.oid, t.tgname as name, (CASE WHEN tgenabled = 'O' THEN\ntrue ElSE false END) AS is_enable_trigger FROM pg_trigger t WHERE tgrelid =\n22584::OID ORDER BY tgname;\n oid | name | is_enable_trigger\n-------+-------------------+-------------------\n 22595 | 
test_bogus_sp_oid | t\n(1 row)\n\npostgres=# SELECT t.oid,t.tgname AS name, t.xmin, t.*, relname, CASE WHEN\nrelkind = 'r' THEN TRUE ELSE FALSE END AS parentistable,\npostgres-# nspname, des.description, l.lanname, p.prosrc, p.proname AS\ntfunction,\npostgres-# trim(pg_catalog.pg_get_expr(t.tgqual, t.tgrelid), '()') AS\nwhenclause,\npostgres-# (CASE WHEN tgconstraint != 0::OID THEN true ElSE false END)\nAS is_constraint_trigger,\npostgres-# (CASE WHEN tgenabled = 'O' THEN true ElSE false END) AS\nis_enable_trigger,\npostgres-# tgoldtable,\npostgres-# tgnewtable\npostgres-# FROM pg_trigger t\npostgres-# JOIN pg_class cl ON cl.oid=tgrelid\npostgres-# JOIN pg_namespace na ON na.oid=relnamespace\npostgres-# LEFT OUTER JOIN pg_description des ON (des.objoid=t.oid AND\ndes.classoid='pg_trigger'::regclass)\npostgres-# LEFT OUTER JOIN pg_proc p ON p.oid=t.tgfoid\npostgres-# LEFT OUTER JOIN pg_language l ON l.oid=p.prolang\npostgres-# WHERE NOT tgisinternal\npostgres-# AND tgrelid = 22584::OID\npostgres-# AND t.oid = 22595::OID\npostgres-# ORDER BY tgname;\n*ERROR: bogus varno: 2*\npostgres=#\n\nIs this error message expected or what should be the behaviour ? Let us\nknow your thoughts.\n\nThanks,\nNeel Patel", "msg_date": "Tue, 12 Mar 2019 23:04:17 +0530", "msg_from": "Neel Patel <neel.patel@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Getting ERROR: bogus varno: 2" }, { "msg_contents": "Neel Patel <neel.patel@enterprisedb.com> writes:\n> Getting \"*ERROR: bogus varno: 2*\" and below is the sample SQL.\n\nHmm, reproduced here on HEAD.\n\n> Is this error message expected or what should be the behaviour ?\n\nIt's certainly a bug.
Don't know the cause yet, but it looks like\npg_get_expr() is getting confused:\n\n#0 errfinish (dummy=0) at elog.c:411\n#1 0x00000000008aea4f in elog_finish (elevel=<value optimized out>, \n fmt=0xaa30e3 \"bogus varno: %d\") at elog.c:1365\n#2 0x000000000084d761 in resolve_special_varno (node=0x2e70598, \n context=0x7ffc79094e80, private=0x0, \n callback=0x852b90 <get_special_variable>) at ruleutils.c:6862\n#3 0x00000000008529d5 in get_variable (var=0x2e70598, \n levelsup=<value optimized out>, istoplevel=false, context=0x7ffc79094e80)\n at ruleutils.c:6652\n#4 0x0000000000850c21 in get_rule_expr (node=0x2e70598, \n context=0x7ffc79094e80, showimplicit=true) at ruleutils.c:7812\n#5 0x00000000008521ba in get_oper_expr (node=0x2e70488, \n context=0x7ffc79094e80, showimplicit=<value optimized out>)\n at ruleutils.c:9076\n#6 get_rule_expr (node=0x2e70488, context=0x7ffc79094e80, \n showimplicit=<value optimized out>) at ruleutils.c:7921\n#7 0x0000000000858a0d in deparse_expression_pretty (expr=0x2e70488, \n dpcontext=0x2e70be8, forceprefix=<value optimized out>, \n showimplicit=false, prettyFlags=2, startIndent=0) at ruleutils.c:3202\n#8 0x000000000085a682 in pg_get_expr_worker (expr=<value optimized out>, \n relid=33887, relname=0x2e70248 \"test_bogus\", prettyFlags=2)\n at ruleutils.c:2393\n\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 13:43:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR: bogus varno: 2" }, { "msg_contents": "I wrote:\n> Neel Patel <neel.patel@enterprisedb.com> writes:\n>> Is this error message expected or what should be the behaviour ?\n\n> It's certainly a bug.\n\nOh, no, I take that back: it's not a bug, you're just abusing\npg_get_expr() to try to do something it can't do, which is make\nsense of an expression involving more than one relation.\n(OLD and NEW are different relations in a trigger WHEN clause.)\n\nYou can use pg_get_triggerdef() to decompile a trigger 
WHEN clause,\nalthough that might do more than you want.\n\nNot sure if there's any value in trying to make the failure\nmessage more user-friendly. You can get weird errors by\nmisusing pg_get_expr() in other ways too, such as giving it\nthe wrong relation OID.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 13:57:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR: bogus varno: 2" } ]
[ { "msg_contents": "Add support for hyperbolic functions, as well as log10().\n\nThe SQL:2016 standard adds support for the hyperbolic functions\nsinh(), cosh(), and tanh(). POSIX has long required libm to\nprovide those functions as well as their inverses asinh(),\nacosh(), atanh(). Hence, let's just expose the libm functions\nto the SQL level. As with the trig functions, we only implement\nversions for float8, not numeric.\n\nFor the moment, we'll assume that all platforms actually do have\nthese functions; if experience teaches otherwise, some autoconf\neffort may be needed.\n\nSQL:2016 also adds support for base-10 logarithm, but with the\nfunction name log10(), whereas the name we've long used is log().\nAdd aliases named log10() for the float8 and numeric versions.\n\nLætitia Avrot\n\nDiscussion: https://postgr.es/m/CAB_COdguG22LO=rnxDQ2DW1uzv8aQoUzyDQNJjrR4k00XSgm5w@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f1d85aa98ee71d9662309f6f0384b2f7f8f16f02\n\nModified Files\n--------------\ndoc/src/sgml/func.sgml | 107 ++++++++++++++++++++++-\nsrc/backend/utils/adt/float.c | 160 ++++++++++++++++++++++++++++++++++-\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 25 ++++++\nsrc/test/regress/expected/float8.out | 37 ++++++++\nsrc/test/regress/sql/float8.sql | 8 ++\n6 files changed, 333 insertions(+), 6 deletions(-)\n\n", "msg_date": "Tue, 12 Mar 2019 19:55:14 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Tue, Mar 12, 2019 at 07:55:14PM +0000, Tom Lane wrote:\n> Add support for hyperbolic functions, as well as log10().\n> \n> The SQL:2016 standard adds support for the hyperbolic functions\n> sinh(), cosh(), and tanh(). POSIX has long required libm to\n> provide those functions as well as their inverses asinh(),\n> acosh(), atanh(). 
Hence, let's just expose the libm functions\n> to the SQL level. As with the trig functions, we only implement\n> versions for float8, not numeric.\n\njacana is not a fan of this commit, and failed on float8:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-03-13%2000%3A00%3A27\n@@ -476,7 +476,7 @@\nSELECT asinh(float8 '0');\n asinh\n-------\n- 0\n+ -0\n(1 row)\n--\nMichael", "msg_date": "Wed, 13 Mar 2019 11:21:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 12, 2019 at 07:55:14PM +0000, Tom Lane wrote:\n>> Add support for hyperbolic functions, as well as log10().\n\n> jacana is not a fan of this commit, and failed on float8:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-03-13%2000%3A00%3A27\n> @@ -476,7 +476,7 @@\n> SELECT asinh(float8 '0');\n> asinh\n> -------\n> - 0\n> + -0\n> (1 row)\n\nYeah. I warned Laetitia about not testing corner cases, but\nit hadn't occurred to me that zero might be a corner case :-(\n\nI'm inclined to leave it as-is for a day or so and see if any\nother failures turn up, before deciding what to do about it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Mar 2019 23:16:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Tue, Mar 12, 2019 at 11:16:42PM -0400, Tom Lane wrote:\n> Yeah. I warned Laetitia about not testing corner cases, but\n> it hadn't occurred to me that zero might be a corner case :-(\n\nI was honestly expecting more failures than that when I saw the patch\nlanding. 
This stuff is tricky :)\n\n> I'm inclined to leave it as-is for a day or so and see if any\n> other failures turn up, before deciding what to do about it.\n\nFine by me.\n--\nMichael", "msg_date": "Wed, 13 Mar 2019 14:03:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 12, 2019 at 11:16:42PM -0400, Tom Lane wrote:\n>> I'm inclined to leave it as-is for a day or so and see if any\n>> other failures turn up, before deciding what to do about it.\n\n> Fine by me.\n\nWell, so far jacana is the only critter that's shown any problem.\n\nI don't find any of the possible solutions to be super attractive:\n\n1. Put in an explicit special case, along the lines of\n\n\tif (arg1 == 0.0)\n\t\tresult = arg1; /* Handle 0 and -0 explicitly */\n\telse\n\t\tresult = asinh(arg1);\n\nAside from being ugly, this'd mean that our regression tests weren't\nreally exercising the library asinh function at all.\n\n2. Drop that test case entirely, again leaving us with no coverage\nof the asinh function.\n\n3. Switch to some other test value besides 0. This is also kinda ugly\nbecause we almost certainly won't get identical results everywhere.\nHowever, we could probably make the results pretty portable by using\nextra_float_digits to suppress the low-order digit or two. (If we did\nthat, I'd be inclined to do similarly for the other hyperbolic functions,\njust so we're testing cases that actually show different results, and\nthereby proving we didn't fat-finger which function we're calling.)\n\n4. Carry an additional expected-results file.\n\n5. Write our own asinh implementation. Dean already did the work, of\ncourse, but I think this'd be way overkill just because one platform\ndid their roundoff handling sloppily. 
We're not in the business\nof implementing transcendental functions better than libm does it.\n\n\nOf these, probably the least bad is #3, even though it might require\na few rounds of experimentation to find the best extra_float_digits\nsetting to use. I'll go try it without any roundoff, just to see\nwhat the raw results look like ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 17:56:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Wed, 13 Mar 2019, 21:56 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Of these, probably the least bad is #3, even though it might require\n> a few rounds of experimentation to find the best extra_float_digits\n> setting to use. I'll go try it without any roundoff, just to see\n> what the raw results look like ...\n>\n\n\nYeah, that seems like a reasonable thing to try.\n\nI'm amazed that jacana's asinh() returned -0 for an input of +0. I'm not\naware of any implementation that does that. I'd be quite interested to know\nwhat it returned for an input like 1e-20. If that returned any variety of\nzero, I'd say that it's worse than useless. 
Another interesting test case\nwould be whether or not it satisfies asinh(-x) = -asinh(x) for a variety of\ndifferent values of x, because that's something that commonly breaks down\nbadly with naive implementations.\n\nIt's not unreasonable to expect these functions to be accurate to within\nthe last 1 or 2 digits, so testing with extra_float_digits or whatever\nseems reasonable, but I think a wider variety of test inputs is required.\n\nI also wonder if we should be doing what we do for the regular trig\nfunctions and explicitly handle special cases like Inf and NaN to ensure\nPOSIX compatibility on all platforms.\n\nRegards,\nDean\n", "msg_date": "Thu, 14 Mar 2019 00:35:30 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, {
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> It's not unreasonable to expect these functions to be accurate to within\n> the last 1 or 2 digits, so testing with extra_float_digits or whatever\n> seems reasonable, but I think a wider variety of test inputs is required.\n\nMeh. As I said before, we're not in the business of improving on what\nlibm does --- if someone has a beef with the results, they need to take\nit to their platform's libm maintainer, not us. The point of testing\nthis at all is just to ensure that we've wired up the SQL functions\nto the library functions correctly.\n\n> I also wonder if we should be doing what we do for the regular trig\n> functions and explicitly handle special cases like Inf and NaN to ensure\n> POSIX compatibility on all platforms.\n\nI'm not too excited about this, but perhaps it would be interesting to\nthrow in tests of the inf/nan cases temporarily, just to see how big\na problem there is of that sort. If the answer comes out to be\n\"all modern platforms get this right\", I don't think it's our job to\nclean up after the stragglers. 
But if the answer is not that, maybe\nI could be talked into spending code on it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 20:48:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Wed, Mar 13, 2019 at 8:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. As I said before, we're not in the business of improving on what\n> libm does --- if someone has a beef with the results, they need to take\n> it to their platform's libm maintainer, not us. The point of testing\n> this at all is just to ensure that we've wired up the SQL functions\n> to the library functions correctly.\n\nPretty sure we don't even need a test for that. asinh() isn't going\nto call creat() by mistake.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 22:22:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 13, 2019 at 8:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Meh. As I said before, we're not in the business of improving on what\n>> libm does --- if someone has a beef with the results, they need to take\n>> it to their platform's libm maintainer, not us. The point of testing\n>> this at all is just to ensure that we've wired up the SQL functions\n>> to the library functions correctly.\n\n> Pretty sure we don't even need a test for that. asinh() isn't going\n> to call creat() by mistake.\n\nNo, but that's not the hazard. I have a very fresh-in-mind example:\nat one point while tweaking Laetitia's patch, I'd accidentally changed\ndatanh so that it called tanh not atanh. 
The previous set of tests did\nnot reveal that :-(\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 22:39:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Wed, Mar 13, 2019 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Mar 13, 2019 at 8:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Meh. As I said before, we're not in the business of improving on what\n> >> libm does --- if someone has a beef with the results, they need to take\n> >> it to their platform's libm maintainer, not us. The point of testing\n> >> this at all is just to ensure that we've wired up the SQL functions\n> >> to the library functions correctly.\n>\n> > Pretty sure we don't even need a test for that. asinh() isn't going\n> > to call creat() by mistake.\n>\n> No, but that's not the hazard. I have a very fresh-in-mind example:\n> at one point while tweaking Laetitia's patch, I'd accidentally changed\n> datanh so that it called tanh not atanh. The previous set of tests did\n> not reveal that :-(\n\nWell, that was a goof, but it's not likely that such a regression will\never be reintroduced.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 22:40:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 13, 2019 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> No, but that's not the hazard. I have a very fresh-in-mind example:\n>> at one point while tweaking Laetitia's patch, I'd accidentally changed\n>> datanh so that it called tanh not atanh. 
The previous set of tests did\n>> not reveal that :-(\n\n> Well, that was a goof, but it's not likely that such a regression will\n> ever be reintroduced.\n\nSure, but how about this for another example: maybe a given platform\nhasn't got these functions (or they're in a different library we\ndidn't pull in), but you don't see a failure until you actually\ncall them. We try to set up our link options so that that sort\nof failure is reported at build time, but I wouldn't care to bet\nthat we've succeeded at that everywhere.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 22:44:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "\nOn 3/13/19 5:56 PM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Tue, Mar 12, 2019 at 11:16:42PM -0400, Tom Lane wrote:\n>>> I'm inclined to leave it as-is for a day or so and see if any\n>>> other failures turn up, before deciding what to do about it.\n>> Fine by me.\n> Well, so far jacana is the only critter that's shown any problem.\n>\n> I don't find any of the possible solutions to be super attractive:\n>\n> 1. Put in an explicit special case, along the lines of\n>\n> \tif (arg1 == 0.0)\n> \t\tresult = arg1; /* Handle 0 and -0 explicitly */\n> \telse\n> \t\tresult = asinh(arg1);\n>\n> Aside from being ugly, this'd mean that our regression tests weren't\n> really exercising the library asinh function at all.\n\n\nOr we could possibly call the function and then turn a result of -0 into 0?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 Mar 2019 23:06:51 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." 
}, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Or we could possibly call the function and then turn a result of -0 into 0?\n\nBut -0 is the correct output if the input is -0. So that approach\nrequires distinguishing -0 from 0, which is annoyingly difficult.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 23:18:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "At Wed, 13 Mar 2019 23:18:27 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <2503.1552533507@sss.pgh.pa.us>\ntgl> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\ntgl> > Or we could possibly call the function and then turn a result of -0 into 0?\ntgl> \ntgl> But -0 is the correct output if the input is -0. So that approach\ntgl> requires distinguishing -0 from 0, which is annoyingly difficult.\n\nI think just turning both of -0 and +0 into +0 works, and, FWIW,\nit is what is done in geo_ops.c (e.g. line_construct()) as a kind\nof normalization and I think it is legit for geo_ops, but I don't\nthink so for fundamental functions like (d)asinh().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 12:30:00 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as\n log10()." 
}, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I'm amazed that jacana's asinh() returned -0 for an input of +0.\n\nEven more amusingly, it returns NaN for acosh('infinity'), cf\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-03-14%2003%3A00%3A34\n\nPresumably that means they calculated \"infinity - infinity\" at some\npoint, but why?\n\nSo far, no other failures ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 00:41:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "On Thu, 14 Mar 2019 at 04:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > I'm amazed that jacana's asinh() returned -0 for an input of +0.\n>\n> Even more amusingly, it returns NaN for acosh('infinity'), cf\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-03-14%2003%3A00%3A34\n>\n> Presumably that means they calculated \"infinity - infinity\" at some\n> point, but why?\n>\n\nGiven the -0 result, I don't find that particularly surprising. I\nsuspect lots of formulae would end up doing that without proper\nspecial-case handling upfront.\n\nIt looks like that's the only platform that isn't POSIX compliant\nthough, so maybe it's not worth worrying about.\n\nRegards,\nDean\n\n", "msg_date": "Thu, 14 Mar 2019 08:28:16 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." 
}, { "msg_contents": "\nOn 3/14/19 12:41 AM, Tom Lane wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> I'm amazed that jacana's asinh() returned -0 for an input of +0.\n> Even more amusingly, it returns NaN for acosh('infinity'), cf\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-03-14%2003%3A00%3A34\n>\n> Presumably that means they calculated \"infinity - infinity\" at some\n> point, but why?\n>\n> So far, no other failures ...\n>\n> \t\t\t\n\n\n\nI have replicated this on my Msys2 test system.\n\n\nI assume it's a bug in the mingw math library. I think jacana is the\nonly currently reporting mingw member :-( The MSVC members appear to be\nhappy.\n\n\nI have several releases of the mingw64 toolsets installed on jacana -\nI'll try an earlier version to see if it makes a difference.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Mar 2019 08:54:49 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/14/19 12:41 AM, Tom Lane wrote:\n>> So far, no other failures ...\n\n> I have replicated this on my Msys2 test system.\n> I assume it's a bug in the mingw math library. I think jacana is the\n> only currently reporting mingw member :-( The MSVC members appear to be\n> happy.\n> I have several releases of the mingw64 toolsets installed on jacana -\n> I'll try an earlier version to see if it makes a difference.\n\nYeah, it would be interesting to know whether it's consistent across\ndifferent mingw versions.\n\nSo far, though, jacana is still the only buildfarm animal that's having\ntrouble with those tests as of c015f853b. 
I want to wait another day or\nso in hopes of getting more reports from stragglers. But assuming that\nthat stays true, I do not feel any need to try to work around jacana's\nissues. We already have proof of two deficiencies in their\nhyoerbolic-function code, and considering the tiny number of test cases\nwe've tried, it'd be folly to think there are only two. I don't want\nto embark on a project to clean that up for the sake of one substandard\nimplementation.\n\nI feel therefore that what we should do (barring new evidence) is either\n\n1. Remove all the inf/nan test cases for the hyoerbolic functions, on\nthe grounds that they're not really worth expending buildfarm cycles on\nin the long run; or\n\n2. Just comment out the one failing test, with a note about why.\n\nI haven't got a strong preference as to which. Thoughts?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 15:08:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "\nOn 3/14/19 3:08 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/14/19 12:41 AM, Tom Lane wrote:\n>>> So far, no other failures ...\n>> I have replicated this on my Msys2 test system.\n>> I assume it's a bug in the mingw math library. I think jacana is the\n>> only currently reporting mingw member :-( The MSVC members appear to be\n>> happy.\n>> I have several releases of the mingw64 toolsets installed on jacana -\n>> I'll try an earlier version to see if it makes a difference.\n> Yeah, it would be interesting to know whether it's consistent across\n> different mingw versions.\n\n\n\nTried with mingw64-gcc-5.4.0  (jacana is currently on 8.1.0). Same result.\n\n\n>\n> So far, though, jacana is still the only buildfarm animal that's having\n> trouble with those tests as of c015f853b. 
I want to wait another day or\n> so in hopes of getting more reports from stragglers. But assuming that\n> that stays true, I do not feel any need to try to work around jacana's\n> issues. We already have proof of two deficiencies in their\n> hyoerbolic-function code, and considering the tiny number of test cases\n> we've tried, it'd be folly to think there are only two. I don't want\n> to embark on a project to clean that up for the sake of one substandard\n> implementation.\n>\n> I feel therefore that what we should do (barring new evidence) is either\n>\n> 1. Remove all the inf/nan test cases for the hyoerbolic functions, on\n> the grounds that they're not really worth expending buildfarm cycles on\n> in the long run; or\n>\n> 2. Just comment out the one failing test, with a note about why.\n>\n> I haven't got a strong preference as to which. Thoughts?\n>\n> \t\t\t\n\n\n2. would help us memorialize the problem.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Mar 2019 17:09:30 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/14/19 3:08 PM, Tom Lane wrote:\n>> I feel therefore that what we should do (barring new evidence) is either\n>> 1. Remove all the inf/nan test cases for the hyoerbolic functions ...\n>> 2. Just comment out the one failing test, with a note about why.\n\n> 2. would help us memorialize the problem.\n\nHearing no other comments, done that way.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 16 Mar 2019 15:51:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add support for hyperbolic functions, as well as log10()." } ]
[ { "msg_contents": "Hello hackers,\n\nA user complained about CREATE DATABASE taking > 200ms even with fsync\nset to off. Andres pointed out that that'd be the clunky poll/sleep\nloops in checkpointer.c.\n\nHere's a draft patch to use condition variables instead.\n\nUnpatched:\n\npostgres=# checkpoint;\nCHECKPOINT\nTime: 101.848 ms\n\nPatched:\n\npostgres=# checkpoint;\nCHECKPOINT\nTime: 1.851 ms\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Wed, 13 Mar 2019 11:56:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Using condition variables to wait for checkpoints" }, { "msg_contents": "Hi,\n\nOn 2019-03-13 11:56:19 +1300, Thomas Munro wrote:\n> A user complained about CREATE DATABASE taking > 200ms even with fsync\n> set to off. Andres pointed out that that'd be the clunky poll/sleep\n> loops in checkpointer.c.\n> \n> Here's a draft patch to use condition variables instead.\n> \n> Unpatched:\n> \n> postgres=# checkpoint;\n> CHECKPOINT\n> Time: 101.848 ms\n> \n> Patched:\n> \n> postgres=# checkpoint;\n> CHECKPOINT\n> Time: 1.851 ms\n\nNeat. That's with tiny shmem though, I bet?\n\n\n> + <row>\n> + <entry><literal>CheckpointDone</literal></entry>\n> + <entry>Waiting for a checkpoint to complete.</entry>\n> + </row>\n\n> + <row>\n> + <entry><literal>CheckpointStart</literal></entry>\n> + <entry>Waiting for a checkpoint to start.</entry>\n> + </row>\n\nNot sure I like these much, but I can't quite come up with something\nmeaningfully better.\n\n\nLooks good to me. 
Having useful infrastructure is sure cool.\n\n\n- Andres\n\n", "msg_date": "Tue, 12 Mar 2019 16:12:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Using condition variables to wait for checkpoints" }, { "msg_contents": "On Tue, Mar 12, 2019 at 7:12 PM Andres Freund <andres@anarazel.de> wrote:\n> Having useful infrastructure is sure cool.\n\nYay!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 08:15:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using condition variables to wait for checkpoints" }, { "msg_contents": "On Thu, Mar 14, 2019 at 1:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Mar 12, 2019 at 7:12 PM Andres Freund <andres@anarazel.de> wrote:\n> > Having useful infrastructure is sure cool.\n>\n> Yay!\n\n+1\n\nI renamed the CVs because the names I had used before broke the\nconvention that variables named ckpt_* are protected by ckpt_lck, and\npushed.\n\nThere are some other things like this in the tree (grepping for\npoll/pg_usleep loops finds examples in xlog.c, standby.c, ...). That\nmight be worth looking into.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Thu, 14 Mar 2019 11:05:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using condition variables to wait for checkpoints" }, { "msg_contents": "On Thu, Mar 14, 2019 at 11:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I renamed the CVs because the names I had used before broke the\n> convention that variables named ckpt_* are protected by ckpt_lck, and\n> pushed.\n\nErm... this made successful checkpoints slightly faster but failed\ncheckpoints infinitely slower. It would help if we woke up CV waiters\nin the error path too. 
Patch attached.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Fri, 5 Apr 2019 22:05:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using condition variables to wait for checkpoints" } ]
[ { "msg_contents": "Hello,\n\nPostgres today doesn't support waiting for a condition variable with a \ntimeout, although the framework it relies upon, does. This change wraps \nthe existing ConditionVariableSleep functionality and introduces a new \nAPI, ConditionVariableTimedSleep, to allow callers to specify a timeout \nvalue.\n\nA scenario that highlights this use case is a backend is waiting on \nstatus update from multiple workers but needs to time out if that signal \ndoesn't arrive within a certain period. There was a workaround prior \nto aced5a92, but with that change, the semantics are now different.\n\nI chose to go with -1 instead of 0 for the return from \nConditionVariableTimedSleep to indicate timeout error as it seems \ncleaner for this API. WaitEventSetWaitBlock returns -1 for timeout but \nWaitEventSetWait treats timeout as 0 (to represent 0 events indicating \ntimeout).\n\nIf there's an alternative, cleaner way to achieve this outcome, I am all \nears.\n\nThanks.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Tue, 12 Mar 2019 16:24:54 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "Hi Shawn,\n\nOn Wed, Mar 13, 2019 at 12:25 PM Shawn Debnath <sdn@amazon.com> wrote:\n> Postgres today doesn't support waiting for a condition variable with a\n> timeout, although the framework it relies upon, does. 
This change wraps\n> the existing ConditionVariableSleep functionality and introduces a new\n> API, ConditionVariableTimedSleep, to allow callers to specify a timeout\n> value.\n\nSeems reasonable, I think, and should be familiar to anyone used to\nwell known multithreading libraries.\n\n+/*\n+ * Wait for the given condition variable to be signaled or till timeout.\n+ * This should be called in a predicate loop that tests for a specific exit\n+ * condition and otherwise sleeps, like so:\n+ *\n+ * ConditionVariablePrepareToSleep(cv); // optional\n+ * while (condition for which we are waiting is not true)\n+ * ConditionVariableSleep(cv, wait_event_info);\n+ * ConditionVariableCancelSleep();\n+ *\n+ * wait_event_info should be a value from one of the WaitEventXXX enums\n+ * defined in pgstat.h. This controls the contents of pg_stat_activity's\n+ * wait_event_type and wait_event columns while waiting.\n+ *\n+ * Returns 0 or -1 if timed out.\n+ */\n+int\n+ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n+ uint32 wait_event_info)\n\n\nCan we just refer to the other function's documentation for this? I\ndon't want two copies of this blurb (and this copy-paste already\nfailed to include \"Timed\" in the example function name).\n\nOne difference compared to pthread_cond_timedwait() is that pthread\nuses an absolute time and here you use a relative time (as we do in\nWaitEventSet API). The first question is which makes a better API,\nand the second is what the semantics of a relative timeout should be:\nstart again every time we get a spurious wake-up, or track time\nalready waited? 
Let's see...\n\n- (void) WaitEventSetWait(cv_wait_event_set, -1, &event, 1,\n+ ret = WaitEventSetWait(cv_wait_event_set, timeout, &event, 1,\n wait_event_info);\n\nHere you're restarting the timeout clock every time through the loop\nwithout adjustment, and I think that's not a good choice: spurious\nwake-ups cause bonus waiting.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n", "msg_date": "Wed, 13 Mar 2019 12:40:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "Hi Thomas,\n\nThanks for reviewing!\n\nOn Wed, Mar 13, 2019 at 12:40:57PM +1300, Thomas Munro wrote:\n> Can we just refer to the other function's documentation for this? I\n> don't want two copies of this blurb (and this copy-paste already\n> failed to include \"Timed\" in the example function name).\n\nHah - point well taken. Fixed.\n\n> One difference compared to pthread_cond_timedwait() is that pthread\n> uses an absolute time and here you use a relative time (as we do in\n> WaitEventSet API). The first question is which makes a better API,\n> and the second is what the semantics of a relative timeout should be:\n> start again every time we get a spurious wake-up, or track time\n> already waited? Let's see...\n> \n> - (void) WaitEventSetWait(cv_wait_event_set, -1, &event, 1,\n> + ret = WaitEventSetWait(cv_wait_event_set, timeout, &event, 1,\n> wait_event_info);\n> \n> Here you're restarting the timeout clock every time through the loop\n> without adjustment, and I think that's not a good choice: spurious\n> wake-ups cause bonus waiting.\n\nAgree. In my testing WaitEventSetWait did the work for calculating the \nright timeout remaining. 
It's a bummer that we have to repeat the same \npattern at the ConditionVariableTimedSleep() but I guess anyone who \nloops in such cases will have to adjust their values accordingly.\n\nBTW, I am curious why Andres in 98a64d0bd71 didn't just create an \nartificial event with WL_TIMEOUT and return that from \nWaitEventSetWait(). Would have made it cleaner than checking \nreturn values for -1.\n\nUpdated v2 patch attached.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Tue, 12 Mar 2019 17:53:43 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "Hello.\n\nAt Tue, 12 Mar 2019 17:53:43 -0700, Shawn Debnath <sdn@amazon.com> wrote in <20190313005342.GA8301@f01898859afd.ant.amazon.com>\n> Hi Thomas,\n> \n> Thanks for reviewing!\n> \n> On Wed, Mar 13, 2019 at 12:40:57PM +1300, Thomas Munro wrote:\n> > Can we just refer to the other function's documentation for this? I\n> > don't want two copies of this blurb (and this copy-paste already\n> > failed to include \"Timed\" in the example function name).\n> \n> Hah - point well taken. Fixed.\n> \n> > One difference compared to pthread_cond_timedwait() is that pthread\n> > uses an absolute time and here you use a relative time (as we do in\n> > WaitEventSet API). The first question is which makes a better API,\n> > and the second is what the semantics of a relative timeout should be:\n> > start again every time we get a spurious wake-up, or track time\n> > already waited? Let's see...\n> > \n> > - (void) WaitEventSetWait(cv_wait_event_set, -1, &event, 1,\n> > + ret = WaitEventSetWait(cv_wait_event_set, timeout, &event, 1,\n> > wait_event_info);\n> > \n> > Here you're restarting the timeout clock every time through the loop\n> > without adjustment, and I think that's not a good choice: spurious\n> > wake-ups cause bonus waiting.\n> \n> Agree. 
In my testing WaitEventSetWait did the work for calculating the \n> right timeout remaining. It's a bummer that we have to repeat the same \n> pattern at the ConditionVariableTimedSleep() but I guess anyone who \n> loops in such cases will have to adjust their values accordingly.\n\nI think so, too. And actually the duplicate timeout calculation\ndoesn't seem good. We could eliminate the duplicate by allowing\nWaitEventSetWait to exit by unwanted events, something like the\nattached.\n\n> BTW, I am curious why Andres in 98a64d0bd71 didn't just create an \n> artificial event with WL_TIMEOUT and return that from \n> WaitEventSetWait(). Would have made it cleaner than checking \n> return values for -1.\n\nMaybe because it is not a kind of WaitEvent, so it naturally\ncannot be a part of occurred_events.\n\n# By the way, you can obtain a short hash of a commit by git\n# rev-parse --short <full hash>.\n\n> Updated v2 patch attached.\n\nI'd like to comment on the patch.\n\n+ * In the event of a timeout, we simply return and the caller\n+ * calls ConditionVariableCancelSleep to remove themselves from the\n+ * wait queue. See ConditionVariableSleep() for notes on how to correctly check\n+ * for the exit condition.\n+ *\n+ * Returns 0, or -1 if timed out.\n\nMaybe this could be simpler, like this:\n\n* ConditionVariableTimedSleep - allows us to specify timeout\n* \n* If timeout = -1, block until the condition is satisfied.\n* \n* Returns -1 when timeout expires, otherwise returns 0.\n* \n* See ConditionVariableSleep() for general behavior and usage.\n\n\n+int\n+ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n\nCouldn't the two-state return value be a boolean?\n\n\n+\tint\t\t\tret = 0;\n\nAs a general coding convention, we should not give such a generic\nname to a variable with such a long life if avoidable. 
In this\ncase 'ret' could be 'timeout_fired' or something like that, which would\nbe far more descriptive.\n\n\n+\t\tif (rc == 0 && timeout >= 0)\n\nWaitEventSetWait returns 0 only in the case of timeout\nexpiration, so the second term is useless. Just setting ret to\n-1 and break seems to me almost the same as \"goto\". The reason\nwhy the existing ConditionVariableSleep uses do {} while(done) is\nthat it is straightforward. The timeout added several exit points to the\nloop, which makes the loop rather complex by going around with\nthe variable. The whole loop could be in the following, flatter\nshape.\n\n while (true)\n {\n CHECK_FOR_INTERRUPTS();\n rc = WaitEventSetWait();\n ResetLatch();\n\n /* timeout expired, return */\n if (rc == 0) return -1;\n SpinLockAcquire();\n if (!proclist...)\n {\n done = true;\n }\n SpinLockRelease();\n\n /* condition satisfied, return */\n if (done) return 0;\n\n /* if we're here, we should wait for the remaining time */\n INSTR_TIME_SET_CURRENT()\n ...\n }\n\n\n+\t\t\tAssert(ret == 0);\n\nI don't see much point in the assertion.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 13 Mar 2019 17:24:15 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "Thank you for reviewing. Comments inline.\n\nOn Wed, Mar 13, 2019 at 05:24:15PM +0900, Kyotaro HORIGUCHI wrote:\n> > Agree. In my testing WaitEventSetWait did the work for calculating \n> > the right timeout remaining. It's a bummer that we have to repeat \n> > the same pattern at the ConditionVariableTimedSleep() but I guess \n> > anyone who loops in such cases will have to adjust their values \n> > accordingly.\n> \n> I think so, too. And actually the duplicate timeout calculation\n> doesn't seem good. 
We could eliminate the duplicate by allowing\n> WaitEventSetWait to exit by unwanted events, something like the\n> attached.\n\nAfter thinking about this more, I see WaitEventSetWait()'s contract as \nwait for an event or timeout if no events are received in that time \nframe. Although ConditionVariableTimedSleep() is also using the same \nword, I believe the semantics are different. The timeout period in \nConditionVariableTimedSleep() is intended to limit the time we wait \nuntil removal from the wait queue. Whereas, in the case of \nWaitEventSetWait, the timeout period is intended to limit the time the \ncaller waits till the first event.\n\nHence, I believe the code is correct as is and we shouldn't change the \ncontract for WaitEventSetWait.\n\n> > BTW, I am curious why Andres in 98a64d0bd71 didn't just create an \n> > artificial event with WL_TIMEOUT and return that from \n> > WaitEventSetWait(). Would have made it cleaner than checking checking \n> > return values for -1.\n> \n> Maybe because it is not a kind of WaitEvent, so it naturally\n> cannot be a part of occurred_events.\n\nHmm, I don't agree with that completely. One could argue that the \nbackend is waiting for any event in order to be woken up, including a \ntimeout event.\n\n> # By the way, you can obtain a short hash of a commit by git\n> # rev-parse --short <full hash>.\n\nGood to know! :-) Luckily git is smart enough to still match it to the \ncorrect commit.\n\n> > Updated v2 patch attached.\n> \n> I'd like to comment on the patch.\n> \n> + * In the event of a timeout, we simply return and the caller\n> + * calls ConditionVariableCancelSleep to remove themselves from the\n> + * wait queue. 
See ConditionVariableSleep() for notes on how to correctly check\n> + * for the exit condition.\n> + *\n> + * Returns 0, or -1 if timed out.\n> \n> Maybe this could be more simpler, that like:\n> \n> * ConditionVariableTimedSleep - allows us to specify timeout\n> * \n> * If timeout = =1, block until the condition is satisfied.\n> * \n> * Returns -1 when timeout expires, otherwise returns 0.\n> * \n> * See ConditionVariableSleep() for general behavior and usage.\n\nAgree. Changed to:\n\n * Wait for the given condition variable to be signaled or till timeout.\n *\n * Returns -1 when timeout expires, otherwise returns 0.\n *\n * See ConditionVariableSleep() for general usage.\n\n> +ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n> \n> Counldn't the two-state return value be a boolean?\n\nI wanted to leave the option open to use the positive integers for other \npurposes but you are correct, bool suffices for now. If needed, we can \nchange it in the future.\n\n> +\tint\t\t\tret = 0;\n> \n> As a general coding convention, we are not to give such a generic\n> name for a variable with such a long life if avoidable. In the\n> case the 'ret' could be 'timeout_fired' or something and it would\n> be far verbose.\n> \n> \n> +\t\tif (rc == 0 && timeout >= 0)\n> \n> WaitEventSetWait returns 0 only in the case of timeout\n> expiration, so the second term is useless. Just setting ret to\n> -1 and break seems to me almost the same with \"goto\". The reason\n> why the existing ConditionVariableSleep uses do {} while(done) is\n> that it is straightforwad. Timeout added everal exit point in the\n> loop so it's make the loop rather complex by going around with\n> the variable. 
Whole the loop could be in the following more flat\n> shape.\n> \n> while (true)\n> {\n> CHECK_FOR_INTERRUPTS();\n> rc = WaitEventSetWait();\n> ResetLatch();\n> \n> /* timeout expired, return */\n> if (rc == 0) return -1;\n> SpinLockAcquire();\n> if (!proclist...)\n> {\n> done = true;\n> }\n> SpinLockRelease();\n> \n> /* condition satisfied, return */\n> if (done) return 0;\n> \n> /* if we're here, we should wait for the remaining time */\n> INSTR_TIME_SET_CURRENT()\n> ...\n> }\n\nAgree. The timeout did complicate the logic for a single variable to\ntrack the exit condition. Adopted the approach above.\n\n> +\t\t\tAssert(ret == 0);\n> \n> I don't see a point in the assertion so much.\n\nBeing overly verbose. Removed.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Thu, 14 Mar 2019 17:26:11 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "Hello.\n\nAt Thu, 14 Mar 2019 17:26:11 -0700, Shawn Debnath <sdn@amazon.com> wrote in <20190315002611.GA1119@f01898859afd.ant.amazon.com>\n> Thank you reviewing. Comments inline.\n> \n> On Wed, Mar 13, 2019 at 05:24:15PM +0900, Kyotaro HORIGUCHI wrote:\n> > > Agree. In my testing WaitEventSetWait did the work for calculating \n> > > the right timeout remaining. It's a bummer that we have to repeat \n> > > the same pattern at the ConditionVariableTimedSleep() but I guess \n> > > anyone who loops in such cases will have to adjust their values \n> > > accordingly.\n> > \n> > I think so, too. And actually the duplicate timeout calculation\n> > doesn't seem good. We could eliminate the duplicate by allowing\n> > WaitEventSetWait to exit by unwanted events, something like the\n> > attached.\n> \n> After thinking about this more, I see WaitEventSetWait()'s contract as \n> wait for an event or timeout if no events are received in that time \n\nSure.\n\n> frame. 
Although ConditionVariableTimedSleep() is also using the same \n> word, I believe the semantics are different. The timeout period in \n> ConditionVariableTimedSleep() is intended to limit the time we wait \n> until removal from the wait queue. Whereas, in the case of \n> WaitEventSetWait, the timeout period is intended to limit the time the \n> caller waits till the first event.\n\nMmm. The two look the same to me.. Timeout means for both that\n\"Wait for one of these events or timeout expiration to\noccur\". Removal from waiting queue is just a subtask of exiting\nfrom waiting state.\n\nThe \"don't exit until timeout expires unless any expected events\noccur\" part is to be done in the uppermost layer so it is enough\nthat the lower layer does just \"exit when something\nhappened\". This is the behavior of WaitEventSetWaitBlock for\nWaitEventSetWait. My proposal is giving callers capabliy to tell\nWaitEventSetWait not to perform useless timeout contination.\n\n> Hence, I believe the code is correct as is and we shouldn't change the \n> contract for WaitEventSetWait.\n> \n> > > BTW, I am curious why Andres in 98a64d0bd71 didn't just create an \n> > > artificial event with WL_TIMEOUT and return that from \n> > > WaitEventSetWait(). Would have made it cleaner than checking checking \n> > > return values for -1.\n> > \n> > Maybe because it is not a kind of WaitEvent, so it naturally\n> > cannot be a part of occurred_events.\n> \n> Hmm, I don't agree with that completely. One could argue that the \n> backend is waiting for any event in order to be woken up, including a \n> timeout event.\n\nRight, I understand that. I didn't mean that it is the right\ndesign for everyone. Just meant that it is in that shape. 
(And I\nrather like it.)\n\nlatch.h:127\n#define WL_TIMEOUT (1 << 3) /* not for WaitEventSetWait() */\n\nWe can make it one of the events for WaitEventSetWait, but I\ndon't see much point in that, and it also doesn't make this\npatch better in any way.\n\n\n> > # By the way, you can obtain a short hash of a commit by git\n> > # rev-parse --short <full hash>.\n> \n> Good to know! :-) Luckily git is smart enough to still match it to the \n> correct commit.\n\nAnd too complex, so with infrequent usage it easily slips out of my\nbrain :(\n\n\n> > > Updated v2 patch attached.\n\nThank you. It looks fine except for the above point. But still I\nhave some questions on it. (the reason they are not\nproper review comments is that they are about wording).\n\n+ * Track the current time so that we can calculate the remaining timeout\n+ * if we are woken up spuriously.\n\nI think that \"track\" means chasing a moving object. So it might\nbe better to say record or something?\n\n> * Wait for the given condition variable to be signaled or till timeout.\n> *\n> * Returns -1 when timeout expires, otherwise returns 0.\n> *\n> * See ConditionVariableSleep() for general usage.\n> \n> > +ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n> > \n> > Couldn't the two-state return value be a boolean?\n> \n> I wanted to leave the option open to use the positive integers for other \n> purposes but you are correct, bool suffices for now. 
If needed, we can \n> change it in the future.\n\nYes, we can do that after we found it needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Mar 2019 14:15:17 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Fri, Mar 15, 2019 at 02:15:17PM +0900, Kyotaro HORIGUCHI wrote:\n> > After thinking about this more, I see WaitEventSetWait()'s contract \n> > as wait for an event or timeout if no events are received in that \n> > time \n> \n> Sure.\n> \n> > frame. Although ConditionVariableTimedSleep() is also using the same \n> > word, I believe the semantics are different. The timeout period in \n> > ConditionVariableTimedSleep() is intended to limit the time we wait \n> > until removal from the wait queue. Whereas, in the case of \n> > WaitEventSetWait, the timeout period is intended to limit the time the \n> > caller waits till the first event.\n> \n> Mmm. The two look the same to me.. Timeout means for both that\n> \"Wait for one of these events or timeout expiration to\n> occur\". Removal from waiting queue is just a subtask of exiting\n> from waiting state.\n> \n> The \"don't exit until timeout expires unless any expected events\n> occur\" part is to be done in the uppermost layer so it is enough\n> that the lower layer does just \"exit when something\n> happened\".\n\nAgree with the fact that lower layers should return and let the upper \nlayer determine and filter events as needed.\n\n> This is the behavior of WaitEventSetWaitBlock for\n> WaitEventSetWait. My proposal is giving callers capabliy to tell\n> WaitEventSetWait not to perform useless timeout contination.\n\nThis is where I disagree. WaitEventSetWait needs its own loop and \ntimeout calculation because WaitEventSetWaitBlock can return when EINTR \nis received. 
This gets filtered in WaitEventSetWait and doesn't bubble \nup by design. Since it's involved in filtering events, it now also has \nto manage the timeout value. ConditionVariableTimedSleep being at a \nhigher level, and waiting for certain events that the lower layers are \nunaware of, also shares the timeout management responsibility. \n\nDo note that there is no performance impact of having multiple timeout \nloops. The current design allows for each layer to filter events and \nhence per layer timeout management seems fine. If one would want to \navoid this, perhaps we need to introduce a non-static version of \nWaitEventSetWaitBlock and call that directly. But that of course is \nbeyond this patch.\n\n> Thank you. It looks fine except for the above point. But still I\n> have some questions on it. (the reason for they not being\n> comments is that they are about wordings..).\n> \n> + * Track the current time so that we can calculate the remaining timeout\n> + * if we are woken up spuriously.\n> \n> I think that \"track\" means chasing a moving object. 
So it might\n> be bettter that it is record or something?\n> \n> > * Wait for the given condition variable to be signaled or till timeout.\n> > *\n> > * Returns -1 when timeout expires, otherwise returns 0.\n> > *\n> > * See ConditionVariableSleep() for general usage.\n> > \n> > > +ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n> > > \n> > > Counldn't the two-state return value be a boolean?\n\nI will change it to Record in the next iteration of the patch.\n \n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n", "msg_date": "Sat, 16 Mar 2019 15:27:17 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Sat, Mar 16, 2019 at 03:27:17PM -0700, Shawn Debnath wrote:\n> > + * Track the current time so that we can calculate the \n> > remaining timeout\n> > + * if we are woken up spuriously.\n> > \n> > I think tha \"track\" means chasing a moving objects. So it might\n> > be bettter that it is record or something?\n> > \n> > > * Wait for the given condition variable to be signaled or till timeout.\n> > > *\n> > > * Returns -1 when timeout expires, otherwise returns 0.\n> > > *\n> > > * See ConditionVariableSleep() for general usage.\n> > > \n> > > > +ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n> > > > \n> > > > Counldn't the two-state return value be a boolean?\n> \n> I will change it to Record in the next iteration of the patch.\n\nPosting rebased and updated patch. 
Changed the word 'Track' to 'Record' \nand also changed variable name rem_timeout to cur_timeout to match \nnaming in other use cases.\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)", "msg_date": "Thu, 21 Mar 2019 14:20:50 -0400", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Fri, Mar 22, 2019 at 7:21 AM Shawn Debnath <sdn@amazon.com> wrote:\n>\n> On Sat, Mar 16, 2019 at 03:27:17PM -0700, Shawn Debnath wrote:\n> > > + * Track the current time so that we can calculate the\n> > > remaining timeout\n> > > + * if we are woken up spuriously.\n> > >\n> > > I think tha \"track\" means chasing a moving objects. So it might\n> > > be bettter that it is record or something?\n> > >\n> > > > * Wait for the given condition variable to be signaled or till timeout.\n> > > > *\n> > > > * Returns -1 when timeout expires, otherwise returns 0.\n> > > > *\n> > > > * See ConditionVariableSleep() for general usage.\n> > > >\n> > > > > +ConditionVariableTimedSleep(ConditionVariable *cv, long timeout,\n> > > > >\n> > > > > Counldn't the two-state return value be a boolean?\n> >\n> > I will change it to Record in the next iteration of the patch.\n>\n> Posting rebased and updated patch. Changed the word 'Track' to 'Record'\n> and also changed variable name rem_timeout to cur_timeout to match\n> naming in other use cases.\n\nHi Shawn,\n\nI think this is looking pretty good and I'm planning to commit it\nsoon. The convention for CHECK_FOR_INTERRUPTS() in latch wait loops\nseems to be to put it after the ResetLatch(), so I've moved it in the\nattached version (though I don't think it was wrong where it was).\nAlso pgindented and with my proposed commit message. I've also\nattached a throw-away test module that gives you CALL poke() and\nSELECT wait_for_poke(timeout) using a CV.\n\nObservations that I'm not planning to do anything about:\n1. 
I don't really like the data type \"long\", but it's already\nestablished that we use that for latches so maybe it's too late for me\nto complain about that.\n2. I don't really like the fact that we have to do floating point\nstuff in INSTR_TIME_GET_MILLISEC(). That's not really your patch's\nfault and you've copied the timeout adjustment code from latch.c,\nwhich seems reasonable.\n\n-- \nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Fri, 5 Jul 2019 13:40:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Fri, Jul 5, 2019 at 1:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I think this is looking pretty good and I'm planning to commit it\n> soon. The convention for CHECK_FOR_INTERRUPTS() in latch wait loops\n> seems to be to put it after the ResetLatch(), so I've moved it in the\n> attached version (though I don't think it was wrong where it was).\n> Also pgindented and with my proposed commit message. I've also\n> attached a throw-away test module that gives you CALL poke() and\n> SELECT wait_for_poke(timeout) using a CV.\n\nI thought of one small problem with the current coding. Suppose there\nare two processes A and B waiting on a CV, and another process C calls\nConditionVariableSignal() to signal one process. Around the same\ntime, A times out and exits via this code path:\n\n+ /* Timed out */\n+ if (rc == 0)\n+ return true;\n\nSuppose ConditionVariableSignal() set A's latch immediately after\nWaitEventSetWait() returned 0 in A. Now A won't report the CV signal\nto the caller, and B is still waiting, so effectively nobody has\nreceived the message and yet C thinks it has signalled a waiter if\nthere is one. My first thought is that we could simply remove the\nabove-quoted hunk and fall through to the second timeout-detecting\ncode. 
That'd mean that if we've been signalled AND timed out as of\nthat point in the code, we'll prefer to report the signal, and it also\nreduces the complexity of the function to have only one \"return true\"\npath.\n\nThat still leaves the danger that the CV can be signalled some time\nafter ConditionVariableTimedSleep() returns. So now I'm wondering if\nConditionVariableCancelSleep() should signal the CV if it discovers\nthat this process is not in the proclist, on the basis that that must\nindicate that we've been signalled even though we're not interested in\nthe message anymore, and yet some other process else might be\ninterested, and that might have been the only signal that is ever\ngoing to be delivered by ConditionVariableSignal().\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sun, 7 Jul 2019 15:09:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Sun, Jul 7, 2019 at 3:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> + /* Timed out */\n> + if (rc == 0)\n> + return true;\n\nHere's a version without that bit, because I don't think we need it.\n\n> That still leaves the danger that the CV can be signalled some time\n> after ConditionVariableTimedSleep() returns. So now I'm wondering if\n> ConditionVariableCancelSleep() should signal the CV if it discovers\n> that this process is not in the proclist, on the basis that that must\n> indicate that we've been signalled even though we're not interested in\n> the message anymore, and yet some other process else might be\n> interested, and that might have been the only signal that is ever\n> going to be delivered by ConditionVariableSignal().\n\nAnd a separate patch to do that. 
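Concretely, in ConditionVariableCancelSleep() the idea would be roughly this (a sketch only — the attached patch is authoritative; names follow condition_variable.c):

```c
/*
 * Sketch: if we are no longer in the wait list, somebody signalled us
 * after we had already decided to stop waiting, so pass the wakeup on
 * rather than letting it be lost to any remaining waiter.
 */
bool		signalled = false;

SpinLockAcquire(&cv->mutex);
if (proclist_contains(&cv->wakeup, MyProc->pgprocno, cvWaitLink))
	proclist_delete(&cv->wakeup, MyProc->pgprocno, cvWaitLink);
else
	signalled = true;
SpinLockRelease(&cv->mutex);

if (signalled)
	ConditionVariableSignal(cv);
```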
Thoughts?\n\n\n--\nThomas Munro\nhttps://enterprisedb.com", "msg_date": "Tue, 9 Jul 2019 23:03:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Tue, Jul 09, 2019 at 11:03:18PM +1200, Thomas Munro wrote:\n> On Sun, Jul 7, 2019 at 3:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > + /* Timed out */\n> > + if (rc == 0)\n> > + return true;\n> \n> Here's a version without that bit, because I don't think we need it.\n\nThis works. Agree that letting it fall through covers the first gap.\n\n> > That still leaves the danger that the CV can be signalled some time\n> > after ConditionVariableTimedSleep() returns. So now I'm wondering if\n> > ConditionVariableCancelSleep() should signal the CV if it discovers\n> > that this process is not in the proclist, on the basis that that must\n> > indicate that we've been signalled even though we're not interested in\n> > the message anymore, and yet some other process else might be\n> > interested, and that might have been the only signal that is ever\n> > going to be delivered by ConditionVariableSignal().\n> \n> And a separate patch to do that. Thoughts?\n\nI like it. 
This covers the gap all the way till cancel is invoked and it \nmanipulates the list to remove itself or realizes that it needs to \nforward the signal to some other process.\n\nThanks Thomas!\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Thu, 11 Jul 2019 23:08:13 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Fri, Jul 12, 2019 at 6:08 PM Shawn Debnath <sdn@amazon.com> wrote:\n> On Tue, Jul 09, 2019 at 11:03:18PM +1200, Thomas Munro wrote:\n> > On Sun, Jul 7, 2019 at 3:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > + /* Timed out */\n> > > + if (rc == 0)\n> > > + return true;\n> >\n> > Here's a version without that bit, because I don't think we need it.\n>\n> This works. Agree that letting it fall through covers the first gap.\n\nPushed, like that (with the now unused rc variable also removed).\nThanks for the patch!\n\n> > > That still leaves the danger that the CV can be signalled some time\n> > > after ConditionVariableTimedSleep() returns. So now I'm wondering if\n> > > ConditionVariableCancelSleep() should signal the CV if it discovers\n> > > that this process is not in the proclist, on the basis that that must\n> > > indicate that we've been signalled even though we're not interested in\n> > > the message anymore, and yet some other process else might be\n> > > interested, and that might have been the only signal that is ever\n> > > going to be delivered by ConditionVariableSignal().\n> >\n> > And a separate patch to do that. Thoughts?\n>\n> I like it. This covers the gap all the way till cancel is invoked and it\n> manipulates the list to remove itself or realizes that it needs to\n> forward the signal to some other process.\n\nI pushed this too. It's a separate commit, because I think there is\nat least a theoretical argument that it should be back-patched. 
I'm\nnot going to do that today though, because I doubt anyone is relying\non ConditionVariableSignal() working that reliably yet, and it's\nreally with timeouts that it becomes a likely problem.\n\nI thought about this edge case because I have long wanted to propose a\npair of functions that provide a simplified payloadless blocking\nalternative to NOTIFY, that would allow for just the right number of\nwaiting sessions to wake up to handle SKIP LOCKED-style job queues.\nOtherwise you sometimes get thundering herds of wakeups fighting over\ncrumbs. That made me think about the case where a worker session\ndecides to time out and shut down due to being idle for too long, but\neats a wakeup on its way out. Another question that comes up in that\nuse case is whether CV wakeup queues should be LIFO or FIFO. I think\nthe answer is LIFO, to support class worker pool designs that\nstabilise at the right size using a simple idle timeout rule. They're\ncurrently FIFO (proclist_pop_head_node() to wake up, but\nproclist_push_tail() to sleep). I understand why Robert didn't care\nabout that last time I mentioned it: all our uses of CVs today are\n\"broadcast\" wakeups. 
But a productised version of the \"poke\" hack I\nshowed earlier that supports poking just one waiter would care about\nthe thing this patch fixed, and also the wakeup queue order.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Sat, 13 Jul 2019 15:02:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Sat, Jul 13, 2019 at 03:02:25PM +1200, Thomas Munro wrote:\n\n> Pushed, like that (with the now unused rc variable also removed).\n> Thanks for the patch!\n\nAwesome - thank you!\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n", "msg_date": "Fri, 12 Jul 2019 21:56:10 -0700", "msg_from": "Shawn Debnath <sdn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Fri, Jul 12, 2019 at 11:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I pushed this too. It's a separate commit, because I think there is\n> at least a theoretical argument that it should be back-patched. I'm\n> not going to do that today though, because I doubt anyone is relying\n> on ConditionVariableSignal() working that reliably yet, and it's\n> really with timeouts that it becomes a likely problem.\n\nTo make it work reliably, you'd need to be sure that a process can't\nERROR or FATAL after getting signaled and before doing whatever the\nassociated work is (or that if it does, it will first pass on the\nsignal).
Since that seems impossible, I'm not sure I see the point of\ntrying to do anything at all.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Jul 2019 09:11:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" }, { "msg_contents": "On Tue, Jul 16, 2019 at 1:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jul 12, 2019 at 11:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed this too. It's a separate commit, because I think there is\n> > at least a theoretical argument that it should be back-patched. I'm\n> > not going to do that today though, because I doubt anyone is relying\n> > on ConditionVariableSignal() working that reliably yet, and it's\n> > really with timeouts that it becomes a likely problem.\n>\n> To make it work reliably, you'd need to be sure that a process can't\n> ERROR or FATAL after getting signaled and before doing whatever the\n> associated work is (or that if it does, it will first pass on the\n> signal). Since that seems impossible, I'm not sure I see the point of\n> trying to do anything at all.\n\nI agree that that on its own doesn't fix problems in <some\nnon-existent client of this facility>, but that doesn't mean we\nshouldn't try to make this API as reliable as possible. Unlike\ntypical CV implementations, our wait primitive is not atomic. When we\ninvented two-step wait, we created a way for ConditionVariableSignal()\nto have no effect due to bad timing. Surely that's a bug.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jul 2019 16:50:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Introduce timeout capability for ConditionVariableSleep" } ]
[ { "msg_contents": "If a FOR ALL TABLES publication exists, unlogged tables are ignored\nfor publishing changes. But CheckCmdReplicaIdentity() would still\ncheck in that case that such a table has a replica identity set before\naccepting updates. That is useless, so check first whether the given\ntable is publishable and skip the check if not.\n\nExample:\n\nCREATE PUBLICATION pub FOR ALL TABLES;\nCREATE UNLOGGED TABLE logical_replication_test AS SELECT 1 AS number;\nUPDATE logical_replication_test SET number = 2;\nERROR: cannot update table \"logical_replication_test\" because it does\nnot have a replica identity and publishes updates\n\nPatch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 13 Mar 2019 13:03:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "On 2019/03/13 21:03, Peter Eisentraut wrote:\n> If a FOR ALL TABLES publication exists, unlogged tables are ignored\n> for publishing changes. But CheckCmdReplicaIdentity() would still\n> check in that case that such a table has a replica identity set before\n> accepting updates. 
That is useless, so check first whether the given\n> table is publishable and skip the check if not.\n> \n> Example:\n> \n> CREATE PUBLICATION pub FOR ALL TABLES;\n> CREATE UNLOGGED TABLE logical_replication_test AS SELECT 1 AS number;\n> UPDATE logical_replication_test SET number = 2;\n> ERROR: cannot update table \"logical_replication_test\" because it does\n> not have a replica identity and publishes updates\n> \n> Patch attached.\n\nAn email on -bugs earlier this morning complains of the same problem but\nfor temporary tables.\n\nhttps://www.postgresql.org/message-id/CAHOFxGr%3DmqPZXbAuoR7Nbq-bU4HxqVWHbTTUy5%3DPKQut_F0%3DXA%40mail.gmail.com\n\nIt seems your patch fixes their case too.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 14 Mar 2019 11:30:12 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "At Thu, 14 Mar 2019 11:30:12 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <59e5a734-9e06-1035-385b-6267175819aa@lab.ntt.co.jp>\n> On 2019/03/13 21:03, Peter Eisentraut wrote:\n> > If a FOR ALL TABLES publication exists, unlogged tables are ignored\n> > for publishing changes. But CheckCmdReplicaIdentity() would still\n> > check in that case that such a table has a replica identity set before\n> > accepting updates. 
That is useless, so check first whether the given\n> > table is publishable and skip the check if not.\n> > \n> > Example:\n> > \n> > CREATE PUBLICATION pub FOR ALL TABLES;\n> > CREATE UNLOGGED TABLE logical_replication_test AS SELECT 1 AS number;\n> > UPDATE logical_replication_test SET number = 2;\n> > ERROR: cannot update table \"logical_replication_test\" because it does\n> > not have a replica identity and publishes updates\n> > \n> > Patch attached.\n> \n> An email on -bugs earlier this morning complains of the same problem but\n> for temporary tables.\n> \n> https://www.postgresql.org/message-id/CAHOFxGr%3DmqPZXbAuoR7Nbq-bU4HxqVWHbTTUy5%3DPKQut_F0%3DXA%40mail.gmail.com\n> \n> It seems your patch fixes their case too.\n\nIs it the right thing that GetRelationPublicationsActions sets\nwrong rd_publicatons for the relations?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 14 Mar 2019 15:03:28 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "On 2019/03/14 15:03, Kyotaro HORIGUCHI wrote:\n> At Thu, 14 Mar 2019 11:30:12 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <59e5a734-9e06-1035-385b-6267175819aa@lab.ntt.co.jp>\n>> On 2019/03/13 21:03, Peter Eisentraut wrote:\n>>> If a FOR ALL TABLES publication exists, unlogged tables are ignored\n>>> for publishing changes. But CheckCmdReplicaIdentity() would still\n>>> check in that case that such a table has a replica identity set before\n>>> accepting updates. 
That is useless, so check first whether the given\n>>> table is publishable and skip the check if not.\n>>>\n>>> Example:\n>>>\n>>> CREATE PUBLICATION pub FOR ALL TABLES;\n>>> CREATE UNLOGGED TABLE logical_replication_test AS SELECT 1 AS number;\n>>> UPDATE logical_replication_test SET number = 2;\n>>> ERROR: cannot update table \"logical_replication_test\" because it does\n>>> not have a replica identity and publishes updates\n>>>\n>>> Patch attached.\n>>\n>> An email on -bugs earlier this morning complains of the same problem but\n>> for temporary tables.\n>>\n>> https://www.postgresql.org/message-id/CAHOFxGr%3DmqPZXbAuoR7Nbq-bU4HxqVWHbTTUy5%3DPKQut_F0%3DXA%40mail.gmail.com\n>>\n>> It seems your patch fixes their case too.\n> \n> Is it the right thing that GetRelationPublicationsActions sets\n> wrong rd_publicatons for the relations?\n\nActually, after applying Peter's patch, maybe we should add an\nAssert(is_publishable_relation(relation)) at the top of\nGetRelationPublicationActions(), also adding a line in the function header\ncomment that callers must ensure that. There's only one caller at the\nmoment anyway, which Peter's patch is fixing to ensure that.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 14 Mar 2019 15:31:03 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "At Thu, 14 Mar 2019 15:31:03 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <26bfa053-3fb2-ad1d-efbb-7c930b41c0fd@lab.ntt.co.jp>\n> On 2019/03/14 15:03, Kyotaro HORIGUCHI wrote:\n> > Is it the right thing that GetRelationPublicationsActions sets\n> > wrong rd_publicatons for the relations?\n> \n> Actually, after applying Peter's patch, maybe we should add an\n> Assert(is_publishable_relation(relation)) at the top of\n> GetRelationPublicationActions(), also adding a line in the function header\n> comment that callers must ensure that. 
There's only one caller at the\nmoment anyway, which Peter's patch is fixing to ensure that.\n\nYeah, that's a reasonable alternative.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 16:27:16 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "So perhaps push the check down to GetRelationPublicationActions()\ninstead. That way we don't have to patch up two places and everything\n\"just works\" even for possible other callers. See attached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 Mar 2019 09:04:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" }, { "msg_contents": "On 2019-03-25 09:04, Peter Eisentraut wrote:\n> So perhaps push the check down to GetRelationPublicationActions()\n> instead. That way we don't have to patch up two places and everything\n> \"just works\" even for possible other callers. See attached patch.\n\nThis has been committed and backpatched to PG10.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 18 Apr 2019 10:14:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix handling of unlogged tables in FOR ALL TABLES publications" } ]
[ { "msg_contents": "(This seems to be the only occurrence of this error, after about 2\nminutes of git-grep-ing.)", "msg_date": "Wed, 13 Mar 2019 13:26:28 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "[PATCH] Remove stray comma from heap_page_item_attrs() docs" }, { "msg_contents": "On Wed, Mar 13, 2019 at 1:26 PM Christoph Berg <myon@debian.org> wrote:\n\n> (This seems to be the only occurrence of this error, after about 2\n> minutes of git-grep-ing.)\n>\n\n\nThanks, pushed!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 13 Mar 2019 13:44:23 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove stray comma from heap_page_item_attrs() docs" } ]
[ { "msg_contents": "(moving this over from pgsql-performance)\n\nA client had an issue with a where that had a where clause something like\nthis:\n\nWHERE 123456 = ANY(integer_array_column)\n\n\nI was surprised that this didn't use the pre-existing GIN index on\ninteger_array_column, whereas recoding as\n\nWHERE ARRAY[123456] <@ integer_array_column\n\n\ndid cause the GIN index to be used. Is this a known/expected behavior? If\nso, is there any logical reason why we couldn't have the planner pick up on\nthat?\n\nFlo Rance (tourance@gmail.com) was nice enough to show that yes, this is\nexpected behavior.\n\nWhich leaves the questions:\n- is the transformation I made algebraically correct in a general case?\n- if so, could we have the planner do that automatically when in the\npresence of a matching GIN index?\n\nThis seems like it might tie in with the enforcement of foreign keys within\nan array thread (which I can't presently find...).", "msg_date": "Wed, 13 Mar 2019 10:06:04 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "GIN indexes on an = ANY(array) clause" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> A client had an issue with a where that had a where clause something like\n> this:\n> WHERE 123456 = ANY(integer_array_column)\n> I was surprised that this didn't use the pre-existing GIN index on\n> integer_array_column, whereas recoding as\n> WHERE ARRAY[123456] <@ integer_array_column\n> did cause the GIN index to be used. Is this a known/expected behavior? If\n> so, is there any logical reason why we couldn't have the planner pick up on\n> that?\n> Flo Rance (tourance@gmail.com) was nice enough to show that yes, this is\n> expected behavior.\n\nThe planner doesn't know enough about the semantics of array <@ to make\nsuch a transformation. (As pointed out in the stackoverflow article Flo\npointed you to, the equivalence might not even hold, depending on which\nversion of <@ we're talking about.)\n\nSince the GIN index type is heavily oriented towards array-related\noperators, I spent some time wondering whether we could get any mileage\nby making ScalarArrayOpExpr indexquals be natively supported by GIN\n(right now they aren't). But really I don't see where the GIN AM would\nget the knowledge from, either.
What it knows about the array_ops\nopclass is basically the list of associated operators:\n\nregression=# select amopopr::regoperator from pg_amop where amopfamily = 2745;\n amopopr \n-----------------------\n &&(anyarray,anyarray)\n @>(anyarray,anyarray)\n <@(anyarray,anyarray)\n =(anyarray,anyarray)\n(4 rows)\n\nand none of those are obviously related to the =(int4,int4) operator that\nis in the ScalarArrayOp. The only way to get from point A to point B is\nto know very specifically that =(anyarray,anyarray) is related to any\nscalar-type btree equality operator, which is not the kind of thing the\nGIN AM ought to know either.\n\nReally the array_ops opclass itself is the widest scope where it'd be\nreasonable to embed knowledge about this sort of thing --- but we lack\nany API at all whereby opclass-specific code could affect planner behavior\nat this level. Even if we had one, there's no obvious reason why we\nshould be consulting a GIN opclass about a ScalarArrayOp that does not\ncontain an operator visibly related to the opclass. That path soon\nleads to consulting everybody about everything and planner performance\ngoing into the tank.\n\nExtensibility is a harsh mistress. \n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 12:38:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GIN indexes on an = ANY(array) clause" }, { "msg_contents": "On 3/13/19 5:38 PM, Tom Lane wrote:\n> regression=# select amopopr::regoperator from pg_amop where amopfamily = 2745;\n> amopopr\n> -----------------------\n> &&(anyarray,anyarray)\n> @>(anyarray,anyarray)\n> <@(anyarray,anyarray)\n> =(anyarray,anyarray)\n> (4 rows)\n> \n> and none of those are obviously related to the =(int4,int4) operator that\n> is in the ScalarArrayOp. 
The only way to get from point A to point B is\n> to know very specifically that =(anyarray,anyarray) is related to any\n> scalar-type btree equality operator, which is not the kind of thing the\n> GIN AM ought to know either.\n\nIn the discussions for the patch for foreign keys from arrays[1] some \npeople proposed add a new operator, <<@(anyelement,anyarray), to avoid \nhaving to construct left hand side arrays. Would that help here or does \nit still have the same issues?\n\n1. \nhttps://www.postgresql.org/message-id/CAJvoCut7zELHnBSC8HrM6p-R6q-NiBN1STKhqnK5fPE-9%3DGq3g%40mail.gmail.com\n\nAndreas\n\n", "msg_date": "Thu, 14 Mar 2019 16:11:17 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: GIN indexes on an = ANY(array) clause" } ]
[ { "msg_contents": "Hi,\n\nIt seems to me that since the pg_stats view is supposed to be\nhuman-readable, it would make sense to show a human-readable version\nof n_distinct.\nCurrently, when the stats collector estimates that the number of\ndistinct values is more than 10% of the total row count, what is\nstored in pg_statistic.stadistinct is -1 * n_distinct / totalrows, the\nrationale being that if new rows are inserted in the table, they are\nlikely to introduce new values, and storing that value allows the\nstadistinct not to get stale too fast.\n\nYou can find attached a simple WIP patch to show the proper n_distinct\nvalue in pg_stats.\n\n* Is this desired?\n* Would it make sense to add a column in the pg_stats view to display\nthe information \"lost\", that is the fact that postgres will assume\nthat inserting new rows means a higher n_distinct?\n* Am I right to assume that totalrows in the code\n(src/backend/commands/analyze.c:2170) actually corresponds to\nn_live_tup? That's what I gathered from glancing at the code, but I\nmight be wrong.\n* Should the catalog version be changed for this kind of change?\n* Should I add this patch to the commitfest?\n\nIf this patch is actually desired, I'll update the documentation as well.\nI'm guessing this patch would break scripts relying on the pg_stats\nview, but I do not know how much we want to avoid that, since they\nshould rely on the base tables rather than on the views.\n\nThanks in advance for your input!\n\nRegards,\nMaxence Ahlouche", "msg_date": "Wed, 13 Mar 2019 15:14:43 +0100", "msg_from": "Maxence Ahlouche <maxence.ahlouche@gmail.com>", "msg_from_op": true, "msg_subject": "Show a human-readable n_distinct in pg_stats view" }, { "msg_contents": "On Wed, 13 Mar 2019 at 15:14, Maxence Ahlouche\n<maxence.ahlouche@gmail.com> wrote:\n> * Should I add this patch to the commitfest?\n\nI added it: https://commitfest.postgresql.org/23/2061/\n\n", "msg_date": "Fri, 15 Mar 2019 10:57:24 +0100", "msg_from": 
"Maxence Ahlouche <maxence.ahlouche@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Show a human-readable n_distinct in pg_stats view" }, { "msg_contents": "Maxence Ahlouche wrote:\n> It seems to me that since the pg_stats view is supposed to be\n> human-readable, it would make sense to show a human-readable version\n> of n_distinct.\n> Currently, when the stats collector estimates that the number of\n> distinct values is more than 10% of the total row count, what is\n> stored in pg_statistic.stadistinct is -1 * n_distinct / totalrows, the\n> rationale being that if new rows are inserted in the table, they are\n> likely to introduce new values, and storing that value allows the\n> stadistinct not to get stale too fast.\n> \n> You can find attached a simple WIP patch to show the proper n_distinct\n> value in pg_stats.\n> \n> * Is this desired?\n> * Would it make sense to add a column in the pg_stats view to display\n> the information \"lost\", that is the fact that postgres will assume\n> that inserting new rows means a higher n_distinct?\n> * Am I right to assume that totalrows in the code\n> (src/backend/commands/analyze.c:2170) actually corresponds to\n> n_live_tup? 
That's what I gathered from glancing at the code, but I\n> might be wrong.\n> * Should the catalog version be changed for this kind of change?\n> * Should I add this patch to the commitfest?\n> \n> If this patch is actually desired, I'll update the documentation as well.\n> I'm guessing this patch would break scripts relying on the pg_stats\n> view, but I do not know how much we want to avoid that, since they\n> should rely on the base tables rather than on the views.\n\nThis may make things easier for those who are confused by a negative\nentry, but it will obfuscate matters for those who are not.\n\nI don't think that is a win, particularly since the semantics are\nexplained in great detail in the documentation of \"pg_stats\".\n\nSo I am -1 on that one.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 15 Mar 2019 11:11:07 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Show a human-readable n_distinct in pg_stats view" } ]
[ { "msg_contents": "Hi hackers,\n\nOver in the \"Include all columns in default names for foreign key\nconstraints\" thread[1], I noticed the patch added the following:\n\n+\t\tstrlcpy(buf + buflen, name, NAMEDATALEN);\n+\t\tbuflen += strlen(buf + buflen);\n\nSeeing as strlcpy() returns the copied length, this seems rather\nredundant. A quick bit of grepping shows that this pattern occurs in\nseveral places, including the ChooseIndexNameAddition and\nChooseExtendedStatisticNameAddition functions this was no doubt inspired\nby.\n\nAttached is a patch that instead uses the return value of strlcpy() and\nstrlcat(). I left some strlen() calls alone in places where it wasn't\nconvenient (e.g. pg_open_tzfile(), where it would need an extra\nvariable).\n\n- ilmari\n\n[1] https://postgr.es/m/CAF+2_SHzBU0tWKvJMZAXfcmrnCwJUeCrAohga0awDf9uDBptnw@mail.gmail.com\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Wed, 13 Mar 2019 15:54:26 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Using the return value of strlcpy() and strlcat()" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> [ let's convert\n> +\t\tstrlcpy(buf + buflen, name, NAMEDATALEN);\n> +\t\tbuflen += strlen(buf + buflen);\n> to\n> +\t\tbuflen += strlcpy(buf + buflen, name, NAMEDATALEN);\n> ]\n\nI don't think that's a safe transformation: what strlcpy returns is\nstrlen(src), which might be different from what it was actually\nable to fit into the destination.\n\nSure, they're equivalent if no truncation occurred; but if we were\n100.00% sure of no truncation, we'd likely not bother with strlcpy.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 12:50:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" }, { 
"msg_contents": "On Wed, Mar 13, 2019 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> > [ let's convert\n> > + strlcpy(buf + buflen, name, NAMEDATALEN);\n> > + buflen += strlen(buf + buflen);\n> > to\n> > + buflen += strlcpy(buf + buflen, name, NAMEDATALEN);\n> > ]\n>\n> I don't think that's a safe transformation: what strlcpy returns is\n> strlen(src), which might be different from what it was actually\n> able to fit into the destination.\n>\n> Sure, they're equivalent if no truncation occurred; but if we were\n> 100.00% sure of no truncation, we'd likely not bother with strlcpy.\n>\n\nSo, if return value < length (3rd argument) we should be able to use the\nreturn value and avoid the strlen, else do the strlen ?\n\nOn Wed, Mar 13, 2019 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> [ let's convert\n> +             strlcpy(buf + buflen, name, NAMEDATALEN);\n> +             buflen += strlen(buf + buflen);\n> to\n> +             buflen += strlcpy(buf + buflen, name, NAMEDATALEN);\n> ]\n\nI don't think that's a safe transformation: what strlcpy returns is\nstrlen(src), which might be different from what it was actually\nable to fit into the destination.\n\nSure, they're equivalent if no truncation occurred; but if we were\n100.00% sure of no truncation, we'd likely not bother with strlcpy.So, if return value < length (3rd argument) we should be able to use the return value and avoid the strlen, else do the strlen ?", "msg_date": "Wed, 13 Mar 2019 18:41:52 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" }, { "msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> On Wed, Mar 13, 2019 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think that's a safe transformation: what strlcpy returns is\n>> 
strlen(src), which might be different from what it was actually\n>> able to fit into the destination.\n>> Sure, they're equivalent if no truncation occurred; but if we were\n>> 100.00% sure of no truncation, we'd likely not bother with strlcpy.\n\n> So, if return value < length (3rd argument) we should be able to use the\n> return value and avoid the strlen, else do the strlen ?\n\nMmm ... if there's a way to do it that's not messy and typo-prone,\nmaybe. But I'm dubious that the potential gain is worth complicating\nthe code. The strings involved aren't usually all that long.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 21:57:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ashwin Agrawal <aagrawal@pivotal.io> writes:\n>> On Wed, Mar 13, 2019 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I don't think that's a safe transformation: what strlcpy returns is\n>>> strlen(src), which might be different from what it was actually\n>>> able to fit into the destination.\n\nYeah, Andrew Gierth pointed this out on IRC as well.\n\n>>> Sure, they're equivalent if no truncation occurred; but if we were\n>>> 100.00% sure of no truncation, we'd likely not bother with strlcpy.\n>\n>> So, if return value < length (3rd argument) we should be able to use the\n>> return value and avoid the strlen, else do the strlen ?\n>\n> Mmm ... if there's a way to do it that's not messy and typo-prone,\n> maybe. But I'm dubious that the potential gain is worth complicating\n> the code. The strings involved aren't usually all that long.\n\nPlease consider this patch withdrawn.\n\n- ilmari\n-- \n\"I use RMS as a guide in the same way that a boat captain would use\n a lighthouse. 
It's good to know where it is, but you generally\n don't want to find yourself in the same spot.\" - Tollef Fog Heen\n\n", "msg_date": "Thu, 14 Mar 2019 11:10:58 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" }, { "msg_contents": "On Fri, 15 Mar 2019 at 00:11, Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Mmm ... if there's a way to do it that's not messy and typo-prone,\n> > maybe. But I'm dubious that the potential gain is worth complicating\n> > the code. The strings involved aren't usually all that long.\n>\n> Please consider this patch withdrawn.\n\nAmusingly it seems the strlcpy() return value of the chars it would\nhave copied if it didn't run out of space is not all that useful for\nus. I see exactly 2 matches of git grep \"= strlcpy\". The\nisolation_init() one looks genuine, but the SerializeLibraryState()\nlooks a bit bogus.
Looking at EstimateLibraryStateSpace() it seems it\nestimates the exact space, so the strlcpy should never cut short, but\nit does seem like a bad example to leave laying around.\n\nWe should have maybe thought a bit harder when we put that strlcpy\ncode into the codebase and considered if we might have been better off\ninventing our own function that just returns what it did copy instead\nof what it would have.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 15 Mar 2019 00:45:22 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> We should have maybe thought a bit harder when we put that strlcpy\n> code into the codebase and considered if we might have been better off\n> inventing our own function that just returns what it did copy instead\n> of what it would have.\n\nWell, strlcpy is (somewhat) standardized; we didn't just invent it\noff the cuff. I thought a little bit about whether it would be worth\nhaving a variant version with a different return value, but concluded\nthat having YA strcpy variant would more likely be a dangerous source\nof thinkos than something that was actually helpful. Otherwise I'd\nhave given Ashwin a more positive reaction...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 09:50:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using the return value of strlcpy() and strlcat()" } ]
[ { "msg_contents": "Hi,\n\nAmit Kapila pointed out to be that there are some buidfarm failures on\nhyrax which seem to have started happening around the time I committed\n898e5e3290a72d288923260143930fb32036c00c. It failed like this once:\n\n2019-03-07 19:57:40.231 EST [28073:11] DETAIL: Failed process was\nrunning: /* same as above */\n explain (costs off) select * from rlp where a = 1::numeric;\n\nAnd then the next three runs failed like this:\n\n2019-03-09 04:56:04.939 EST [1727:4] LOG: server process (PID 2898)\nwas terminated by signal 9: Killed\n2019-03-09 04:56:04.939 EST [1727:5] DETAIL: Failed process was\nrunning: UPDATE range_parted set c = (case when c = 96 then 110 else c\n+ 1 end ) WHERE a = 'b' and b > 10 and c >= 96;\n\nI couldn't think of an explanation for this other than that the\nprocess involved had used a lot of memory and gotten killed by the OOM\nkiller, and it turns out that RelationBuildPartitionDesc leaks memory\ninto the surrounding context every time it's called. It's not a lot\nof memory, but in a CLOBBER_CACHE_ALWAYS build it adds up, because\nthis function gets called a lot. All of this is true even before the\ncommit in question, but for some reason that I don't understand that\ncommit makes it worse.\n\nI tried having that function use a temporary context for its scratch\nspace and that causes a massive drop in memory usage when running\nupdate.sql frmo the regression tests under CLOBBER_CACHE_ALWAYS. I\nran MacOS X's vmmap tool to see the impact, measuring the size of the\n\"DefaultMallocZone\". Without this patch, that peaks at >450MB; with\nthis patch it peaks ~270MB. There is a significant reduct in typical\nmemory usage, too. 
It's noticeably better with this patch than it was\nbefore 898e5e3290a72d288923260143930fb32036c00c.\n\nI'm not sure whether this is the right way to address the problem.\nRelationBuildPartitionDesc() creates basically all of the data\nstructures it needs and then copies them into rel->rd_pdcxt, which has\nalways seemed a bit inefficient to me. Another way to redesign this\nwould be to have the function create a temporary context, do all of\nits work there, and then reparent the context under CacheMemoryContext\nat the end. That means any leaks would go into a relatively\nlong-lifespan context, but on the other hand you wouldn't leak into\nthe same context a zillion times over, and you'd save the expense of\ncopying everything. I think that the biggest thing that is being\ncopied around here is the partition bounds, so maybe the leak wouldn't\namount to much, and we could also do things like list_free(inhoids) to\nmake it a little tighter.\n\nOpinions? Suggestions?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 13 Mar 2019 12:13:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019-Mar-13, Robert Haas wrote:\n\n> RelationBuildPartitionDesc() creates basically all of the data\n> structures it needs and then copies them into rel->rd_pdcxt, which has\n> always seemed a bit inefficient to me. Another way to redesign this\n> would be to have the function create a temporary context, do all of\n> its work there, and then reparent the context under CacheMemoryContext\n> at the end. That means any leaks would go into a relatively\n> long-lifespan context, but on the other hand you wouldn't leak into\n> the same context a zillion times over, and you'd save the expense of\n> copying everything. 
I think that the biggest thing that is being\n> copied around here is the partition bounds, so maybe the leak wouldn't\n> amount to much, and we could also do things like list_free(inhoids) to\n> make it a little tighter.\n\nI remember going over this code's memory allocation strategy a bit to\navoid the copy while not incurring potential leaks CacheMemoryContext;\nas I recall, my idea was to use two contexts, one of which is temporary\nand used for any potentially leaky callees, and destroyed at the end of\nthe function, and the other contains the good stuff and is reparented to\nCacheMemoryContext at the end. So if you have any accidental leaks,\nthey don't affect a long-lived context. You have to be mindful of not\ncalling leaky code when you're using the permanent one.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Mar 2019 13:42:32 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 12:42 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I remember going over this code's memory allocation strategy a bit to\n> avoid the copy while not incurring potential leaks CacheMemoryContext;\n> as I recall, my idea was to use two contexts, one of which is temporary\n> and used for any potentially leaky callees, and destroyed at the end of\n> the function, and the other contains the good stuff and is reparented to\n> CacheMemoryContext at the end. So if you have any accidental leaks,\n> they don't affect a long-lived context.
You have to be mindful of not\n> calling leaky code when you're using the permanent one.\n\nWell, that assumes that the functions which allocate the good stuff do\nnot also leak, which seems a bit fragile.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 12:51:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 13, 2019 at 12:42 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n>> I remember going over this code's memory allocation strategy a bit to\n>> avoid the copy while not incurring potential leaks CacheMemoryContext;\n>> as I recall, my idea was to use two contexts, one of which is temporary\n>> and used for any potentially leaky callees, and destroyed at the end of\n>> the function, and the other contains the good stuff and is reparented to\n>> CacheMemoryContext at the end. So if you have any accidental leaks,\n>> they don't affect a long-lived context. You have to be mindful of not\n>> calling leaky code when you're using the permanent one.\n\n> Well, that assumes that the functions which allocate the good stuff do\n> not also leak, which seems a bit fragile.\n\nI'm a bit confused as to why there's an issue here at all. The usual\nplan for computed-on-demand relcache sub-structures is that we compute\na working copy that we're going to return to the caller using the\ncaller's context (which is presumably statement-duration at most)\nand then do the equivalent of copyObject to stash a long-lived copy\ninto the relcache context. Is this case being done differently, and if\nso why? 
If it's being done the same, where are we leaking?\n\nI recall having noticed someplace where I thought the relcache partition\nsupport was simply failing to make provisions for cleaning up a cached\nstructure at relcache entry drop, but I didn't have time to pursue it\nright then. Let me see if I can reconstruct what I was worried about.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 13:15:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 1:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm a bit confused as to why there's an issue here at all. The usual\n> plan for computed-on-demand relcache sub-structures is that we compute\n> a working copy that we're going to return to the caller using the\n> caller's context (which is presumably statement-duration at most)\n> and then do the equivalent of copyObject to stash a long-lived copy\n> into the relcache context. Is this case being done differently, and if\n> so why? If it's being done the same, where are we leaking?\n\nIt's being done in just that way. The caller's context is\nMessageContext, which is indeed statement duration. But if you plunk\n10k into MessageContext a few thousand times per statement, then you\nchew through a couple hundred meg of memory, and apparently hyrax\ncan't tolerate that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 13:27:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "I wrote:\n> I recall having noticed someplace where I thought the relcache partition\n> support was simply failing to make provisions for cleaning up a cached\n> structure at relcache entry drop, but I didn't have time to pursue it\n> right then. 
Let me see if I can reconstruct what I was worried about.\n\nAh, here we are: it was rel->rd_partcheck. I'm not sure exactly how\ncomplicated that structure can be, but I'm pretty sure that this is\na laughably inadequate method of cleaning it up:\n\n\tif (relation->rd_partcheck)\n\t\tpfree(relation->rd_partcheck);\n\nHaving it be loose data in CacheMemoryContext, which is the current state\nof affairs, is just not going to be practical to clean up. I suggest that\nmaybe it could be copied into rd_partkeycxt or rd_pdcxt, so that it'd go\naway as a byproduct of freeing those. If there's a reason it has to be\nindependent of both, it'll have to have its own small context.\n\nDunno if that's related to hyrax's issue, though.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 13:30:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019-Mar-13, Robert Haas wrote:\n\n> On Wed, Mar 13, 2019 at 12:42 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > I remember going over this code's memory allocation strategy a bit to\n> > avoid the copy while not incurring potential leaks CacheMemoryContext;\n> > as I recall, my idea was to use two contexts, one of which is temporary\n> > and used for any potentially leaky callees, and destroyed at the end of\n> > the function, and the other contains the good stuff and is reparented to\n> > CacheMemoryContext at the end. So if you have any accidental leaks,\n> > they don't affect a long-lived context. You have to be mindful of not\n> > calling leaky code when you're using the permanent one.\n> \n> Well, that assumes that the functions which allocate the good stuff do\n> not also leak, which seems a bit fragile.\n\nA bit, yes, but not overly so, and it's less fragile that not having\nsuch a protection. 
Anything that allocates in CacheMemoryContext needs\nto be very careful anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Mar 2019 14:38:02 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 1:38 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> A bit, yes, but not overly so, and it's less fragile that not having\n> such a protection. Anything that allocates in CacheMemoryContext needs\n> to be very careful anyway.\n\nTrue, but I think it's more fragile than either of the options I proposed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 14:03:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ah, here we are: it was rel->rd_partcheck.
I'm not sure exactly how\n> complicated that structure can be, but I'm pretty sure that this is\n> a laughably inadequate method of cleaning it up:\n>\n> if (relation->rd_partcheck)\n> pfree(relation->rd_partcheck);\n\nOh, for crying out loud.\n\n> Dunno if that's related to hyrax's issue, though.\n\nIt's related in the sense that it's a leak, and any leak will tend to\nrun the system out of memory more easily, but what I observed was a\nlarge leak into MessageContext, and that would be a leak into\nCacheMemoryContext, so I think it's probably a sideshow rather than\nthe main event.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 14:07:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019-Mar-13, Robert Haas wrote:\n\n> On Wed, Mar 13, 2019 at 1:38 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > A bit, yes, but not overly so, and it's less fragile that not having\n> > such a protection. Anything that allocates in CacheMemoryContext needs\n> > to be very careful anyway.\n> \n> True, but I think it's more fragile than either of the options I proposed.\n\nYou do? Unless I misunderstood, your options are:\n\n1. (the patch you attached) create a temporary memory context that is\nused for everything, then at the end copy the good stuff to CacheMemCxt\n(or a sub-context thereof). This still needs to copy.\n\n2. 
create a temp memory context, do everything there, do retail freeing\nof everything we don't want, reparenting the context to CacheMemCxt.\nHope we didn't forget to pfree anything.\n\nHow is any of those superior to what I propose?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Mar 2019 15:21:45 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> You do? Unless I misunderstood, your options are:\n\n> 1.
(the patch you attached) create a temporary memory context that is\n> used for everything, then at the end copy the good stuff to CacheMemCxt\n> (or a sub-context thereof). This still needs to copy.\n\n> 2. create a temp memory context, do everything there, do retail freeing\n> of everything we don't want, reparenting the context to CacheMemCxt.\n> Hope we didn't forget to pfree anything.\n\n> How is any of those superior to what I propose?\n\nI doubt that what you're suggesting is terribly workable. It's not\njust RelationBuildPartitionDesc that's at issue. Part of what will\nbe the long-lived data structure is made by partition_bounds_create,\nand that invokes quite a boatload of code that both makes the final\ndata structure and leaks a lot of intermediate junk. Having to be\nvery careful about permanent vs temporary data throughout all of that\nseems like a recipe for bugs.\n\nThe existing code in RelationBuildPartitionDesc is already pretty good\nabout avoiding copying of data other than the output of\npartition_bounds_create. In fact, I think it's already close to what\nyou're suggesting other than that point. So I think --- particularly\ngiven that we need something we can back-patch into v11 --- that we\nshouldn't try to do anything much more complicated than what Robert is\nsuggesting.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 14:36:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 2:21 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Mar-13, Robert Haas wrote:\n> > On Wed, Mar 13, 2019 at 1:38 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > A bit, yes, but not overly so, and it's less fragile that not having\n> > > such a protection. 
Anything that allocates in CacheMemoryContext needs\n> > > to be very careful anyway.\n> >\n> > True, but I think it's more fragile than either of the options I proposed.\n>\n> You do? Unless I misunderstood, your options are:\n>\n> 1. (the patch you attached) create a temporary memory context that is\n> used for everything, then at the end copy the good stuff to CacheMemCxt\n> (or a sub-context thereof). This still needs to copy.\n>\n> 2. create a temp memory context, do everything there, do retail freeing\n> of everything we don't want, reparenting the context to CacheMemCxt.\n> Hope we didn't forget to pfree anything.\n>\n> How is any of those superior to what I propose?\n\nWell, *I* thought of it, so obviously it must be superior. :-)\n\nMore seriously, your idea does seem better in some ways. My #1\ndoesn't avoid the copy, but does kill the leaks. My #2 avoids the\ncopy, but risks a different flavor of leakage. Your idea also avoids\nthe copy and leaks in fewer cases than my #2. In that sense, it is\nthe technically superior option. However, it also requires more\nmemory context switches than either of my options, and I guess that\nseems fragile to me in the sense that I think future code changes are\nmore likely to go wrong due to the complexity involved. I might be\nmistaken about that, though.\n\nOne other note is that, although the extra copy looks irksome, it's\nprobably not very significant from a performance point of view. In a\nnon-CLOBBER_CACHE_ALWAYS build it doesn't happen frequently enough to\nmatter, and in a CLOBBER_CACHE_ALWAYS build everything is already so\nslow that it still doesn't matter. That's not the only consideration,\nthough. Code which copies data structures might be buggy, or might\ndevelop bugs in the future, and avoiding that copy would avoid\nexposure to such bugs. 
On the other hand, it's not really possible to\nremove the copying without increasing the risk of leaking into the\nlong-lived context.\n\nIn some ways, I think this is all a natural outgrowth of the fact that\nwe rely on palloc() in so many places instead of forcing code to be\nexplicit about which memory context it intends to target. Global\nvariables are considered harmful, and it's not that difficult to see\nthe connection between that general principle and present\ndifficulties. However, I will not have time to write and debug a\npatch reversing that choice between now and feature freeze. :-)\n\nI'm kinda inclined to go for the brute-force approach of slamming the\ntemporary context in there as in the patch I proposed upthread, which\nshould solve the immediate problem, and we can implement one of these\nother ideas later if we want. What do you think about that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 14:49:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> OK, in that case it's definitely all the temporary data that gets created\n> that is the problem. I've not examined your patch in great detail but\n> it looks plausible for fixing that.\n\nCool.\n\n> I think that RelationBuildPartitionDesc could use some additional cleanup\n> or at least better commenting. In particular, it's neither documented nor\n> obvious to the naked eye why rel->rd_partdesc mustn't get set till the\n> very end. As the complainant, I'm willing to go fix that, but do you want\n> to push your patch first so it doesn't get broken? 
Or I could include\n> your patch in the cleanup.\n\nYeah, that probably makes sense, though it might be polite to wait\nanother hour or two to see if anyone wants to argue with that approach\nfurther.\n\nIt seems kinda obvious to me why rel->rd_partdesc can't get set until\nthe end. Isn't it just that you'd better not set a permanent pointer\nto a data structure until you're past any code that might ERROR, which\nis pretty much everything? That principle applies to lots of\nPostgreSQL code, not just this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 14:59:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 13, 2019 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think that RelationBuildPartitionDesc could use some additional cleanup\n>> or at least better commenting. In particular, it's neither documented nor\n>> obvious to the naked eye why rel->rd_partdesc mustn't get set till the\n>> very end. As the complainant, I'm willing to go fix that, but do you want\n>> to push your patch first so it doesn't get broken? Or I could include\n>> your patch in the cleanup.\n\n> It seems kinda obvious to me why rel->rd_partdesc can't get set until\n> the end. Isn't it just that you'd better not set a permanent pointer\n> to a data structure until you're past any code that might ERROR, which\n> is pretty much everything? That principle applies to lots of\n> PostgreSQL code, not just this.\n\nYeah, but usually there's some comment pointing it out. I also wonder\nif there aren't corner-case bugs; it seems a bit bogus for example that\nrd_pdcxt is created without any thought as to whether it might be set\nalready. 
It's not clear whether this has been written with the\nlevel of paranoia that's appropriate for messing with a relcache entry,\nand some comments would make it a lot clearer (a) if that is true and\n(b) what assumptions are implicitly being shared with relcache.c.\n\nMeanwhile, who's going to take point on cleaning up rd_partcheck?\nI don't really understand this code well enough to know whether that\ncan share one of the existing partitioning-related sub-contexts.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 15:14:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, but usually there's some comment pointing it out. I also wonder\n> if there aren't corner-case bugs; it seems a bit bogus for example that\n> rd_pdcxt is created without any thought as to whether it might be set\n> already. It's not clear whether this has been written with the\n> level of paranoia that's appropriate for messing with a relcache entry,\n> and some comments would make it a lot clearer (a) if that is true and\n> (b) what assumptions are implicitly being shared with relcache.c.\n\nYeah, this is all pretty new code, and it probably has some bugs we\nhaven't found yet.\n\n> Meanwhile, who's going to take point on cleaning up rd_partcheck?\n> I don't really understand this code well enough to know whether that\n> can share one of the existing partitioning-related sub-contexts.\n\nWell, it would be nice if Amit Langote worked on it since he wrote the\noriginal version of most of this code, but I'm sure he'll defer to you\nif you feel the urge to work on it, or I can take a look at it (not\ntoday).\n\nTo your question, I think it probably can't share a context. 
Briefly,\nrd_partkey can't change ever, except that when a partitioned relation\nis in the process of being created it is briefly NULL; once it obtains\na value, that value cannot be changed. If you want to range-partition\na list-partitioned table or something like that, you have to drop the\ntable and create a new one. I think that's a perfectly acceptable\npermanent limitation and I have no urge whatever to change it.\nrd_partdesc changes when you attach or detach a child partition.\nrd_partcheck is the reverse: it changes when you attach or detach this\npartition to or from a parent. It's probably helpful to think of the\ncase of a table with partitions each of which is itself partitioned --\nthe table at that middle level has to worry both about gaining or\nlosing children and about being ripped away from its parent.\n\nAs a parenthetical note, I observe that relcache.c seems to know\nalmost nothing about rd_partcheck. rd_partkey and rd_partdesc both\nhave handling in RelationClearRelation(), but rd_partcheck does not,\nand I suspect that's wrong. So the problems are probably not confined\nto the relcache-drop-time problem that you observed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 15:47:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 13, 2019 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Meanwhile, who's going to take point on cleaning up rd_partcheck?\n>> I don't really understand this code well enough to know whether that\n>> can share one of the existing partitioning-related sub-contexts.\n\n> To your question, I think it probably can't share a context. 
Briefly,\n> rd_partkey can't change ever, except that when a partitioned relation\n> is in the process of being created it is briefly NULL; once it obtains\n> a value, that value cannot be changed. If you want to range-partition\n> a list-partitioned table or something like that, you have to drop the\n> table and create a new one. I think that's a perfectly acceptable\n> permanent limitation and I have no urge whatever to change it.\n> rd_partdesc changes when you attach or detach a child partition.\n> rd_partcheck is the reverse: it changes when you attach or detach this\n> partition to or from a parent.\n\nGot it. Yeah, it seems like the clearest and least bug-prone solution\nis for those to be in three separate sub-contexts. The only reason\nI was trying to avoid that was the angle of how to back-patch into 11.\nHowever, we can just add the additional context pointer field at the\nend of the Relation struct in v11, and that should be good enough to\navoid ABI problems.\n\nOff topic for the moment, since this clearly wouldn't be back-patch\nmaterial, but I'm starting to wonder if we should just have a context\nfor each relcache entry and get rid of most or all of the retail\ncleanup logic in RelationDestroyRelation ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 16:18:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Off topic for the moment, since this clearly wouldn't be back-patch\n> material, but I'm starting to wonder if we should just have a context\n> for each relcache entry and get rid of most or all of the retail\n> cleanup logic in RelationDestroyRelation ...\n\nI think that idea might have a lot of merit, but I haven't studied it closely.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 16:38:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 4:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Mar 13, 2019 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Off topic for the moment, since this clearly wouldn't be back-patch\n> > material, but I'm starting to wonder if we should just have a context\n> > for each relcache entry and get rid of most or all of the retail\n> > cleanup logic in RelationDestroyRelation ...\n>\n> I think that idea might have a lot of merit, but I haven't studied it closely.\n\nIt just occurred to me that one advantage of this would be that you\ncould see how much memory was being used by each relcache entry using\nMemoryContextStats(), which seems super-appealing. In fact, what\nabout getting rid of all allocations in CacheMemoryContext itself in\nfavor of some more specific context in each case? That would make it\na lot clearer where to look for leaks -- or efficiencies.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 16:50:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "Hi,\n\nOn 2019-03-13 16:50:53 -0400, Robert Haas wrote:\n> On Wed, Mar 13, 2019 at 4:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Mar 13, 2019 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Off topic for the moment, since this clearly wouldn't be back-patch\n> > > material, but I'm starting to wonder if we should just have a context\n> > > for each relcache entry and get rid of most or all of the retail\n> > > cleanup logic in RelationDestroyRelation ...\n> >\n> > I think that idea might have a lot of merit, but I haven't studied it closely.\n> \n> It just occurred to me that one advantage of this would be that you\n> could see how much memory was being used by each relcache entry using\n> MemoryContextStats(), which seems super-appealing. In fact, what\n> about getting rid of all allocations in CacheMemoryContext itself in\n> favor of some more specific context in each case? That would make it\n> a lot clearer where to look for leaks -- or efficiencies.\n\nBut it might also make it frustrating to look at memory context dumps -\nwe'd suddenly have many many more memory context lines we'd displayed,\nright? Wouldn't that often make the dump extremely long?\n\nTo be clear, I think the idea has merit. Just want to raise the above\npoint.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 13 Mar 2019 13:56:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-13 16:50:53 -0400, Robert Haas wrote:\n>> On Wed, Mar 13, 2019 at 4:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> On Wed, Mar 13, 2019 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Off topic for the moment, since this clearly wouldn't be back-patch\n>>>> material, but I'm starting to wonder if we should just have a context\n>>>> for each relcache entry and get rid of most or all of the retail\n>>>> cleanup logic in RelationDestroyRelation ...\n\n>> It just occurred to me that one advantage of this would be that you\n>> could see how much memory was being used by each relcache entry using\n>> MemoryContextStats(), which seems super-appealing.\n\n> But it might also make it frustrating to look at memory context dumps -\n> we'd suddenly have many many more memory context lines we'd displayed,\n> right? Wouldn't that often make the dump extremely long?\n\nThere's already a mechanism in there to suppress child contexts after\n100 or so, which would almost inevitably kick in on the relcache if we\ndid this. So I don't believe we'd have a problem with the context dumps\ngetting too long --- more likely, the complaints would be the reverse.\n\nMy gut feeling is that right now relcache entries tend to be mas-o-menos\nthe same size, except for stuff that is *already* in sub-contexts, like\nindex and partition descriptors. So I'm not that excited about this\nadding useful info to context dumps. I was just looking at it as a way\nto make relcache entry cleanup simpler and less leak-prone.\n\nHaving said that, I do agree that CacheMemoryContext is too much of an\nundifferentiated blob right now, and splitting it up seems like it'd be\ngood for accountability. I'd definitely be +1 for a catcache vs. relcache\nvs. other caches split. 
You could imagine per-catcache contexts, too.\nThe main limiting factor here is that the per-context overhead could get\nexcessive.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 13 Mar 2019 17:10:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Hi,\n\nOn 2019-03-13 17:10:55 -0400, Tom Lane wrote:\n> There's already a mechanism in there to suppress child contexts after\n> 100 or so, which would almost inevitably kick in on the relcache if we\n> did this. So I don't believe we'd have a problem with the context dumps\n> getting too long --- more likely, the complaints would be the reverse.\n\nWell, that's two sides of the same coin.\n\n\n> Having said that, I do agree that CacheMemoryContext is too much of an\n> undifferentiated blob right now, and splitting it up seems like it'd be\n> good for accountability. I'd definitely be +1 for a catcache vs. relcache\n> vs. other caches split.\n\nThat'd make a lot of sense.\n\n\n> You could imagine per-catcache contexts, too.\n> The main limiting factor here is that the per-context overhead could get\n> excessive.\n\nYea, per relcache entry contexts seem like they'd get really expensive\nfast. Even per-catcache seems like it might be noticable additional\noverhead for a new backend.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 13 Mar 2019 14:24:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/03/14 5:18, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Wed, Mar 13, 2019 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Meanwhile, who's going to take point on cleaning up rd_partcheck?\n>>> I don't really understand this code well enough to know whether that\n>>> can share one of the existing partitioning-related sub-contexts.\n> \n>> To your question, I think it probably can't share a context. Briefly,\n>> rd_partkey can't change ever, except that when a partitioned relation\n>> is in the process of being created it is briefly NULL; once it obtains\n>> a value, that value cannot be changed. If you want to range-partition\n>> a list-partitioned table or something like that, you have to drop the\n>> table and create a new one. I think that's a perfectly acceptable\n>> permanent limitation and I have no urge whatever to change it.\n>> rd_partdesc changes when you attach or detach a child partition.\n>> rd_partcheck is the reverse: it changes when you attach or detach this\n>> partition to or from a parent.\n> \n> Got it. Yeah, it seems like the clearest and least bug-prone solution\n> is for those to be in three separate sub-contexts. The only reason\n> I was trying to avoid that was the angle of how to back-patch into 11.\n> However, we can just add the additional context pointer field at the\n> end of the Relation struct in v11, and that should be good enough to\n> avoid ABI problems.\n\nAgree that rd_partcheck needs its own context as you have complained in\nthe past [1].\n\nI think we'll need to back-patch this fix to PG 10 as well. I've attached\npatches for all 3 branches.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/22236.1523374067%40sss.pgh.pa.us", "msg_date": "Thu, 14 Mar 2019 10:40:31 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Mar 13, 2019 at 2:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, that probably makes sense, though it might be polite to wait\n> another hour or two to see if anyone wants to argue with that approach\n> further.\n\nHearing nobody, done. If someone wants to argue more we can always\nchange it later.\n\nI did not back-patch, because the code is in a different file in v11,\nnone of the hunks of the patch apply on v11, and v11 is not failing on\nhyrax. But feel free to do it if you feel strongly about it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 12:16:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I did not back-patch, because the code is in a different file in v11,\n> none of the hunks of the patch apply on v11, and v11 is not failing on\n> hyrax.\n\nHmm, I wonder why not. I suppose the answer is that\nthe leak is worse in HEAD than before, but how come?\n\nI followed your reference to 898e5e329, and I've got to say that\nthe hunk it adds in relcache.c looks fishy as can be. 
The argument\nthat the rd_pdcxt \"will get cleaned up eventually\" reads to me like\n\"this leaks memory like a sieve\", especially in the repeated-rebuild\nscenario which is exactly what CLOBBER_CACHE_ALWAYS would provoke.\nProbably the only thing that keeps it from being effectively a\nsession-lifespan leak is that CCA will generally result in relcache\nentries being flushed entirely as soon as their refcount goes to 0.\nStill, because of that, I wouldn't think it'd move the needle very\nmuch on a CCA animal; so my guess is that there's something else.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 12:36:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Thu, Mar 14, 2019 at 12:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, I wonder why not. I suppose the answer is that\n> the leak is worse in HEAD than before, but how come?\n\nI'd like to know that, too, but right now I don't.\n\n> I followed your reference to 898e5e329, and I've got to say that\n> the hunk it adds in relcache.c looks fishy as can be. The argument\n> that the rd_pdcxt \"will get cleaned up eventually\" reads to me like\n> \"this leaks memory like a sieve\", especially in the repeated-rebuild\n> scenario which is exactly what CLOBBER_CACHE_ALWAYS would provoke.\n> Probably the only thing that keeps it from being effectively a\n> session-lifespan leak is that CCA will generally result in relcache\n> entries being flushed entirely as soon as their refcount goes to 0.\n> Still, because of that, I wouldn't think it'd move the needle very\n> much on a CCA animal; so my guess is that there's something else.\n\nI'm a little uncertain of that logic, too, but keep in mind that if\nkeep_partdesc is true, we're just going to throw away the newly-build\ndata and keep the old stuff. 
So I think that in order for this to\nactually be a problem, you would have to have other sessions that\nrepeatedly alter the partitiondesc via ATTACH PARTITION, and at the\nsame time, the current session would have to keep on reading it. But\nyou also have to never add a partition while the local session is\nbetween queries, because if you do, rebuild will be false and we'll\nblow away the old entry in its entirety and any old contexts that are\nhanging off of it will be destroyed as well. So it seems a little\nticklish to me to figure out a realistic scenario in which we actually\nleak enough memory to matter here, but maybe you have an idea of which\nI have not thought.\n\nWe could certainly do better - just refcount each PartitionDesc. Have\nthe relcache entry hold a refcount on the PartitionDesc and have a\nPartitionDirectory hold a refcount on the PartitionDesc; then, any\ntime the refcount goes to 0, you can immediately destroy the\nPartitionDesc. Or, simpler, arrange things so that when the\nrefcache's refcount goes to 0, any old PartitionDescs that are still\nhanging off of the latest one get destroyed then, not later. It's\njust a question of whether it's really worth the code, or whether\nwe're fixing imaginary bugs by adding code that might have real ones.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 13:18:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 14, 2019 at 12:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, I wonder why not. 
I suppose the answer is that\n>> the leak is worse in HEAD than before, but how come?\n\n> I'd like to know that, too, but right now I don't.\n\nI poked at this some more, by dint of adding some memory context usage\ntracking and running it under CLOBBER_CACHE_ALWAYS --- specifically,\nI looked at the update.sql test script, which is one of the two that\nis giving hyrax trouble. (Conveniently, that one runs successfully\nby itself, without need for the rest of the regression suite.)\n\nI found that even with 2455ab488, there still seemed to be an\nunreasonable amount of MessageContext bloat during some of the queries,\non the order of 50MB in some of them. Investigating more closely,\nI found that 2455ab488 has a couple of simple oversights: it's still\ncalling partition_bounds_create in the caller's context, allowing\nwhatever that builds or leaks to be a MessageContext leak. And it's\nstill calling get_rel_relkind() in the rd_pdcxt context, potentially\nleaking a *lot* of stuff into that normally-long-lived context, since\nthat will result in fresh catcache (and thence potentially relcache)\nloads in some cases.\n\nBut fixing that didn't seem to move the needle very much for update.sql.\nWith some further investigation, I identified a few other main\ncontributors to the bloat:\n\n\tRelationBuildTriggers\n\tRelationBuildRowSecurity\n\tRelationBuildPartitionKey\n\nAre you noticing a pattern yet? The real issue here is that we have\nthis its-okay-to-leak-in-the-callers-context mindset throughout\nrelcache.c, and when CLOBBER_CACHE_ALWAYS causes us to reload relcache\nentries a lot, that adds up fast. The partition stuff makes this worse,\nI think, primarily just because it increases the number of tables we\ntouch in a typical query.\n\nWhat I'm thinking, therefore, is that 2455ab488 had the right idea but\ndidn't take it far enough. 
We should remove the temp-context logic it\nadded to RelationBuildPartitionDesc and instead put that one level up,\nin RelationBuildDesc, where the same temp context can serve all of these\nleak-prone sub-facilities.\n\nPossibly it'd make sense to conditionally compile this so that we only\ndo it in a CLOBBER_CACHE_ALWAYS build. I'm not very sure about that,\nbut arguably in a normal build the overhead of making and destroying\na context would outweigh the cost of the leaked memory. The main\nargument I can think of for doing it all the time is that having memory\nallocation work noticeably differently in CCA builds than normal ones\nseems like a recipe for masking normal-mode bugs from the CCA animals.\nBut that's only a guess not proven fact; it's also possible that having\ntwo different behaviors would expose bugs we'd otherwise have a hard\ntime detecting, such as continuing to rely on the \"temporary\" data\nstructures longer than we should.\n\n(This is all independent of the other questions raised on this and\nnearby threads.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 18:08:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 15/03/2019 00:08, Tom Lane wrote:\n> What I'm thinking, therefore, is that 2455ab488 had the right idea but\n> didn't take it far enough. We should remove the temp-context logic it\n> added to RelationBuildPartitionDesc and instead put that one level up,\n> in RelationBuildDesc, where the same temp context can serve all of these\n> leak-prone sub-facilities.\n> \n> Possibly it'd make sense to conditionally compile this so that we only\n> do it in a CLOBBER_CACHE_ALWAYS build. I'm not very sure about that,\n> but arguably in a normal build the overhead of making and destroying\n> a context would outweigh the cost of the leaked memory. 
The main\n> argument I can think of for doing it all the time is that having memory\n> allocation work noticeably differently in CCA builds than normal ones\n> seems like a recipe for masking normal-mode bugs from the CCA animals.\n\nHaving CLOBBER_CACHE_ALWAYS behave differently sounds horrible.\n\nWe maintain a free list of AllocSetContexts nowadays, so creating a \nshort-lived context should be pretty cheap. Or if it's still too \nexpensive, we could create one short-lived context as a child of \nTopMemoryContext, and reuse that on every call, resetting it at the end \nof the function.\n\n- Heikki\n\n", "msg_date": "Fri, 15 Mar 2019 09:32:22 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Fri, Mar 15, 2019 at 3:32 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> We maintain a free list of AllocSetContexts nowadays, so creating a\n> short-lived context should be pretty cheap. Or if it's still too\n> expensive, we could create one short-lived context as a child of\n> TopMemoryContext, and reuse that on every call, resetting it at the end\n> of the function.\n\nRelcache rebuild is reeentrant, I think, so we have to be careful about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 15 Mar 2019 08:22:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Thu, Mar 14, 2019 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I found that even with 2455ab488, there still seemed to be an\n> unreasonable amount of MessageContext bloat during some of the queries,\n> on the order of 50MB in some of them. 
Investigating more closely,\n> I found that 2455ab488 has a couple of simple oversights: it's still\n> calling partition_bounds_create in the caller's context, allowing\n> whatever that builds or leaks to be a MessageContext leak.\n\nOops.\n\n> And it's\n> still calling get_rel_relkind() in the rd_pdcxt context, potentially\n> leaking a *lot* of stuff into that normally-long-lived context, since\n> that will result in fresh catcache (and thence potentially relcache)\n> loads in some cases.\n\nThat's really unfortunate. I know CLOBBER_CACHE_ALWAYS testing has a\nlot of value, but get_rel_relkind() is the sort of function that\ndevelopers need to be able to call without worrying with creating a\nmassive memory leak. Maybe the only time it is really going to get\ncalled enough to matter is from within the cache invalidation\nmachinery itself, in which case your idea will fix it. But I'm not\nsure about that.\n\n> What I'm thinking, therefore, is that 2455ab488 had the right idea but\n> didn't take it far enough. We should remove the temp-context logic it\n> added to RelationBuildPartitionDesc and instead put that one level up,\n> in RelationBuildDesc, where the same temp context can serve all of these\n> leak-prone sub-facilities.\n\nYeah, that sounds good.\n\n> Possibly it'd make sense to conditionally compile this so that we only\n> do it in a CLOBBER_CACHE_ALWAYS build. I'm not very sure about that,\n> but arguably in a normal build the overhead of making and destroying\n> a context would outweigh the cost of the leaked memory. 
The main\n> argument I can think of for doing it all the time is that having memory\n> allocation work noticeably differently in CCA builds than normal ones\n> seems like a recipe for masking normal-mode bugs from the CCA animals.\n> But that's only a guess not proven fact; it's also possible that having\n> two different behaviors would expose bugs we'd otherwise have a hard\n> time detecting, such as continuing to rely on the \"temporary\" data\n> structures longer than we should.\n\nI lean toward thinking it makes more sense to be consistent, but I'm\nnot sure that's right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 15 Mar 2019 09:35:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 15/03/2019 00:08, Tom Lane wrote:\n>> Possibly it'd make sense to conditionally compile this so that we only\n>> do it in a CLOBBER_CACHE_ALWAYS build. I'm not very sure about that,\n>> but arguably in a normal build the overhead of making and destroying\n>> a context would outweigh the cost of the leaked memory. The main\n>> argument I can think of for doing it all the time is that having memory\n>> allocation work noticeably differently in CCA builds than normal ones\n>> seems like a recipe for masking normal-mode bugs from the CCA animals.\n\n> Having CLOBBER_CACHE_ALWAYS behave differently sounds horrible.\n\nI'm not entirely convinced of that. It's already pretty different...\n\n> We maintain a free list of AllocSetContexts nowadays, so creating a \n> short-lived context should be pretty cheap. 
Or if it's still too \n> expensive, we could create one short-lived context as a child of \n> TopMemoryContext, and reuse that on every call, resetting it at the end \n> of the function.\n\nRelationBuildDesc potentially recurses, so a single context won't do.\nPerhaps you're right that the context freelist would make this cheap\nenough to not matter, but I'm not sure of that either.\n\nWhat I'm inclined to do to get the buildfarm pressure off is to set\nthings up so that by default we do this only in CCA builds, but there's\na #define you can set from the compile command line to override that\ndecision in either direction. Then, if somebody wants to investigate\nthe performance effects in more detail, they could twiddle it easily.\nDepending on the results, we could alter the default policy.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 15 Mar 2019 09:39:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 14, 2019 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> And it's\n>> still calling get_rel_relkind() in the rd_pdcxt context, potentially\n>> leaking a *lot* of stuff into that normally-long-lived context, since\n>> that will result in fresh catcache (and thence potentially relcache)\n>> loads in some cases.\n\n> That's really unfortunate. I know CLOBBER_CACHE_ALWAYS testing has a\n> lot of value, but get_rel_relkind() is the sort of function that\n> developers need to be able to call without worrying with creating a\n> massive memory leak.\n\nI don't find that to be a concern. The bug here is that random code is\nbeing called while CurrentMemoryContext is pointing to a long-lived\ncontext, and that's just a localized violation of a well understood coding\nconvention. 
I don't think that that means we need some large fire drill\nin response.\n\n>> What I'm thinking, therefore, is that 2455ab488 had the right idea but\n>> didn't take it far enough. We should remove the temp-context logic it\n>> added to RelationBuildPartitionDesc and instead put that one level up,\n>> in RelationBuildDesc, where the same temp context can serve all of these\n>> leak-prone sub-facilities.\n\n> Yeah, that sounds good.\n\nOK, I'll have a go at that today.\n\n>> Possibly it'd make sense to conditionally compile this so that we only\n>> do it in a CLOBBER_CACHE_ALWAYS build. I'm not very sure about that,\n\n> I lean toward thinking it makes more sense to be consistent, but I'm\n> not sure that's right.\n\nMy feeling right now is that the its-okay-to-leak policy has been in\nplace for decades and we haven't detected a problem with it before.\nHence, doing that differently in normal builds should require some\npositive evidence that it'd be beneficial (or at least not a net loss).\nI don't have the time or interest to collect such evidence. But I'm\nhappy to set up the patch to make it easy for someone else to do so,\nif anyone is sufficiently excited about this to do the work.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 15 Mar 2019 09:54:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 14, 2019 at 12:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, I wonder why not. I suppose the answer is that\n>> the leak is worse in HEAD than before, but how come?\n\n> I'd like to know that, too, but right now I don't.\n\nBTW, after closer study of 898e5e329 I have a theory as to why it\nmade things worse for CCA animals: it causes relcache entries to be\nheld open (via incremented refcounts) throughout planning, which\nthey were not before. 
This means that, during each of the many\nRelationCacheInvalidate events that will occur during a planning\ncycle, we have to rebuild those relcache entries; previously they\nwere just dropped once and that was the end of it till execution.\n\nYou noted correctly that the existing PartitionDesc structure would\ngenerally get preserved during each reload --- but that has exactly\nzip to do with the amount of transient space consumed by execution\nof RelationBuildDesc. We still build a transient PartitionDesc\nthat we then elect not to install. And besides that there's all the\nnot-partitioning-related stuff that RelationBuildDesc can leak.\n\nThis is probably largely irrelevant for non-CCA situations, since\nwe don't expect forced relcache invals to occur often otherwise.\nBut it seems to fit the facts, particularly that hyrax is only dying\non queries that involve moderately large partitioning structures\nand hence quite a few relations pinned by PartitionDirectory entries.\n\nI remain of the opinion that we ought not have PartitionDirectories\npinning relcache entries ... but this particular effect isn't really\nsignificant to that argument, I think.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 15 Mar 2019 14:18:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Fri, Mar 15, 2019 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, after closer study of 898e5e329 I have a theory as to why it\n> made things worse for CCA animals: it causes relcache entries to be\n> held open (via incremented refcounts) throughout planning, which\n> they were not before. This means that, during each of the many\n> RelationCacheInvalidate events that will occur during a planning\n> cycle, we have to rebuild those relcache entries; previously they\n> were just dropped once and that was the end of it till execution.\n\nThat makes sense. 
Thanks for looking at it, and for the insight.\n\n> You noted correctly that the existing PartitionDesc structure would\n> generally get preserved during each reload --- but that has exactly\n> zip to do with the amount of transient space consumed by execution\n> of RelationBuildDesc. We still build a transient PartitionDesc\n> that we then elect not to install. And besides that there's all the\n> not-partitioning-related stuff that RelationBuildDesc can leak.\n\nRight. So it's just that we turned a bunch of rebuild = false\nsituations into rebuild = true situations. I think.\n\n> This is probably largely irrelevant for non-CCA situations, since\n> we don't expect forced relcache invals to occur often otherwise.\n> But it seems to fit the facts, particularly that hyrax is only dying\n> on queries that involve moderately large partitioning structures\n> and hence quite a few relations pinned by PartitionDirectory entries.\n>\n> I remain of the opinion that we ought not have PartitionDirectories\n> pinning relcache entries ... but this particular effect isn't really\n> significant to that argument, I think.\n\nCool. I would still like to hear more about why you think that. I\nagree that the pinning is not great, but copying moderately large\npartitioning structures that are only likely to get more complex in\nfuture releases also does not seem great, so I think it may be the\nleast of evils. You, on the other hand, seem to have a rather\nvisceral hatred for it. That may be based on facts with which I am\nnot acquainted or it may be that we have a different view of what good\ndesign looks like. 
If it's the latter, we may just have to agree to\ndisagree and see if other people want to opine, but if it's the\nformer, I'd like to know those facts.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 15 Mar 2019 14:39:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 15, 2019 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, after closer study of 898e5e329 I have a theory as to why it\n>> made things worse for CCA animals: it causes relcache entries to be\n>> held open (via incremented refcounts) throughout planning, which\n>> they were not before. This means that, during each of the many\n>> RelationCacheInvalidate events that will occur during a planning\n>> cycle, we have to rebuild those relcache entries; previously they\n>> were just dropped once and that was the end of it till execution.\n\n> Right. So it's just that we turned a bunch of rebuild = false\n> situations into rebuild = true situations. I think.\n\nMore to the point, we turned *one* rebuild = false situation into\na bunch of rebuild = true situations. I haven't studied it closely,\nbut I think a CCA animal would probably see O(N^2) rebuild = true\ninvocations in a query with N partitions, since each time\nexpand_partitioned_rtentry opens another child table, we'll incur\nan AcceptInvalidationMessages call which leads to forced rebuilds\nof the previously-pinned rels. In non-CCA situations, almost always\nnothing happens with the previously-examined relcache entries.\n\n>> I remain of the opinion that we ought not have PartitionDirectories\n>> pinning relcache entries ... but this particular effect isn't really\n>> significant to that argument, I think.\n\n> Cool. I would still like to hear more about why you think that. 
I\n> agree that the pinning is not great, but copying moderately large\n> partitioning structures that are only likely to get more complex in\n> future releases also does not seem great, so I think it may be the\n> least of evils. You, on the other hand, seem to have a rather\n> visceral hatred for it.\n\nI agree that copying data isn't great. What I don't agree is that this\nsolution is better. In particular, copying data out of the relcache\nrather than expecting the relcache to hold still over long periods\nis the way we've done things everywhere else, cf RelationGetIndexList,\nRelationGetStatExtList, RelationGetIndexExpressions,\nRelationGetIndexPredicate, RelationGetIndexAttrBitmap,\nRelationGetExclusionInfo, GetRelationPublicationActions. I don't care\nfor a patch randomly deciding to do things differently on the basis of an\nunsupported-by-evidence argument that it might cost too much to copy the\ndata. If we're going to make a push to reduce the amount of copying of\nthat sort that we do, it should be a separately (and carefully) designed\nthing that applies to all the relcache substructures that have the issue,\nnot one-off hacks that haven't been reviewed thoroughly.\n\nI especially don't care for the much-less-than-half-baked kluge of\nchaining the old rd_pdcxt onto the new one and hoping that it will go away\nat a suitable time. Yeah, that will work all right as long as it's not\nstressed very hard, but we've found repeatedly that users will find a way\nto overstress places where we're sloppy about resource management.\nIn this case, since the leak is potentially of session lifespan, I doubt\nit's even very hard to cause unbounded growth in relcache space usage ---\nyou just have to do $whatever over and over. 
Let's see ...\n\nregression=# create table parent (a text, b int) partition by list (a);\nCREATE TABLE\nregression=# create table child (a text, b int);\nCREATE TABLE\nregression=# do $$\nregression$# begin\nregression$# for i in 1..10000000 loop\nregression$# alter table parent attach partition child for values in ('x');\nregression$# alter table parent detach partition child;\nregression$# end loop;\nregression$# end $$;\n\nAfter letting that run for awhile, I've got enough rd_pdcxts to send\nMemoryContextStats into a nasty loop.\n\n CacheMemoryContext: 4259904 total in 11 blocks; 1501120 free (19 chunks); 2758784 used\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: 
parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n partition descriptor: 1024 total in 1 blocks; 400 free (0 chunks); 624 used: parent\n partition descriptor: 1024 total in 1 blocks; 680 free (0 chunks); 344 used: parent\n... yadda yadda ...\n\n\nThe leak isn't terribly fast, since this is an expensive thing to be\ndoing, but it's definitely leaking.\n\n> That may be based on facts with which I am\n> not acquainted or it may be that we have a different view of what good\n> design looks like.\n\nI don't think solving the same problem in multiple ways constitutes\ngood design, barring compelling reasons for the solutions being different,\nwhich you haven't presented. Solving the same problem in multiple ways\nsome of which are buggy is *definitely* not good design.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 15 Mar 2019 15:45:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Fri, Mar 15, 2019 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> More to the point, we turned *one* rebuild = false situation into\n> a bunch of rebuild = true situations. I haven't studied it closely,\n> but I think a CCA animal would probably see O(N^2) rebuild = true\n> invocations in a query with N partitions, since each time\n> expand_partitioned_rtentry opens another child table, we'll incur\n> an AcceptInvalidationMessages call which leads to forced rebuilds\n> of the previously-pinned rels. In non-CCA situations, almost always\n> nothing happens with the previously-examined relcache entries.\n\nThat's rather unfortunate. I start to think that clobbering the cache\n\"always\" is too hard a line.\n\n> I agree that copying data isn't great. What I don't agree is that this\n> solution is better. 
In particular, copying data out of the relcache\n> rather than expecting the relcache to hold still over long periods\n> is the way we've done things everywhere else, cf RelationGetIndexList,\n> RelationGetStatExtList, RelationGetIndexExpressions,\n> RelationGetIndexPredicate, RelationGetIndexAttrBitmap,\n> RelationGetExclusionInfo, GetRelationPublicationActions. I don't care\n> for a patch randomly deciding to do things differently on the basis of an\n> unsupported-by-evidence argument that it might cost too much to copy the\n> data. If we're going to make a push to reduce the amount of copying of\n> that sort that we do, it should be a separately (and carefully) designed\n> thing that applies to all the relcache substructures that have the issue,\n> not one-off hacks that haven't been reviewed thoroughly.\n\nThat's not an unreasonable argument. On the other hand, if you never\ntry new stuff, you lose opportunities to explore things that might\nturn out to be better and worth adopting more widely.\n\nI am not very convinced that it makes sense to lump something like\nRelationGetIndexAttrBitmap in with something like\nRelationGetPartitionDesc. RelationGetIndexAttrBitmap is returning a\nsingle Bitmapset, whereas the data structure RelationGetPartitionDesc\nis vastly larger and more complex. You can't say \"well, if it's best\nto copy 32 bytes of data out of the relcache every time we need it, it\nmust also be right to copy 10k or 100k of data out of the relcache\nevery time we need it.\"\n\nThere is another difference as well: there's a good chance that\nsomebody is going to want to mutate a Bitmapset, whereas they had\nBETTER NOT think that they can mutate the PartitionDesc. So returning\nan uncopied Bitmapset is kinda risky in a way that returning an\nuncopied PartitionDesc is not.\n\nIf we want an at-least-somewhat unified solution here, I think we need\nto bite the bullet and make the planner hold a reference to the\nrelcache throughout planning. 
(The executor already keeps it open, I\nbelieve.) Otherwise, how is the relcache supposed to know when it can\nthrow stuff away and when it can't? The only alternative seems to be\nto have each subsystem hold its own reference count, as I did with the\nPartitionDirectory stuff, which is not something we'd want to scale\nout.\n\n> I especially don't care for the much-less-than-half-baked kluge of\n> chaining the old rd_pdcxt onto the new one and hoping that it will go away\n> at a suitable time.\n\nIt will go away at a suitable time, but maybe not at the soonest\nsuitable time. It wouldn't be hard to improve that, though. The\neasiest thing to do, I think, would be to have an rd_oldpdcxt and\nstuff the old context there. If there already is one there, parent\nthe new one under it. When RelationDecrementReferenceCount reduces\nthe reference count to zero, destroy anything found in rd_oldpdcxt.\nWith that change, things get destroyed at the earliest time at which\nwe know the old things aren't referenced, instead of the earliest time\nat which they are not referenced + an invalidation arrives.\n\n> regression=# create table parent (a text, b int) partition by list (a);\n> CREATE TABLE\n> regression=# create table child (a text, b int);\n> CREATE TABLE\n> regression=# do $$\n> regression$# begin\n> regression$# for i in 1..10000000 loop\n> regression$# alter table parent attach partition child for values in ('x');\n> regression$# alter table parent detach partition child;\n> regression$# end loop;\n> regression$# end $$;\n\nInteresting example.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 15 Mar 2019 17:41:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/03/14 10:40, Amit Langote wrote:\n> On 2019/03/14 5:18, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Wed, Mar 13, 2019 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Meanwhile, who's going to take point on cleaning up rd_partcheck?\n>>>> I don't really understand this code well enough to know whether that\n>>>> can share one of the existing partitioning-related sub-contexts.\n>>\n>>> To your question, I think it probably can't share a context. Briefly,\n>>> rd_partkey can't change ever, except that when a partitioned relation\n>>> is in the process of being created it is briefly NULL; once it obtains\n>>> a value, that value cannot be changed. If you want to range-partition\n>>> a list-partitioned table or something like that, you have to drop the\n>>> table and create a new one. I think that's a perfectly acceptable\n>>> permanent limitation and I have no urge whatever to change it.\n>>> rd_partdesc changes when you attach or detach a child partition.\n>>> rd_partcheck is the reverse: it changes when you attach or detach this\n>>> partition to or from a parent.\n>>\n>> Got it. Yeah, it seems like the clearest and least bug-prone solution\n>> is for those to be in three separate sub-contexts. The only reason\n>> I was trying to avoid that was the angle of how to back-patch into 11.\n>> However, we can just add the additional context pointer field at the\n>> end of the Relation struct in v11, and that should be good enough to\n>> avoid ABI problems.\n> \n> Agree that rd_partcheck needs its own context as you have complained in\n> the past [1].\n> \n> I think we'll need to back-patch this fix to PG 10 as well. 
I've attached\n> patches for all 3 branches.\n> \n> Thanks,\n> Amit\n> \n> [1] https://www.postgresql.org/message-id/22236.1523374067%40sss.pgh.pa.us\n\nShould I add this patch to Older Bugs [1]?\n\nThanks,\nAmit\n\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\n\n\n", "msg_date": "Mon, 8 Apr 2019 17:56:53 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> Should I add this patch to Older Bugs [1]?\n\nYeah, it's an open issue IMO. I think we've been focusing on getting\nas many feature patches done as we could during the CF, but now it's\ntime to start mopping up problems like this one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2019 09:59:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Mon, Apr 8, 2019 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> > Should I add this patch to Older Bugs [1]?\n>\n> Yeah, it's an open issue IMO. I think we've been focusing on getting\n> as many feature patches done as we could during the CF, but now it's\n> time to start mopping up problems like this one.\n\nDo you have any further thoughts based on my last response?\n\nDoes anyone else wish to offer opinions?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Apr 2019 10:40:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/04/08 22:59, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> Should I add this patch to Older Bugs [1]?\n> \n> Yeah, it's an open issue IMO. 
I think we've been focusing on getting\n> as many feature patches done as we could during the CF, but now it's\n> time to start mopping up problems like this one.\n\nThanks, done.\n\nRegards,\nAmit\n\n\n\n", "msg_date": "Tue, 9 Apr 2019 13:00:41 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/03/16 6:41, Robert Haas wrote:\n> On Fri, Mar 15, 2019 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I agree that copying data isn't great. What I don't agree is that this\n>> solution is better. In particular, copying data out of the relcache\n>> rather than expecting the relcache to hold still over long periods\n>> is the way we've done things everywhere else, cf RelationGetIndexList,\n>> RelationGetStatExtList, RelationGetIndexExpressions,\n>> RelationGetIndexPredicate, RelationGetIndexAttrBitmap,\n>> RelationGetExclusionInfo, GetRelationPublicationActions. I don't care\n>> for a patch randomly deciding to do things differently on the basis of an\n>> unsupported-by-evidence argument that it might cost too much to copy the\n>> data. If we're going to make a push to reduce the amount of copying of\n>> that sort that we do, it should be a separately (and carefully) designed\n>> thing that applies to all the relcache substructures that have the issue,\n>> not one-off hacks that haven't been reviewed thoroughly.\n> \n> That's not an unreasonable argument. On the other hand, if you never\n> try new stuff, you lose opportunities to explore things that might\n> turn out to be better and worth adopting more widely.\n> \n> I am not very convinced that it makes sense to lump something like\n> RelationGetIndexAttrBitmap in with something like\n> RelationGetPartitionDesc. RelationGetIndexAttrBitmap is returning a\n> single Bitmapset, whereas the data structure RelationGetPartitionDesc\n> is vastly larger and more complex. 
You can't say \"well, if it's best\n> to copy 32 bytes of data out of the relcache every time we need it, it\n> must also be right to copy 10k or 100k of data out of the relcache\n> every time we need it.\"\n> \n> There is another difference as well: there's a good chance that\n> somebody is going to want to mutate a Bitmapset, whereas they had\n> BETTER NOT think that they can mutate the PartitionDesc. So returning\n> an uncopied Bitmapset is kinda risky in a way that returning an\n> uncopied PartitionDesc is not.\n> \n> If we want an at-least-somewhat unified solution here, I think we need\n> to bite the bullet and make the planner hold a reference to the\n> relcache throughout planning. (The executor already keeps it open, I\n> believe.) Otherwise, how is the relcache supposed to know when it can\n> throw stuff away and when it can't? The only alternative seems to be\n> to have each subsystem hold its own reference count, as I did with the\n> PartitionDirectory stuff, which is not something we'd want to scale\n> out.\n\nFwiw, I'd like to vote for planner holding the relcache reference open\nthroughout planning. The planner could then reference the various\nsubstructures directly (using a non-copying accessor), except those that\nsomething in the planner might want to modify, in which case use the\ncopying accessor.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Tue, 9 Apr 2019 13:38:33 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Mon, Apr 08, 2019 at 10:40:41AM -0400, Robert Haas wrote:\n> On Mon, Apr 8, 2019 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> Yeah, it's an open issue IMO. 
I think we've been focusing on getting\n>> as many feature patches done as we could during the CF, but now it's\n>> time to start mopping up problems like this one.\n\nPlease note that it is registered as an older bug and not an open\nitem.\n\n> Do you have any further thoughts based on my last response?\n\nSo your last response is that:\nhttps://www.postgresql.org/message-id/CA+Tgmoa5rT+ZR+Vv+q1XLwQtDMCqkNL6B4PjR4V6YAC9K_LBxw@mail.gmail.com\nAnd what are you proposing as patch? Perhaps something among those\nlines?\nhttps://www.postgresql.org/message-id/036852f2-ba7f-7a1f-21c6-00bc3515eda3@lab.ntt.co.jp\n\n> Does anyone else wish to offer opinions?\n\nIt seems to me that Tom's argument to push in the way relcache\ninformation is handled by copying its contents sounds sensible to me.\nThat's not perfect, but it is consistent with what exists (note\nPublicationActions for a rather-still-not-much-complex example of\nstructure which gets copied from the relcache).\n--\nMichael", "msg_date": "Wed, 10 Apr 2019 15:42:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Hi,\n\nOn 2019/04/10 15:42, Michael Paquier wrote:\n> On Mon, Apr 08, 2019 at 10:40:41AM -0400, Robert Haas wrote:\n>> On Mon, Apr 8, 2019 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>>> Yeah, it's an open issue IMO. I think we've been focusing on getting\n>>> as many feature patches done as we could during the CF, but now it's\n>>> time to start mopping up problems like this one.\n> \n> Please note that it is registered as an older bug and not an open\n> item.\n\nThe problem lies in all branches that have partitioning, so it should be\nlisted under Older Bugs, right? 
You may have noticed that I posted\npatches for all branches down to 10.\n\n>> Do you have any further thoughts based on my last response?\n> \n> So your last response is that:\n> https://www.postgresql.org/message-id/CA+Tgmoa5rT+ZR+Vv+q1XLwQtDMCqkNL6B4PjR4V6YAC9K_LBxw@mail.gmail.com\n> And what are you proposing as patch? Perhaps something among those\n> lines?\n> https://www.postgresql.org/message-id/036852f2-ba7f-7a1f-21c6-00bc3515eda3@lab.ntt.co.jp\n\nAFAIK, the patch there isn't meant to solve problems discussed at the 1st\nlink. It's meant to fix poor cache memory management of partition\nconstraint expression trees, which seems to be a separate issue from the\nPartitionDesc memory management issue (the latter is the main topic of\nthis thread.) Problem with partition constraint expression trees was only\nmentioned in passing in this thread [1], although it had also come up in\nthe past, as I said when posting the patch.\n\n>> Does anyone else wish to offer opinions?\n> \n> It seems to me that Tom's argument to push in the way relcache\n> information is handled by copying its contents sounds sensible to me.\n> That's not perfect, but it is consistent with what exists (note\n> PublicationActions for a rather-still-not-much-complex example of\n> structure which gets copied from the relcache).\n\nI gave my vote for direct access of unchangeable relcache substructures\n(TupleDesc, PartitionDesc, etc.), because they're accessed during the\nplanning of every user query and fairly expensive to copy compared to list\nof indexes or PublicationActions that you're citing. To further my point\na bit, I wonder why PublicationActions needs to be copied out of relcache\nat all? 
Looking at its usage in execReplication.c, it seems we can do\nfine without copying, because we are holding both a lock and a relcache\nreference on the replication target relation during the entirety of\napply_handle_insert(), which means that information can't change under us,\nneither logically nor physically.\n\nAlso, to reiterate what I think was one of Robert's points upthread [2],\nthe reason for requiring some code to copy the relcache substructures out\nof relcache should be that the caller might want change its content; for\nexample, planner or its hooks may want to add/remove an index to/from the\nlist of indexes copied from the relcache. The reason for copying should\nnot be that relcache content may change under us despite holding a lock\nand relcache reference.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/7961.1552498252%40sss.pgh.pa.us\n\n[2]\nhttps://www.postgresql.org/message-id/CA%2BTgmoa5rT%2BZR%2BVv%2Bq1XLwQtDMCqkNL6B4PjR4V6YAC9K_LBxw%40mail.gmail.com\n\n\n\n", "msg_date": "Wed, 10 Apr 2019 17:03:21 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Apr 10, 2019 at 05:03:21PM +0900, Amit Langote wrote:\n> The problem lies in all branches that have partitioning, so it should be\n> listed under Older Bugs, right? You may have noticed that I posted\n> patches for all branches down to 10.\n\nI have noticed. The message from Tom upthread outlined that an open\nitem should be added, but this is not one. That's what I wanted to\nemphasize. Thanks for making it clearer.\n--\nMichael", "msg_date": "Wed, 10 Apr 2019 20:09:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Apr 10, 2019 at 05:03:21PM +0900, Amit Langote wrote:\n>> The problem lies in all branches that have partitioning, so it should be\n>> listed under Older Bugs, right? You may have noticed that I posted\n>> patches for all branches down to 10.\n\n> I have noticed. The message from Tom upthread outlined that an open\n> item should be added, but this is not one. That's what I wanted to\n> emphasize. Thanks for making it clearer.\n\nTo clarify a bit: there's more than one problem here. Amit added an\nopen item about pre-existing leaks related to rd_partcheck. (I'm going\nto review and hopefully push his fix for that today.) However, there's\na completely separate leak associated with mismanagement of rd_pdcxt,\nas I showed in [1], and it doesn't seem like we have consensus about\nwhat to do about that one. AFAIK that's a new bug in 12 (caused by\n898e5e329) and so it ought to be in the main open-items list.\n\nThis thread has discussed a bunch of possible future changes like\ntrying to replace copying of relcache-provided data structures\nwith reference-counting, but I don't think any such thing could be\nv12 material at this point. We do need to fix the newly added\nleak though.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/10797.1552679128%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 12 Apr 2019 11:42:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> As a parenthetical note, I observe that relcache.c seems to know\n> almost nothing about rd_partcheck. rd_partkey and rd_partdesc both\n> have handling in RelationClearRelation(), but rd_partcheck does not,\n> and I suspect that's wrong. 
So the problems are probably not confined\n> to the relcache-drop-time problem that you observed.\n\nI concluded that that's not parenthetical but pretty relevant...\n\nAttached is a revised version of Amit's patch at [1] that makes these\ndata structures be treated more similarly. I also added some Asserts\nand comment improvements to address the complaints I made upthread about\nunder-documentation of all this logic.\n\nI also cleaned up the problem the code had with failing to distinguish\n\"partcheck list is NIL\" from \"partcheck list hasn't been computed yet\".\nFor a relation with no such constraints, generate_partition_qual would do\nthe full pushups every time. I'm not sure if the case actually occurs\ncommonly enough that that's a performance problem, but failure to account\nfor it made my added assertions fall over :-( and I thought fixing it\nwas better than weakening the assertions.\n\nI haven't made back-patch versions yet. I'd expect they could be\nsubstantially the same, except the two new fields have to go at the\nend of struct RelationData to avoid ABI breaks.\n\nOh: we might also need some change in RelationCacheInitializePhase3,\ndepending on the decision about [2].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/036852f2-ba7f-7a1f-21c6-00bc3515eda3@lab.ntt.co.jp\n[2] https://www.postgresql.org/message-id/5706.1555093031@sss.pgh.pa.us", "msg_date": "Fri, 12 Apr 2019 15:47:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Thanks for the updated patch.\n\nOn Sat, Apr 13, 2019 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > As a parenthetical note, I observe that relcache.c seems to know\n> > almost nothing about rd_partcheck. rd_partkey and rd_partdesc both\n> > have handling in RelationClearRelation(), but rd_partcheck does not,\n> > and I suspect that's wrong. 
So the problems are probably not confined\n> > to the relcache-drop-time problem that you observed.\n>\n> I concluded that that's not parenthetical but pretty relevant...\n\nHmm, do you mean we should grow the same \"keep_\" logic for\nrd_partcheck as we have for rd_partkey and rd_partdesc? I don't see\nit in the updated patch though.\n\n> Attached is a revised version of Amit's patch at [1] that makes these\n> data structures be treated more similarly. I also added some Asserts\n> and comment improvements to address the complaints I made upthread about\n> under-documentation of all this logic.\n\nThanks for all the improvements.\n\n> I also cleaned up the problem the code had with failing to distinguish\n> \"partcheck list is NIL\" from \"partcheck list hasn't been computed yet\".\n> For a relation with no such constraints, generate_partition_qual would do\n> the full pushups every time.\n\nActually, callers must have checked that the table is a partition\n(relispartition). It wouldn't be a bad idea to add an Assert or elog\nin generate_partition_qual.\n\n> I'm not sure if the case actually occurs\n> commonly enough that that's a performance problem, but failure to account\n> for it made my added assertions fall over :-( and I thought fixing it\n> was better than weakening the assertions.\n\nHmm, I wonder why the Asserts failed given what I said above.\n\n> I haven't made back-patch versions yet. I'd expect they could be\n> substantially the same, except the two new fields have to go at the\n> end of struct RelationData to avoid ABI breaks.\n\nTo save you the pain of finding the right files to patch in\nback-branches, I made those (attached).\n\nThanks,\nAmit", "msg_date": "Sun, 14 Apr 2019 00:32:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sat, Apr 13, 2019 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I concluded that that's not parenthetical but pretty relevant...\n\n> Hmm, do you mean we should grow the same \"keep_\" logic for\n> rd_partcheck as we have for rd_partkey and rd_partdesc? I don't see\n> it in the updated patch though.\n\nNo, the \"keep_\" stuff is only necessary when we're trying to preserve\ndata structures in-place, which is only important if non-relcache\ncode might be using pointers to it. Since rd_partcheck is never\ndirectly accessed by external code, only copied, there can't be any\nlive pointers to it to worry about. Besides which, since it's load\non demand rather than something that RelationBuildDesc forces to be\nvalid immediately, any comparison would be guaranteed to fail: the\nfield in the new reldesc will always be empty at this point.\n\nPerhaps there's an argument that it should be load-immediately not\nload-on-demand, but that would be an optimization not a bug fix,\nand I'm skeptical that it'd be an improvement anyway.\n\nProbably this is something to revisit whenever somebody gets around to\naddressing the whole copy-vs-dont-copy-vs-use-a-refcount business that\nwe were handwaving about upthread.\n\n>> I also cleaned up the problem the code had with failing to distinguish\n>> \"partcheck list is NIL\" from \"partcheck list hasn't been computed yet\".\n>> For a relation with no such constraints, generate_partition_qual would do\n>> the full pushups every time.\n\n> Actually, callers must have checked that the table is a partition\n> (relispartition).\n\nThat does not mean that it's guaranteed to have any partcheck constraints;\nthere are counterexamples in the regression tests. 
It looks like the main\ncase is a LIST-partitioned table that has only a default partition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2019 11:53:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> To save you the pain of finding the right files to patch in\n> back-branches, I made those (attached).\n\nBTW, as far as that goes: I'm of the opinion that the partitioning logic's\nfactorization in this area is pretty awful, and v12 has made it worse not\nbetter. It's important IMO that there be a clear distinction between code\nthat belongs to/can manipulate the relcache, and outside code that ought\nat most to examine it (and maybe not even that, depending on where we come\ndown on this copy-vs-refcount business). Maybe allowing\nutils/cache/partcache.c to be effectively halfway inside the relcache\nmodule is tolerable, but I don't think it's great design. Allowing\nfiles over in partitioning/ to also have functions inside that boundary\nis really not good, especially when you can't even say that all of\npartdesc.c is part of relcache.\n\nI'm seriously inclined to put RelationBuildPartitionDesc back where\nit was in v11.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2019 12:13:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/10 15:42, Michael Paquier wrote:\n>> It seems to me that Tom's argument to push in the way relcache\n>> information is handled by copying its contents sounds sensible to me.\n>> That's not perfect, but it is consistent with what exists (note\n>> PublicationActions for a rather-still-not-much-complex example of\n>> structure which gets copied from the relcache).\n\n> I gave my vote for direct access of unchangeable relcache substructures\n> (TupleDesc, PartitionDesc, etc.), because they're accessed during the\n> planning of every user query and fairly expensive to copy compared to list\n> of indexes or PublicationActions that you're citing. To further my point\n> a bit, I wonder why PublicationActions needs to be copied out of relcache\n> at all? Looking at its usage in execReplication.c, it seems we can do\n> fine without copying, because we are holding both a lock and a relcache\n> reference on the replication target relation during the entirety of\n> apply_handle_insert(), which means that information can't change under us,\n> neither logically nor physically.\n\nSo the point here is that that reasoning is faulty. You *cannot* assume,\nno matter how strong a lock or how many pins you hold, that a relcache\nentry will not get rebuilt underneath you. Cache flushes happen\nregardless. And unless relcache.c takes special measures to prevent it,\na rebuild will result in moving subsidiary data structures and thereby\nbreaking any pointers you may have pointing into those data structures.\n\nFor certain subsidiary structures such as the relation tupdesc,\nwe do take such special measures: that's what the \"keep_xxx\" dance in\nRelationClearRelation is. 
However, that's expensive, both in cycles\nand maintenance effort: it requires having code that can decide equality\nof the subsidiary data structures, which we might well have no other use\nfor, and which we certainly don't have strong tests for correctness of.\nIt's also very error-prone for callers, because there isn't any good way\nto cross-check that code using a long-lived pointer to a subsidiary\nstructure is holding a lock that's strong enough to guarantee non-mutation\nof that structure, or even that relcache.c provides any such guarantee\nat all. (If our periodic attempts to reduce lock strength for assorted\nDDL operations don't scare the pants off you in this connection, you have\nnot thought hard enough about it.) So I think that even though we've\nlargely gotten away with this approach so far, it's also a half-baked\nkluge that we should be looking to get rid of, not extend to yet more\ncases.\n\nTo my mind there are only two trustworthy solutions to the problem of\nwanting time-extended usage of a relcache subsidiary data structure: one\nis to copy it, and the other is to reference-count it. I think that going\nover to a reference-count-based approach for many of these structures\nmight well be something we should do in future, maybe even the very near\nfuture. In the mean time, though, I'm not really satisfied with inserting\nhalf-baked kluges, especially not ones that are different from our other\nhalf-baked kluges for similar purposes. I think that's a path to creating\nhard-to-reproduce bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2019 13:38:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 15, 2019 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I agree that copying data isn't great. 
What I don't agree is that this\n>> solution is better.\n\n> I am not very convinced that it makes sense to lump something like\n> RelationGetIndexAttrBitmap in with something like\n> RelationGetPartitionDesc. RelationGetIndexAttrBitmap is returning a\n> single Bitmapset, whereas the data structure RelationGetPartitionDesc\n> is vastly larger and more complex. You can't say \"well, if it's best\n> to copy 32 bytes of data out of the relcache every time we need it, it\n> must also be right to copy 10k or 100k of data out of the relcache\n> every time we need it.\"\n\nI did not say that. What I did say is that they're both correct\nsolutions. Copying large data structures is clearly a potential\nperformance problem, but that doesn't mean we can take correctness\nshortcuts.\n\n> If we want an at-least-somewhat unified solution here, I think we need\n> to bite the bullet and make the planner hold a reference to the\n> relcache throughout planning. (The executor already keeps it open, I\n> believe.) Otherwise, how is the relcache supposed to know when it can\n> throw stuff away and when it can't?\n\nThe real problem with that is that we *still* won't know whether it's\nokay to throw stuff away or not. The issue with these subsidiary\ndata structures is exactly that we're trying to reduce the lock strength\nrequired for changing one of them, and as soon as you make that lock\nstrength less than AEL, you have the problem that that value may need\nto change in relcache entries despite them being pinned. The code I'm\ncomplaining about is trying to devise a shortcut solution to exactly\nthat situation ... 
and it's not a good shortcut.\n\n> The only alternative seems to be to have each subsystem hold its own\n> reference count, as I did with the PartitionDirectory stuff, which is\n> not something we'd want to scale out.\n\nWell, we clearly don't want to devise a separate solution for every\nsubsystem, but it doesn't seem out of the question to me that we could\nbuild a \"reference counted cache sub-structure\" mechanism and apply it\nto each of these relcache fields. Maybe it could be unified with the\nexisting code in the typcache that does a similar thing. Sure, this'd\nbe a fair amount of work, but we've done it before. Syscache entries\ndidn't use to have reference counts, for example.\n\nBTW, the problem I have with the PartitionDirectory stuff is exactly\nthat it *isn't* a reference-counted solution. If it were, we'd not\nhave this problem of not knowing when we could free rd_pdcxt.\n\n>> I especially don't care for the much-less-than-half-baked kluge of\n>> chaining the old rd_pdcxt onto the new one and hoping that it will go away\n>> at a suitable time.\n\n> It will go away at a suitable time, but maybe not at the soonest\n> suitable time. It wouldn't be hard to improve that, though. The\n> easiest thing to do, I think, would be to have an rd_oldpdcxt and\n> stuff the old context there. If there already is one there, parent\n> the new one under it. When RelationDecrementReferenceCount reduces\n> the reference count to zero, destroy anything found in rd_oldpdcxt.\n\nMeh. While it seems likely that that would mask most practical problems,\nit still seems like covering up a wart with a dirty bandage. 
In\nparticular, that fix doesn't fix anything unless relcache reference counts\ngo to zero pretty quickly --- which I'll just note is directly contrary\nto your enthusiasm for holding relcache pins longer.\n\nI think that what we ought to do for v12 is have PartitionDirectory\ncopy the data, and then in v13 work on creating real reference-count\ninfrastructure that would allow eliminating the copy steps with full\nsafety. The $64 question is whether that really would cause unacceptable\nperformance problems. To look into that, I made the attached WIP patches.\n(These are functionally complete, but I didn't bother for instance with\nremoving the hunk that 898e5e329 added to relcache.c, and the comments\nneed work, etc.) The first one just changes the PartitionDirectory\ncode to do that, and then the second one micro-optimizes\npartition_bounds_copy() to make it somewhat less expensive, mostly by\ncollapsing lots of small palloc's into one big one.\n\nWhat I get for test cases like [1] is\n\nsingle-partition SELECT, hash partitioning:\n\nN tps, HEAD tps, patch\n2 11426.243754 11448.615193\n8 11254.833267 11374.278861\n32 11288.329114 11371.942425\n128 11222.329256 11185.845258\n512 11001.177137 10572.917288\n1024 10612.456470 9834.172965\n4096 8819.110195 7021.864625\n8192 7372.611355 5276.130161\n\nsingle-partition SELECT, range partitioning:\n\nN tps, HEAD tps, patch\n2 11037.855338 11153.595860\n8 11085.218022 11019.132341\n32 10994.348207 10935.719951\n128 10884.417324 10532.685237\n512 10635.583411 9578.108915\n1024 10407.286414 8689.585136\n4096 8361.463829 5139.084405\n8192 7075.880701 3442.542768\n\nNow certainly these numbers suggest that avoiding the copy could be worth\nour trouble, but these results are still several orders of magnitude\nbetter than where we were two weeks ago [2]. Plus, this is an extreme\ncase that's not really representative of real-world usage, since the test\ntables have neither indexes nor any data. 
In practical situations the\nbaseline would be lower and the dropoff less bad. So I don't feel bad\nabout shipping v12 with these sorts of numbers and hoping for more\nimprovement later.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3529.1554051867%40sss.pgh.pa.us\n\n[2] https://www.postgresql.org/message-id/0F97FA9ABBDBE54F91744A9B37151A512BAC60%40g01jpexmbkw24", "msg_date": "Sun, 14 Apr 2019 15:29:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/04/14 0:53, Tom Lane wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> On Sat, Apr 13, 2019 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I concluded that that's not parenthetical but pretty relevant...\n> \n>> Hmm, do you mean we should grow the same \"keep_\" logic for\n>> rd_partcheck as we have for rd_partkey and rd_partdesc? I don't see\n>> it in the updated patch though.\n> \n> No, the \"keep_\" stuff is only necessary when we're trying to preserve\n> data structures in-place, which is only important if non-relcache\n> code might be using pointers to it. Since rd_partcheck is never\n> directly accessed by external code, only copied, there can't be any\n> live pointers to it to worry about. Besides which, since it's load\n> on demand rather than something that RelationBuildDesc forces to be\n> valid immediately, any comparison would be guaranteed to fail: the\n> field in the new reldesc will always be empty at this point.\n\nAh, that's right. It was just that you were replying to this:\n\nRobert wrote:\n> As a parenthetical note, I observe that relcache.c seems to know\n> almost nothing about rd_partcheck. 
rd_partkey and rd_partdesc both\n> have handling in RelationClearRelation(), but rd_partcheck does not,\n> and I suspect that's wrong.\n\nMaybe I just got confused.\n\n> Perhaps there's an argument that it should be load-immediately not\n> load-on-demand, but that would be an optimization not a bug fix,\n> and I'm skeptical that it'd be an improvement anyway.\n\nMakes sense.\n\n> Probably this is something to revisit whenever somebody gets around to\n> addressing the whole copy-vs-dont-copy-vs-use-a-refcount business that\n> we were handwaving about upthread.\n\nOK.\n\n>>> I also cleaned up the problem the code had with failing to distinguish\n>>> \"partcheck list is NIL\" from \"partcheck list hasn't been computed yet\".\n>>> For a relation with no such constraints, generate_partition_qual would do\n>>> the full pushups every time.\n> \n>> Actually, callers must have checked that the table is a partition\n>> (relispartition).\n> \n> That does not mean that it's guaranteed to have any partcheck constraints;\n> there are counterexamples in the regression tests. It looks like the main\n> case is a LIST-partitioned table that has only a default partition.\n\nAh, yes. Actually, even a RANGE default partition that's the only\npartition of its parent has NIL partition constraint.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Mon, 15 Apr 2019 15:36:35 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/04/15 2:38, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/04/10 15:42, Michael Paquier wrote:\n>>> It seems to me that Tom's argument to push in the way relcache\n>>> information is handled by copying its contents sounds sensible to me.\n>>> That's not perfect, but it is consistent with what exists (note\n>>> PublicationActions for a rather-still-not-much-complex example of\n>>> structure which gets copied from the relcache).\n> \n>> I gave my vote for direct access of unchangeable relcache substructures\n>> (TupleDesc, PartitionDesc, etc.), because they're accessed during the\n>> planning of every user query and fairly expensive to copy compared to list\n>> of indexes or PublicationActions that you're citing. To further my point\n>> a bit, I wonder why PublicationActions needs to be copied out of relcache\n>> at all? Looking at its usage in execReplication.c, it seems we can do\n>> fine without copying, because we are holding both a lock and a relcache\n>> reference on the replication target relation during the entirety of\n>> apply_handle_insert(), which means that information can't change under us,\n>> neither logically nor physically.\n> \n> So the point here is that that reasoning is faulty. You *cannot* assume,\n> no matter how strong a lock or how many pins you hold, that a relcache\n> entry will not get rebuilt underneath you. Cache flushes happen\n> regardless. And unless relcache.c takes special measures to prevent it,\n> a rebuild will result in moving subsidiary data structures and thereby\n> breaking any pointers you may have pointing into those data structures.\n> \n> For certain subsidiary structures such as the relation tupdesc,\n> we do take such special measures: that's what the \"keep_xxx\" dance in\n> RelationClearRelation is. 
However, that's expensive, both in cycles\n> and maintenance effort: it requires having code that can decide equality\n> of the subsidiary data structures, which we might well have no other use\n> for, and which we certainly don't have strong tests for correctness of.\n> It's also very error-prone for callers, because there isn't any good way\n> to cross-check that code using a long-lived pointer to a subsidiary\n> structure is holding a lock that's strong enough to guarantee non-mutation\n> of that structure, or even that relcache.c provides any such guarantee\n> at all. (If our periodic attempts to reduce lock strength for assorted\n> DDL operations don't scare the pants off you in this connection, you have\n> not thought hard enough about it.) So I think that even though we've\n> largely gotten away with this approach so far, it's also a half-baked\n> kluge that we should be looking to get rid of, not extend to yet more\n> cases.\n\nThanks for the explanation.\n\nI understand that simply having a lock and a nonzero refcount on a\nrelation doesn't prevent someone else from changing it concurrently.\n\nI get that we want to get rid of the keep_* kludge in the long term, but\nis it wrong to think, for example, that having keep_partdesc today allows\nus today to keep the pointer to rd_partdesc as long as we're holding the\nrelation open or refcnt on the whole relation such as with\nPartitionDirectory mechanism?\n\n> To my mind there are only two trustworthy solutions to the problem of\n> wanting time-extended usage of a relcache subsidiary data structure: one\n> is to copy it, and the other is to reference-count it. I think that going\n> over to a reference-count-based approach for many of these structures\n> might well be something we should do in future, maybe even the very near\n> future. 
In the mean time, though, I'm not really satisfied with inserting\n> half-baked kluges, especially not ones that are different from our other\n> half-baked kluges for similar purposes. I think that's a path to creating\n> hard-to-reproduce bugs.\n\n+1 to reference-count-based approach.\n\nI've occasionally wondered why there is both keep_tupdesc kludge and\nrefcounting for table TupleDescs. I guess it's because *only* the\nTupleTableSlot mechanism in the executor uses TupleDesc pinning (that is,\nrefcounting) and the rest of the sites depend on the shaky guarantee\nprovided by keep_tupdesc.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Mon, 15 Apr 2019 17:05:16 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Mon, Apr 15, 2019 at 5:05 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> On 2019/04/15 2:38, Tom Lane wrote:\n> > So the point here is that that reasoning is faulty. You *cannot* assume,\n> > no matter how strong a lock or how many pins you hold, that a relcache\n> > entry will not get rebuilt underneath you. Cache flushes happen\n> > regardless. And unless relcache.c takes special measures to prevent it,\n> > a rebuild will result in moving subsidiary data structures and thereby\n> > breaking any pointers you may have pointing into those data structures.\n> >\n> > For certain subsidiary structures such as the relation tupdesc,\n> > we do take such special measures: that's what the \"keep_xxx\" dance in\n> > RelationClearRelation is. 
However, that's expensive, both in cycles\n> > and maintenance effort: it requires having code that can decide equality\n> > of the subsidiary data structures, which we might well have no other use\n> > for, and which we certainly don't have strong tests for correctness of.\n> > It's also very error-prone for callers, because there isn't any good way\n> > to cross-check that code using a long-lived pointer to a subsidiary\n> > structure is holding a lock that's strong enough to guarantee non-mutation\n> > of that structure, or even that relcache.c provides any such guarantee\n> > at all. (If our periodic attempts to reduce lock strength for assorted\n> > DDL operations don't scare the pants off you in this connection, you have\n> > not thought hard enough about it.) So I think that even though we've\n> > largely gotten away with this approach so far, it's also a half-baked\n> > kluge that we should be looking to get rid of, not extend to yet more\n> > cases.\n>\n> Thanks for the explanation.\n>\n> I understand that simply having a lock and a nonzero refcount on a\n> relation doesn't prevent someone else from changing it concurrently.\n>\n> I get that we want to get rid of the keep_* kludge in the long term, but\n> is it wrong to think, for example, that having keep_partdesc today allows\n> us today to keep the pointer to rd_partdesc as long as we're holding the\n> relation open or refcnt on the whole relation such as with\n> PartitionDirectory mechanism?\n\nAh, we're also trying to fix the memory leak caused by the current\ndesign of PartitionDirectory. AIUI, the design assumes that the leak\nwould occur in fairly rare cases, but maybe not so? 
If partitions are\nfrequently attached/detached concurrently (maybe won't be too uncommon\nif reduced lock levels encourages users) causing the PartitionDesc of\na given relation changing all the time, then a planning session that's\nholding the PartitionDirectory containing that relation would leak as\nmany PartitionDescs as there were concurrent changes, I guess.\n\nI see that you've proposed to change the PartitionDirectory design to\ncopy PartitionDesc as way of keeping it around instead holding the\nrelation open, but having to resort to that would be unfortunate.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 15 Apr 2019 23:57:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/04/15 2:38, Tom Lane wrote:\n>> To my mind there are only two trustworthy solutions to the problem of\n>> wanting time-extended usage of a relcache subsidiary data structure: one\n>> is to copy it, and the other is to reference-count it. I think that going\n>> over to a reference-count-based approach for many of these structures\n>> might well be something we should do in future, maybe even the very near\n>> future. In the mean time, though, I'm not really satisfied with inserting\n>> half-baked kluges, especially not ones that are different from our other\n>> half-baked kluges for similar purposes. I think that's a path to creating\n>> hard-to-reproduce bugs.\n\n> +1 to reference-count-based approach.\n\n> I've occasionally wondered why there is both keep_tupdesc kludge and\n> refcounting for table TupleDescs. 
I guess it's because *only* the\n> TupleTableSlot mechanism in the executor uses TupleDesc pinning (that is,\n> refcounting) and the rest of the sites depend on the shaky guarantee\n> provided by keep_tupdesc.\n\nThe reason for that is simply that at the time we added TupleDesc\nrefcounts, we didn't want to do the extra work of making all uses\nof relcache entries' tupdescs deal with refcounting; keep_tupdesc\nis certainly a kluge, but it works for an awful lot of callers.\nWe'd have to go back and deal with that more honestly if we go down\nthis path.\n\nI'm inclined to think we could still allow many call sites to not\ndo an incr/decr-refcount dance as long as they're just fetching\nscalar values out of the relcache's tupdesc, and not keeping any\npointer into it. However, it's hard to see how to enforce such\na rule mechanically, so it might be impractically error-prone\nto allow that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Apr 2019 13:35:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n>> I get that we want to get rid of the keep_* kludge in the long term, but\n>> is it wrong to think, for example, that having keep_partdesc today allows\n>> us today to keep the pointer to rd_partdesc as long as we're holding the\n>> relation open or refcnt on the whole relation such as with\n>> PartitionDirectory mechanism?\n\nWell, it's safe from the caller's standpoint as long as a suitable lock is\nbeing held, which is neither well-defined nor enforced in any way :-(\n\n> Ah, we're also trying to fix the memory leak caused by the current\n> design of PartitionDirectory. AIUI, the design assumes that the leak\n> would occur in fairly rare cases, but maybe not so? 
If partitions are\n> frequently attached/detached concurrently (maybe won't be too uncommon\n> if reduced lock levels encourages users) causing the PartitionDesc of\n> a given relation changing all the time, then a planning session that's\n> holding the PartitionDirectory containing that relation would leak as\n> many PartitionDescs as there were concurrent changes, I guess.\n\nWe should get a relcache inval after a partdesc change, but the problem\nwith the current code is that that will only result in freeing the old\npartdesc if the inval event is processed while the relcache entry has\nrefcount zero. Otherwise the old rd_pdcxt is just shoved onto the\ncontext chain, where it could survive indefinitely.\n\nI'm not sure that this is really a huge problem in practice. The example\nI gave upthread shows that a partdesc-changing transaction's own internal\ninvals do arrive during CommandCounterIncrement calls that occur while the\nrelcache pin is held; but it seems a bit artificial to assume that one\ntransaction would do a huge number of such changes. (Although, hm, maybe\na single-transaction pg_restore run could have an issue.) Once out of\nthe transaction, it's okay because we'll again invalidate the entry\nat the start of the next transaction, and then the refcount will be zero\nand we'll clean up. For other sessions it'd only happen if they saw the\ninval while already holding a pin on the partitioned table, which probably\nrequires some unlucky timing; and that'd have to happen repeatedly to have\na leak that amounts to anything.\n\nStill, though, I'm unhappy with the code as it stands. It's risky to\nassume that it has no unpleasant behaviors that we haven't spotted yet\nbut will manifest after v12 is in the field. And I do not think that\nit represents a solid base to build on. 
(As an example, if we made\nany effort to get rid of the redundant extra inval events that occur\npost-transaction, we'd suddenly have a much worse problem here.)\nI'd rather go over to the copy-based solution for now, which *is*\nsemantically sound, and accept that we still have more performance\nwork to do. It's not like v12 isn't going to be light-years ahead of\nv11 in this area anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Apr 2019 14:11:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Sun, Apr 14, 2019 at 3:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I get for test cases like [1] is\n>\n> single-partition SELECT, hash partitioning:\n>\n> N tps, HEAD tps, patch\n> 2 11426.243754 11448.615193\n> 8 11254.833267 11374.278861\n> 32 11288.329114 11371.942425\n> 128 11222.329256 11185.845258\n> 512 11001.177137 10572.917288\n> 1024 10612.456470 9834.172965\n> 4096 8819.110195 7021.864625\n> 8192 7372.611355 5276.130161\n>\n> single-partition SELECT, range partitioning:\n>\n> N tps, HEAD tps, patch\n> 2 11037.855338 11153.595860\n> 8 11085.218022 11019.132341\n> 32 10994.348207 10935.719951\n> 128 10884.417324 10532.685237\n> 512 10635.583411 9578.108915\n> 1024 10407.286414 8689.585136\n> 4096 8361.463829 5139.084405\n> 8192 7075.880701 3442.542768\n\nI have difficulty interpreting these results in any way other than as\nan endorsement of the approach that I took. It seems like you're\nproposing to throw away what is really a pretty substantial amount of\nperformance basically so that the code will look more like you think\nit should look. But I dispute the idea that the current code is so\nbad that we need to do this. 
I don't think that's the case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Apr 2019 18:55:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/04/15 4:29, Tom Lane wrote:\n> I think that what we ought to do for v12 is have PartitionDirectory\n> copy the data, and then in v13 work on creating real reference-count\n> infrastructure that would allow eliminating the copy steps with full\n> safety. The $64 question is whether that really would cause unacceptable\n> performance problems. To look into that, I made the attached WIP patches.\n> (These are functionally complete, but I didn't bother for instance with\n> removing the hunk that 898e5e329 added to relcache.c, and the comments\n> need work, etc.) The first one just changes the PartitionDirectory\n> code to do that, and then the second one micro-optimizes\n> partition_bounds_copy() to make it somewhat less expensive, mostly by\n> collapsing lots of small palloc's into one big one.\n\nThanks for the patches. 
The partition_bound_copy()-micro-optimize one\nlooks good in any case.\n\n> What I get for test cases like [1] is\n> \n> single-partition SELECT, hash partitioning:\n> \n> N tps, HEAD tps, patch\n> 2 11426.243754 11448.615193\n> 8 11254.833267 11374.278861\n> 32 11288.329114 11371.942425\n> 128 11222.329256 11185.845258\n> 512 11001.177137 10572.917288\n> 1024 10612.456470 9834.172965\n> 4096 8819.110195 7021.864625\n> 8192 7372.611355 5276.130161\n> \n> single-partition SELECT, range partitioning:\n> \n> N tps, HEAD tps, patch\n> 2 11037.855338 11153.595860\n> 8 11085.218022 11019.132341\n> 32 10994.348207 10935.719951\n> 128 10884.417324 10532.685237\n> 512 10635.583411 9578.108915\n> 1024 10407.286414 8689.585136\n> 4096 8361.463829 5139.084405\n> 8192 7075.880701 3442.542768\n> \n> Now certainly these numbers suggest that avoiding the copy could be worth\n> our trouble, but these results are still several orders of magnitude\n> better than where we were two weeks ago [2]. Plus, this is an extreme\n> case that's not really representative of real-world usage, since the test\n> tables have neither indexes nor any data.\n\nI tested the copyPartitionDesc() patch and here are the results for\nsingle-partition SELECT using hash partitioning, where index on queries\ncolumn, and N * 1000 rows inserted into the parent table before the test.\nI've confirmed that the plan is always an Index Scan on selected partition\n(in PG 11, it's under Append, but in HEAD there's no Append due to 8edd0e794)\n\nN tps, HEAD tps, patch tps, PG 11\n2 3093.443043 3039.804101 2928.777570\n8 3024.545820 3064.333027 2372.738622\n32 3029.580531 3032.755266 1417.706212\n128 3019.359793 3032.726006 567.099745\n512 2948.639216 2986.987862 98.710664\n1024 2971.629939 2882.233026 41.720955\n4096 2680.703000 1937.988908 7.035816\n8192 2599.120308 2069.271274 3.635512\n\nSo, the TPS degrades by 14% when going from 128 partitions to 8192\npartitions on HEAD, whereas it degrades by 31% with the 
patch.\n\nHere are the numbers with no indexes defined on the tables, and no data\ninserted.\n\nN tps, HEAD tps, patch tps, PG 11\n2 3498.247862 3463.695950 3110.314290\n8 3524.430780 3445.165206 2741.340770\n32 3476.427781 3427.400879 1645.602269\n128 3427.121901 3430.385433 651.586373\n512 3394.907072 3335.842183 182.201349\n1024 3454.050819 3274.266762 67.942075\n4096 3201.266380 2845.974556 12.320716\n8192 2955.850804 2413.723443 6.151703\n\nHere, the TPS degrades by 13% when going from 128 partitions to 8192\npartitions on HEAD, whereas it degrades by 29% with the patch.\n\nSo, the degradation caused by copying the bounds is almost same in both\ncases. Actually, even in the more realistic test with indexes and data,\nexecuting the plan is relatively faster than planning as the partition\ncount grows, because the PartitionBoundInfo that the planner now copies\ngrows bigger.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Wed, 17 Apr 2019 18:58:46 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On 2019/04/17 18:58, Amit Langote wrote:\n> where index on queries\n> column\n\nOops, I meant \"with an index on the queried column\".\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Wed, 17 Apr 2019 19:00:30 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Hi\n\nThe message I'm replying to is marked as an open item. Robert, what do\nyou think needs to be done here before release? Could you summarize,\nso we then can see what others think?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 May 2019 08:59:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, May 1, 2019 at 11:59 AM Andres Freund <andres@anarazel.de> wrote:\n> The message I'm replying to is marked as an open item. Robert, what do\n> you think needs to be done here before release? Could you summarize,\n> so we then can see what others think?\n\nThe original issue on this thread was that hyrax started running out\nof memory when it hadn't been doing so previously. That happened\nbecause, for complicated reasons, commit\n898e5e3290a72d288923260143930fb32036c00c resulted in the leak being\nhit lots of times in CLOBBER_CACHE_ALWAYS builds instead of just once.\nCommits 2455ab48844c90419714e27eafd235a85de23232 and\nd3f48dfae42f9655425d1f58f396e495c7fb7812 fixed that problem.\n\nIn the email at issue, Tom complains about two things. First, he\ncomplains about the fact that I try to arrange things so that relcache\ndata remains valid for as long as required instead of just copying it.\nSecond, he complains about the fact repeated ATTACH and DETACH\nPARTITION operations can produce a slow session-lifespan memory leak.\n\nI think it's reasonable to fix the second issue for v12. I am not\nsure how important it is, because (1) the leak is small, (2) it seems\nunlikely that anyone would execute enough ATTACH/DETACH PARTITION\noperations in one backend for the leak to amount to anything\nsignificant, and (3) if a relcache flush ever happens when you don't\nhave the relation open, all of the \"leaked\" memory will be un-leaked.\nHowever, I believe that we could fix it as follows. First, invent\nrd_pdoldcxt and put the first old context there; if that pointer is\nalready in use, then parent the new context under the old one.\nSecond, in RelationDecrementReferenceCount, if the refcount hits 0,\nnuke rd_pdoldcxt and set the pointer back to NULL. 
With that change,\nyou would only keep the old PartitionDesc around until the ref count\nhits 0, whereas at present, you keep the old PartitionDesc around\nuntil an invalidation happens while the ref count is 0.\n\nI think the first issue is not v12 material. Tom proposed to fix it\nby copying all the relevant data out of the relcache, but his own\nperformance results show a pretty significant hit, and AFAICS he\nhasn't pointed to anything that's actually broken by the current\napproach. What I think should be done is actually generalize the\napproach I took in this patch, so that instead of the partition\ndirectory holding a reference count, the planner or executor holds\none. Then not only could people who want information about the\nPartitionDesc avoid copying stuff from the relcache, but maybe other\nthings as well. I think that would be significantly better than\ncontinuing to double-down on the current copy-everything approach,\nwhich really only works well in a world where a table can't have all\nthat much metadata, which is clearly not true for PostgreSQL any more.\nI'm not sure that Tom is actually opposed to this approach -- although\nI may have misunderstood his position -- but where we disagree, I\nthink, is that I see what I did in this commit as a stepping-stone\ntoward a better world, and he sees it as something that should be\nkilled with fire until that better world has fully arrived.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 May 2019 13:09:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Hi,\n\nOn 2019-05-01 13:09:07 -0400, Robert Haas wrote:\n> The original issue on this thread was that hyrax started running out\n> of memory when it hadn't been doing so previously. 
That happened\n> because, for complicated reasons, commit\n> 898e5e3290a72d288923260143930fb32036c00c resulted in the leak being\n> hit lots of times in CLOBBER_CACHE_ALWAYS builds instead of just once.\n> Commits 2455ab48844c90419714e27eafd235a85de23232 and\n> d3f48dfae42f9655425d1f58f396e495c7fb7812 fixed that problem.\n> \n> In the email at issue, Tom complains about two things. First, he\n> complains about the fact that I try to arrange things so that relcache\n> data remains valid for as long as required instead of just copying it.\n> Second, he complains about the fact repeated ATTACH and DETACH\n> PARTITION operations can produce a slow session-lifespan memory leak.\n> \n> I think it's reasonable to fix the second issue for v12. I am not\n> sure how important it is, because (1) the leak is small, (2) it seems\n> unlikely that anyone would execute enough ATTACH/DETACH PARTITION\n> operations in one backend for the leak to amount to anything\n> significant, and (3) if a relcache flush ever happens when you don't\n> have the relation open, all of the \"leaked\" memory will be un-leaked.\n> However, I believe that we could fix it as follows. First, invent\n> rd_pdoldcxt and put the first old context there; if that pointer is\n> already in use, then parent the new context under the old one.\n> Second, in RelationDecrementReferenceCount, if the refcount hits 0,\n> nuke rd_pdoldcxt and set the pointer back to NULL. With that change,\n> you would only keep the old PartitionDesc around until the ref count\n> hits 0, whereas at present, you keep the old PartitionDesc around\n> until an invalidation happens while the ref count is 0.\n\nThat sounds roughly reasonable. Tom, any objections? I think it'd be\ngood if we fixed this soon.\n\n\n> I think the first issue is not v12 material.
Tom proposed to fix it\n> by copying all the relevant data out of the relcache, but his own\n> performance results show a pretty significant hit,\n\nYea, the numbers didn't look very convincing.\n\n\n> and AFAICS he\n> hasn't pointed to anything that's actually broken by the current\n> approach. What I think should be done is actually generalize the\n> approach I took in this patch, so that instead of the partition\n> directory holding a reference count, the planner or executor holds\n> one. Then not only could people who want information about the\n> PartitionDesc avoid copying stuff from the relcache, but maybe other\n> things as well. I think that would be significantly better than\n> continuing to double-down on the current copy-everything approach,\n> which really only works well in a world where a table can't have all\n> that much metadata, which is clearly not true for PostgreSQL any more.\n> I'm not sure that Tom is actually opposed to this approach -- although\n> I may have misunderstood his position -- but where we disagree, I\n> think, is that I see what I did in this commit as a stepping-stone\n> toward a better world, and he sees it as something that should be\n> killed with fire until that better world has fully arrived.\n\nTom, are you ok with deferring further work here for v13?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 May 2019 12:59:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-01 13:09:07 -0400, Robert Haas wrote:\n>> In the email at issue, Tom complains about two things. 
First, he\n>> complains about the fact that I try to arrange things so that relcache\n>> data remains valid for as long as required instead of just copying it.\n>> Second, he complains about the fact that repeated ATTACH and DETACH\n>> PARTITION operations can produce a slow session-lifespan memory leak.\n>> \n>> I think it's reasonable to fix the second issue for v12.  I am not\n>> sure how important it is, because (1) the leak is small, (2) it seems\n>> unlikely that anyone would execute enough ATTACH/DETACH PARTITION\n>> operations in one backend for the leak to amount to anything\n>> significant, and (3) if a relcache flush ever happens when you don't\n>> have the relation open, all of the \"leaked\" memory will be un-leaked.\n\nYeah, I did some additional testing that showed that it's pretty darn\nhard to get the leak to amount to anything.  The test case that I\noriginally posted did many DDLs in a single transaction, and it\nseems that that's actually essential to get a meaningful leak; as\nsoon as you exit the transaction the leaked contexts will be recovered\nduring sinval cleanup.\n\n>> However, I believe that we could fix it as follows.  First, invent\n>> rd_pdoldcxt and put the first old context there; if that pointer is\n>> already in use, then parent the new context under the old one.\n>> Second, in RelationDecrementReferenceCount, if the refcount hits 0,\n>> nuke rd_pdoldcxt and set the pointer back to NULL.  With that change,\n>> you would only keep the old PartitionDesc around until the ref count\n>> hits 0, whereas at present, you keep the old PartitionDesc around\n>> until an invalidation happens while the ref count is 0.\n\n> That sounds roughly reasonable. 
I think it'd be\n> good if we fixed this soon.\n\nMy fundamental objection is that this kluge is ugly as sin.\nAdding more ugliness on top of it doesn't improve that; rather\nthe opposite.\n\n> Tom, are you ok with deferring further work here for v13?\n\nYeah, I think that that's really what we ought to do at this point.\nI'd like to see a new design here, but it's not happening for v12,\nand I don't want to add even more messiness that's predicated on\na wrong design.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2019 16:36:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Fri, May 17, 2019 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I did some additional testing that showed that it's pretty darn\n> hard to get the leak to amount to anything. The test case that I\n> originally posted did many DDLs in a single transaction, and it\n> seems that that's actually essential to get a meaningful leak; as\n> soon as you exit the transaction the leaked contexts will be recovered\n> during sinval cleanup.\n\nMy colleague Amul Sul rediscovered this same leak when he tried to\nattach lots of partitions to an existing partitioned table, all in the\ncourse of a single transaction. This seems a little less artificial\nthan Tom's original reproducer, which involved attaching and detaching\nthe same partition repeatedly.\n\nHere is a patch that tries to fix this, along the lines I previously\nsuggested; Amul reports that it does work for him. I am OK to hold\nthis for v13 if that's what people want, but I think it might be\nsmarter to commit it to v12. 
Maybe it's not a big leak, but it seems\neasy enough to do better, so I think we should.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 4 Jun 2019 08:25:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Tue, Jun 4, 2019 at 9:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, May 17, 2019 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Yeah, I did some additional testing that showed that it's pretty darn\n> > hard to get the leak to amount to anything. The test case that I\n> > originally posted did many DDLs in a single transaction, and it\n> > seems that that's actually essential to get a meaningful leak; as\n> > soon as you exit the transaction the leaked contexts will be recovered\n> > during sinval cleanup.\n>\n> My colleague Amul Sul rediscovered this same leak when he tried to\n> attach lots of partitions to an existing partitioned table, all in the\n> course of a single transaction. This seems a little less artificial\n> than Tom's original reproducer, which involved attaching and detaching\n> the same partition repeatedly.\n>\n> Here is a patch that tries to fix this, along the lines I previously\n> suggested; Amul reports that it does work for him.\n\nThanks for the patch.\n\nI noticed a crash with one of the scenarios that I think the patch is\nmeant to address. Let me describe the steps:\n\nAttach gdb (or whatever) to session 1 in which we'll run a query that\nwill scan at least two partitions and set a breakpoint in\nexpand_partitioned_rtentry(). Run the query, so the breakpoint will\nbe hit. 
Step through up to the start of the following loop in this\nfunction:\n\n    i = -1;\n    while ((i = bms_next_member(live_parts, i)) >= 0)\n    {\n        Oid            childOID = partdesc->oids[i];\n        Relation    childrel;\n        RangeTblEntry *childrte;\n        Index        childRTindex;\n        RelOptInfo *childrelinfo;\n\n        /* Open rel, acquiring required locks */\n        childrel = table_open(childOID, lockmode);\n\nNote that 'partdesc' in the above code is from the partition\ndirectory.  Before stepping through into the loop, start another\nsession and attach a new partition.  On into the loop.  When the 1st\npartition is opened, its locking will result in\nRelationClearRelation() being called on the parent relation due to a\npartition being attached concurrently, which, I observed, is actually\ninvoked a couple of times due to recursion.  Parent relation's\nrd_pdoldcxt will be set in this process, which contains the\naforementioned partition descriptor.\n\nBefore moving the loop to process the 2nd partition, attach another\npartition in session 2.  When the 2nd partition is opened,\nRelationClearRelation() will run again and in one of its recursive\ninvocations, it destroys the rd_pdoldcxt that was set previously,\ntaking the partition directory's partition descriptor with it.\nAnything that tries to access it later crashes.\n\nI couldn't figure out what to blame here though -- the design of\nrd_pdoldcxt or the fact that RelationClearRelation() is invoked\nmultiple times.  I will try to study it more closely tomorrow.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 5 Jun 2019 20:03:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Wed, Jun 5, 2019 at 8:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I noticed a crash with one of the scenarios that I think the patch is\n> meant to address. 
Let me describe the steps:\n>\n> Attach gdb (or whatever) to session 1 in which we'll run a query that\n> will scan at least two partitions and set a breakpoint in\n> expand_partitioned_rtentry().  Run the query, so the breakpoint will\n> be hit.  Step through up to the start of the following loop in this\n> function:\n>\n>     i = -1;\n>     while ((i = bms_next_member(live_parts, i)) >= 0)\n>     {\n>         Oid            childOID = partdesc->oids[i];\n>         Relation    childrel;\n>         RangeTblEntry *childrte;\n>         Index        childRTindex;\n>         RelOptInfo *childrelinfo;\n>\n>         /* Open rel, acquiring required locks */\n>         childrel = table_open(childOID, lockmode);\n>\n> Note that 'partdesc' in the above code is from the partition\n> directory.  Before stepping through into the loop, start another\n> session and attach a new partition.  On into the loop.  When the 1st\n> partition is opened, its locking will result in\n> RelationClearRelation() being called on the parent relation due to a\n> partition being attached concurrently, which, I observed, is actually\n> invoked a couple of times due to recursion.  Parent relation's\n> rd_pdoldcxt will be set in this process, which contains the\n> aforementioned partition descriptor.\n>\n> Before moving the loop to process the 2nd partition, attach another\n> partition in session 2.  When the 2nd partition is opened,\n> RelationClearRelation() will run again and in one of its recursive\n> invocations, it destroys the rd_pdoldcxt that was set previously,\n> taking the partition directory's partition descriptor with it.\n> Anything that tries to access it later crashes.\n>\n> I couldn't figure out what to blame here though -- the design of\n> rd_pdoldcxt or the fact that RelationClearRelation() is invoked\n> multiple times.  I will try to study it more closely tomorrow.\n\nOn further study, I concluded that it's indeed the multiple\ninvocations of RelationClearRelation() on the parent relation that\ncause rd_pdoldcxt to be destroyed prematurely. 
While it's\nproblematic that the session in which a new partition is attached to\nthe parent relation broadcasts 2 SI messages to invalidate the parent\nrelation [1], it's obviously better to fix how RelationClearRelation\nmanipulates rd_pdoldcxt so that duplicate SI messages are not harmful,\nfixing the latter is hardly an option.\n\nAttached is a patch that applies on top of Robert's pdoldcxt-v1.patch,\nwhich seems to fix this issue for me.  In the rare case where inval\nmessages due to multiple concurrent attachments get processed in a\nsession holding a reference on a partitioned table, there will be\nmultiple \"old\" partition descriptors and corresponding \"old\" contexts.\nAll of the old contexts are chained together via context-re-parenting,\nwith the newest \"old\" context being accessible from the table's\nrd_pdoldcxt.\n\nThanks,\nAmit\n\n[1]: inval.c: AddRelcacheInvalidationMessage() does try to prevent\nduplicate messages from being emitted, but the logic to detect\nduplicates doesn't work across CommandCounterIncrement().  There are\nat least two relcache inval requests in the ATTACH PARTITION code path, emitted\nby SetRelationHasSubclass and StorePartitionBound, resp., which are\nseparated by at least one CCI, so the 2nd request isn't detected as\nthe duplicate of the 1st.", "msg_date": "Thu, 6 Jun 2019 15:47:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. 
RelationBuildPartitionDesc" }, { "msg_contents": "On Thu, Jun 6, 2019 at 3:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> While it's\n> problematic that the session in which a new partition is attached to\n> the parent relation broadcasts 2 SI messages to invalidate the parent\n> relation [1], it's obviously better to fix how RelationClearRelation\n> manipulates rd_pdoldcxt so that duplicate SI messages are not harmful,\n> fixing the latter is hardly an option.\n\nSorry, I had meant to say \"fixing the former/the other is hardly an option\".\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 7 Jun 2019 13:02:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Thu, Jun 6, 2019 at 2:48 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached is a patch that applies on top of Robert's pdoldcxt-v1.patch,\n> which seems to fix this issue for me.\n\nYeah, that looks right. I think my patch was full of fuzzy thinking\nand inadequate testing; thanks for checking it over and coming up with\nthe right solution.\n\nAnyone else want to look/comment?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jun 2019 08:51:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 6, 2019 at 2:48 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Attached is a patch that applies on top of Robert's pdoldcxt-v1.patch,\n>> which seems to fix this issue for me.\n\n> Yeah, that looks right. 
I think my patch was full of fuzzy thinking\n> and inadequate testing; thanks for checking it over and coming up with\n> the right solution.\n\n> Anyone else want to look/comment?\n\nI think the existing code is horribly ugly and this is even worse.\nIt adds cycles to RelationDecrementReferenceCount which is a hotspot\nthat has no business dealing with this; the invariants are unclear;\nand there's no strong reason to think there aren't still cases where\nwe accumulate lots of copies of old partition descriptors during a\nsequence of operations. Basically you're just doubling down on a\nwrong design.\n\nAs I said upthread, my current inclination is to do nothing in this\narea for v12 and then try to replace the whole thing with proper\nreference counting in v13. I think the cases where we have a major\nleak are corner-case-ish enough that we can leave it as-is for one\nrelease.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jun 2019 13:57:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "On Tue, Jun 11, 2019 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think the existing code is horribly ugly and this is even worse.\n> It adds cycles to RelationDecrementReferenceCount which is a hotspot\n> that has no business dealing with this; the invariants are unclear;\n> and there's no strong reason to think there aren't still cases where\n> we accumulate lots of copies of old partition descriptors during a\n> sequence of operations. 
Basically you're just doubling down on a\n> wrong design.\n\nI don't understand why a function that decrements a reference count\nshouldn't be expected to free things if the reference count goes to\nzero.\n\nUnder what workload do you think adding this to\nRelationDecrementReferenceCount would cause a noticeable performance\nregression?\n\nI think the change is responsive to your previous complaint that the\ntiming of stuff getting freed is not very well pinned down. With this\nchange, it's much more tightly pinned down: it happens when the\nrefcount goes to 0. That is definitely not perfect, but I think that\nit is a lot easier to come up with scenarios where the leak\naccumulates because no cache flush happens while the relfcount is 0\nthan it is to come up with scenarios where the refcount never reaches\n0. I agree that the latter type of scenario probably exists, but I\ndon't think we've come up with one yet.\n\n> As I said upthread, my current inclination is to do nothing in this\n> area for v12 and then try to replace the whole thing with proper\n> reference counting in v13. I think the cases where we have a major\n> leak are corner-case-ish enough that we can leave it as-is for one\n> release.\n\nIs this something you're planning to work on yourself? Do you have a\ndesign in mind? Is the idea to reference-count the PartitionDesc?\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 12 Jun 2019 14:53:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jun 11, 2019 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I think the change is responsive to your previous complaint that the\n> timing of stuff getting freed is not very well pinned down. 
With this\n> change, it's much more tightly pinned down: it happens when the\n> refcount goes to 0.  That is definitely not perfect, but I think that\n> it is a lot easier to come up with scenarios where the leak\n> accumulates because no cache flush happens while the refcount is 0\n> than it is to come up with scenarios where the refcount never reaches\n> 0.  I agree that the latter type of scenario probably exists, but I\n> don't think we've come up with one yet.\n\nI don't know why you think that's improbable, given that the changes\naround PartitionDirectory-s cause relcache entries to be held open much\nlonger than before (something I've also objected to on this thread).\n\n>> As I said upthread, my current inclination is to do nothing in this\n>> area for v12 and then try to replace the whole thing with proper\n>> reference counting in v13.  I think the cases where we have a major\n>> leak are corner-case-ish enough that we can leave it as-is for one\n>> release.\n\n> Is this something you're planning to work on yourself?\n\nWell, I'd rather farm it out to somebody else, but ...\n\n> Do you have a\n> design in mind?  Is the idea to reference-count the PartitionDesc?\n\nWhat we discussed upthread was refcounting each of the various\nlarge sub-objects of relcache entries, not just the partdesc.\nI think if we're going to go this way we should bite the bullet and fix\nthem all.  I really want to burn down RememberToFreeTupleDescAtEOX() in\nparticular ... it seems likely to me that that's also a source of\nunpleasant memory leaks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2019 15:11:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hyrax vs. RelationBuildPartitionDesc" } ]
[ { "msg_contents": "Hi,\n\nI was reviewing Andrey Borodin's patch for GiST VACUUM [1], which \nincludes a new \"block set\" data structure, to track internal and empty \npages while vacuuming a GiST. The blockset data structure was a pretty \nsimple radix tree that can hold a set of BlockNumbers.\n\nThe memory usage of the radix tree would probably be good enough in real \nlife, as we also discussed on the thread. Nevertheless, I was somewhat \nbothered by it, so I did some measurements. I added some \nMemoryContextStats() calls to Andrey's test_blockset module, to print \nout memory usage.\n\nFor storing 5000000 random 32-bit integers, or a density of about 1% of \nbits set, the blockset consumed about 380 MB of memory. I think that's a \npretty realistic ratio of internal pages : leaf pages on a GiST index, \nso I would hope the blockset to be efficient in that ballpark. However, \n380 MB / 5000000 is about 76 bytes, so it's consuming about 76 bytes per \nstored block number. That's a lot of overhead! For comparison, a plain \nBlockNumber is just 4 bytes. With more sparse sets, it is even more \nwasteful, on a per-item basis, although the total memory usage will of \ncourse be smaller. (To be clear, no one is pushing around GiST indexes \nwith anywhere near 2^32 blocks, or 32 TB, but the per-BlockNumber stats \nare nevertheless interesting.)\n\nI started to consider rewriting the data structure into something more \nlike a B-tree. Then I remembered that I wrote a data structure pretty much \nlike that last year already! We discussed that on the \"Vacuum: allow \nusage of more than 1GB of work mem\" thread [2], to replace the current \nhuge array that holds the dead TIDs during vacuum.\n\nSo I dusted off that patch, and made it more general, so that it can be \nused to store arbitrary 64-bit integers, rather than ItemPointers or \nBlockNumbers. 
I then added a rudimentary form of compression to the leaf \npages, so that clusters of nearby values can be stored as an array of \n32-bit integers, or as a bitmap. That would perhaps be overkill, if it \nwas just to conserve some memory in GiST vacuum, but I think this will \nturn out to be a useful general-purpose facility.\n\nI plugged this new \"sparse bitset\" implementation into the same \ntest_blockset test. The memory usage for 5000000 values is now just over \n20 MB, or about 4.3 bytes per value. That's much more reasonable than \nthe 76 bytes.\n\nI'll do some more performance testing on this, to make sure it performs \nwell enough on random lookups, to also replace VACUUM's dead item \npointer array. Assuming that works out, I plan to polish up and commit \nthis, and use it in the GiST vacuum. I'm also tempted to change VACUUM \nto use this, since that should be pretty straightforward once we have \nthe data structure.\n\n[1] \nhttps://www.postgresql.org/message-id/A51F64E3-850D-4249-814E-54967103A859%40yandex-team.ru\n\n[2] \nhttps://www.postgresql.org/message-id/8e5cbf08-5dd8-466d-9271-562fc65f133f%40iki.fi\n\n- Heikki", "msg_date": "Wed, 13 Mar 2019 21:18:12 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Sparse bit set data structure" }, { "msg_contents": "On Wed, Mar 13, 2019 at 3:18 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I started to consider rewriting the data structure into something more\n> like B-tree. Then I remembered that I wrote a data structure pretty much\n> like that last year already! We discussed that on the \"Vacuum: allow\n> usage of more than 1GB of work mem\" thread [2], to replace the current\n> huge array that holds the dead TIDs during vacuum.\n>\n> So I dusted off that patch, and made it more general, so that it can be\n> used to store arbitrary 64-bit integers, rather than ItemPointers or\n> BlockNumbers. 
I then added a rudimentary form of compression to the leaf\n> pages, so that clusters of nearby values can be stored as an array of\n> 32-bit integers, or as a bitmap. That would perhaps be overkill, if it\n> was just to conserve some memory in GiST vacuum, but I think this will\n> turn out to be a useful general-purpose facility.\n\nYeah, that sounds pretty cool.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 13 Mar 2019 15:48:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "Hi!\n\n> On 14 Mar 2019, at 0:18, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> <0001-Add-SparseBitset-to-hold-a-large-set-of-64-bit-ints-.patch><0002-Andrey-Borodin-s-test_blockset-tool-adapted-for-Spar.patch>\n\nThat is a very interesting idea. Basically, B-tree vs. radix tree is a tradeoff between space and time.\n\nIn general, a lookup into a radix tree will touch fewer CPU cache lines.\nIn these terms, Bitmapset is on the most performant and memory-wasteful side: a lookup into a Bitmapset is always 1 cache line.\nPerformance of a radix tree can be good in case of skewed distributions, while a B-tree will be OK on uniform ones. I think that the distribution of GiST inner pages is uniform; the distribution of empty leaves is... I have no idea, let's consider it uniform too.\n\nI'd review this data structure ASAP. I just need to understand: do we aim at v12 or v13? (I did not solve the concurrency issues in GiST VACUUM yet, but will fix them at the weekend)\n\nAlso, maybe we should consider using RoaringBitmaps? [0]\nAs a side note I would add that while balanced trees are widely used for operations on external memory, there are more performant versions for main memory. 
Like AVL-tree and RB-tree.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/RoaringBitmap/CRoaring\n", "msg_date": "Thu, 14 Mar 2019 10:15:58 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On 14/03/2019 07:15, Andrey Borodin wrote:\n>> On 14 Mar 2019, at 0:18, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> <0001-Add-SparseBitset-to-hold-a-large-set-of-64-bit-ints-.patch><0002-Andrey-Borodin-s-test_blockset-tool-adapted-for-Spar.patch>\n> \n> That is a very interesting idea. Basically, B-tree vs. radix tree is a tradeoff between space and time.\n> \n> In general, a lookup into a radix tree will touch fewer CPU cache lines.\n> In these terms, Bitmapset is on the most performant and memory-wasteful side: a lookup into a Bitmapset is always 1 cache line.\n> Performance of a radix tree can be good in case of skewed distributions, while a B-tree will be OK on uniform ones. I think that the distribution of GiST inner pages is uniform; the distribution of empty leaves is... I have no idea, let's consider it uniform too.\n\nYeah. In this implementation, the leaf nodes are packed into bitmaps \nwhen possible, so it should perform quite well on skewed distributions, too.\n\n> I'd review this data structure ASAP. I just need to understand: do we aim at v12 or v13? (I did not solve the concurrency issues in GiST VACUUM yet, but will fix them at the weekend)\n\nI'm aiming for v12 with this. It's a fairly large patch, but it's very \nisolated. I think the most pressing issue is getting the rest of the \nGiST vacuum patch fixed. If you get that fixed over the weekend, I'll \ntake another look at it on Monday.\n\n> Also, maybe we should consider using RoaringBitmaps? [0]\n> As a side note I would add that while balanced trees are widely used for operations on external memory, there are more performant versions for main memory. Like AVL-tree and RB-tree.\n\nHmm. 
Yeah, this is quite similar to Roaring Bitmaps. Roaring bitmaps \nalso have a top-level, at which you binary search, and \"leaf\" nodes \nwhich can be bitmaps or arrays. In a roaring bitmap, the key space is \ndivided into fixed-size chunks, like in a radix tree, but different from \na B-tree.\n\nEven if we used AVL-trees or RB-trees or something else for the top \nlayers of the tree, I think at the bottom level, we'd still need to use \nsorted arrays or bitmaps, to get the density we want. So I think the \nimplementation at the leaf level would look pretty much the same, in any \ncase. And the upper levels don't take very much space, regardless of the \nimplementation. So I don't think it matters much.\n\n- Heikki\n\n", "msg_date": "Thu, 14 Mar 2019 10:42:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On Wed, Mar 13, 2019 at 8:18 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I started to consider rewriting the data structure into something more\n> like B-tree. Then I remembered that I wrote a data structure pretty much\n> like that last year already! We discussed that on the \"Vacuum: allow\n> usage of more than 1GB of work mem\" thread [2], to replace the current\n> huge array that holds the dead TIDs during vacuum.\n>\n> So I dusted off that patch, and made it more general, so that it can be\n> used to store arbitrary 64-bit integers, rather than ItemPointers or\n> BlockNumbers. I then added a rudimentary form of compression to the leaf\n> pages, so that clusters of nearby values can be stored as an array of\n> 32-bit integers, or as a bitmap. 
That would perhaps be overkill, if it\n> was just to conserve some memory in GiST vacuum, but I think this will\n> turn out to be a useful general-purpose facility.\n\nI had a quick look at it, so I thought first comments could be helpful.\n\n+ * If you change this, you must recalculate MAX_INTERVAL_LEVELS, too!\n+ * MAX_INTERNAL_ITEMS ^ MAX_INTERNAL_LEVELS >= 2^64.\n\nI think that MAX_INTERVAL_LEVELS was a typo for MAX_INTERNAL_LEVELS,\nwhich has probably been renamed to MAX_TREE_LEVELS in this patch.\n\n+ * with varying levels of \"compression\". Which one is used depending on the\n+ * values stored.\n\ndepends on?\n\n+ if (newitem <= sbs->last_item)\n+ elog(ERROR, \"cannot insert to sparse bitset out of order\");\n\nIs there any reason to disallow inserting duplicates? AFAICT nothing\nprevents that in the current code. If that's intended, that probably\nshould be documented.\n\nNothing struck me other than that, that's a pretty nice new lib :)\n\n", "msg_date": "Thu, 14 Mar 2019 16:37:16 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On Thu, Mar 14, 2019 at 4:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> + if (newitem <= sbs->last_item)\n> + elog(ERROR, \"cannot insert to sparse bitset out of order\");\n>\n> Is there any reason to disallow inserting duplicates? AFAICT nothing\n> prevents that in the current code. If that's intended, that probably\n> should be documented.\n\nThat of course won't work well with SBS_LEAF_BITMAP. 
I'd still prefer\na more explicit error message.\n\n", "msg_date": "Fri, 15 Mar 2019 07:30:10 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On 14/03/2019 17:37, Julien Rouhaud wrote:\n> On Wed, Mar 13, 2019 at 8:18 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> I started to consider rewriting the data structure into something more\n>> like B-tree. Then I remembered that I wrote a data structure pretty much\n>> like that last year already! We discussed that on the \"Vacuum: allow\n>> usage of more than 1GB of work mem\" thread [2], to replace the current\n>> huge array that holds the dead TIDs during vacuum.\n>>\n>> So I dusted off that patch, and made it more general, so that it can be\n>> used to store arbitrary 64-bit integers, rather than ItemPointers or\n>> BlockNumbers. I then added a rudimentary form of compression to the leaf\n>> pages, so that clusters of nearby values can be stored as an array of\n>> 32-bit integers, or as a bitmap. That would perhaps be overkill, if it\n>> was just to conserve some memory in GiST vacuum, but I think this will\n>> turn out to be a useful general-purpose facility.\n> \n> I had a quick look at it, so I thought first comments could be helpful.\n\nThanks!\n\n> + if (newitem <= sbs->last_item)\n> + elog(ERROR, \"cannot insert to sparse bitset out of order\");\n> \n> Is there any reason to disallow inserting duplicates? AFAICT nothing\n> prevents that in the current code. If that's intended, that probably\n> should be documented.\n\nYeah, we could easily allow setting the last item again. It would be a \nno-op, though, which doesn't seem very useful. It would be useful to \nlift the limitation that the values have to be added in ascending order, \nbut current users that we're thinking of don't need it. 
Let's add that \nlater, if the need arises.\n\nOr did you mean that the structure would be a \"bag\" rather than a \"set\", \nso that it would keep the duplicates? I don't think that would be good. \nI guess the vacuum code that this will be used in wouldn't care either \nway, but \"set\" seems like a cleaner concept.\n\nOn 13/03/2019 21:18, I wrote:\n> I'll do some more performance testing on this, to make sure it performs\n> well enough on random lookups, to also replace VACUUM's dead item\n> pointer array.\n\nTurns out, it didn't perform very well for that use case. I tested with \ndistributions where you have clusters of 1-200 integers, at 2^16 \nintervals. That's close to the distribution of ItemPointers in a VACUUM, \nwhere you have 1-200 (dead) items per page, and the offset number is \nstored in the low 16 bits. It used slightly less memory than the plain \narray of ItemPointers that we use today, but the code to use a bitmap at \nthe leaf level hardly ever kicks in, because there just aren't ever \nenough set bits for that to win. In order to get the dense packing, it \nneeds to be done in a much more fine-grained fashion.\n\nSo I rewrote the way the leaf nodes work, so that the leaf nodes no \nlonger use a bitmap, but a simple array of items, like on internal \nnodes. To still get the dense packing, the leaf items are packed using \nan algorithm called Simple-8b, which can encode between 1 and 240 integers \nin a single 64-bit word, depending on how far the integers are from each \nother. That works much better, and actually makes the code simpler, too.\n\nI renamed this thing to IntegerSet. That seems like a more accurate name \nthan the \"sparse bitset\" that I used to call it. There aren't any \"bits\" \nvisible in the public interface of this, after all.\n\nI improved the regression tests, so that it tests all the interface \nfunctions, and covers various corner-cases. 
It tests the set with \ndifferent patterns of integers, and it can print the memory usage and \nexecution times of adding values to the set, probing random values, and \niterating through the set. That is a useful micro-benchmark. The speed \nof all the operations seem to be in the same ballpark as with a simple \nsorted array, but it uses much less memory.\n\nI'm now pretty satisfied with this. Barring objections, I'll commit this \nin the next few days. Please review, if you have a chance.\n\n- Heikki", "msg_date": "Wed, 20 Mar 2019 03:10:51 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "Hi!\n\nGreat job!\n\n> 20 марта 2019 г., в 9:10, Heikki Linnakangas <hlinnaka@iki.fi> написал(а):\n> \n> Please review, if you have a chance.\n> \n> - Heikki\n> <0001-Add-IntegerSet-to-hold-large-sets-of-64-bit-ints-eff.patch>\n\nI'm looking into the code and have few questions:\n1. I'm not sure it is the best interface for iteration\nuint64\nintset_iterate_next(IntegerSet *intset, bool *found)\n\nwe will use it like\n\nwhile\n{\n bool found;\n BlockNumber x = (BlockNumber) intset_iterate_next(is, &found);\n if (!found)\n break;\n // do stuff\n}\n\nwe could use it like\n\nBlockNumber x;\nwhile(intset_iterate_next(is, &x))\n{\n // do stuff\n}\n\nBut that's not a big difference.\n\n\n2. \n * Limitations\n * -----------\n *\n * - Values must be added in order. 
(Random insertions would require\n * splitting nodes, which hasn't been implemented.)\n *\n * - Values cannot be added while iteration is in progress.\n\nYou check for violation of these limitation in code, but there is not tests for this checks.\nShould we add these tests?\n\nBest regards, Andrey Borodin.\n", "msg_date": "Wed, 20 Mar 2019 10:48:00 +0800", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On Wed, Mar 20, 2019 at 2:10 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 14/03/2019 17:37, Julien Rouhaud wrote:\n>\n> > + if (newitem <= sbs->last_item)\n> > + elog(ERROR, \"cannot insert to sparse bitset out of order\");\n> >\n> > Is there any reason to disallow inserting duplicates? AFAICT nothing\n> > prevents that in the current code. If that's intended, that probably\n> > should be documented.\n>\n> Yeah, we could easily allow setting the last item again. It would be a\n> no-op, though, which doesn't seem very useful. It would be useful to\n> lift the limitation that the values have to be added in ascending order,\n> but current users that we're thinking of don't need it. Let's add that\n> later, if the need arises.\n>\n> Or did you mean that the structure would be a \"bag\" rather than a \"set\",\n> so that it would keep the duplicates? I don't think that would be good.\n> I guess the vacuum code that this will be used in wouldn't care either\n> way, but \"set\" seems like a more clean concept.\n\nYes, I was thinking about \"bag\". For a set, allowing inserting\nduplicates is indeed a no-op and should be pretty cheap with almost no\nextra code for that. Maybe VACUUM can't have duplicate, but is it\nthat unlikely that other would need it? I'm wondering if just\nrequiring to merge multiple such structure isn't going to be needed\nsoon for instance. 
If that's not wanted, I'm still thinking that a\nless ambiguous error should be raised.\n\n> I'm now pretty satisfied with this. Barring objections, I'll commit this\n> in the next few days. Please review, if you have a chance.\n\nYou're defining SIMPLE8B_MAX_VALUE but never use it. Maybe you wanted\nto add an assert / explicit test in intset_add_member()?\n\n/*\n * We buffer insertions in a simple array, before packing and inserting them\n * into the B-tree. MAX_BUFFERED_VALUES sets the size of the buffer. The\n * encoder assumes that it is large enough, that we can always fill a leaf\n * item with buffered new items. In other words, MAX_BUFFERED_VALUES must be\n * larger than MAX_VALUES_PER_LEAF_ITEM.\n */\n#define MAX_BUFFERED_VALUES (MAX_VALUES_PER_LEAF_ITEM * 2)\n\nThe *2 is not immediately obvious here (at least it wasn't to me),\nmaybe explaining intset_flush_buffered_values() main loop rationale\nhere could be worthwhile.\n\nOtherwise, everything looks just fine!\n\n", "msg_date": "Wed, 20 Mar 2019 17:20:11 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On Wed, Mar 20, 2019 at 5:20 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 20, 2019 at 2:10 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> > I'm now pretty satisfied with this. Barring objections, I'll commit this\n> > in the next few days. Please review, if you have a chance.\n>\n> You're defining SIMPLE8B_MAX_VALUE but never use it. Maybe you wanted\n> to add an assert / explicit test in intset_add_member()?\n>\n> /*\n> * We buffer insertions in a simple array, before packing and inserting them\n> * into the B-tree. MAX_BUFFERED_VALUES sets the size of the buffer. The\n> * encoder assumes that it is large enough, that we can always fill a leaf\n> * item with buffered new items. 
In other words, MAX_BUFFERED_VALUES must be\n> * larger than MAX_VALUES_PER_LEAF_ITEM.\n> */\n> #define MAX_BUFFERED_VALUES (MAX_VALUES_PER_LEAF_ITEM * 2)\n>\n> The *2 is not immediately obvious here (at least it wasn't to me),\n> maybe explaining intset_flush_buffered_values() main loop rationale\n> here could be worthwhile.\n>\n> Otherwise, everything looks just fine!\n\nI forgot to mention a minor gripe about the intset_binsrch_uint64 /\nintset_binsrch_leaf function, which are 99% duplicates. But I don't\nknow if fixing that (something like passing the array as a void * and\npassing a getter function?) is worth the trouble.\n\n", "msg_date": "Wed, 20 Mar 2019 18:50:04 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "Committed, thanks for the review!\n\nI made one last-minute change: Instead of allocating a memory context in \nintset_create(), it is left to the caller. The GiST vacuum code created \ntwo integer sets, and it made more sense for it to use the same context \nfor both. Also, the VACUUM tid list patch would like to use a memory \ncontext with very large defaults, so that malloc() will decide to \nmmap() it, so that the memory can be returned to the OS. Because the \ndesired memory allocation varies between callers, so it's best to leave \nit to the caller.\n\nOn 20/03/2019 19:50, Julien Rouhaud wrote:\n> I forgot to mention a minor gripe about the intset_binsrch_uint64 /\n> intset_binsrch_leaf function, which are 99% duplicates. But I don't\n> know if fixing that (something like passing the array as a void * and\n> passing a getter function?) is worth the trouble.\n\nYeah. I felt that it's simpler to have two almost-identical functions, \neven though it's a bit repetitive, than try to have one function serve \nboth uses. 
The 'nextkey' argument is always passed as false to \nintset_binsrch_leaf() function, but I kept it anyway, to keep the \nfunctions identical. The compiler will surely optimize it away, so it \nmakes no difference to performance.\n\nOn 20/03/2019 04:48, Andrey Borodin wrote:\n> 1. I'm not sure it is the best interface for iteration\n>\n> uint64\n> intset_iterate_next(IntegerSet *intset, bool *found)\n> \n> we will use it like\n> \n> while\n> {\n> bool found;\n> BlockNumber x = (BlockNumber) intset_iterate_next(is, &found);\n> if (!found)\n> break;\n> // do stuff\n> }\n> \n> we could use it like\n> \n> BlockNumber x;\n> while(intset_iterate_next(is, &x))\n> {\n> // do stuff\n> }\n> \n> But that's not a big difference.\n\nAgreed, I changed it that way.\n\n- Heikki", "msg_date": "Fri, 22 Mar 2019 13:35:16 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "Hello,\n\nAccording to the draw and simple8b_mode struct comment, it seems there \nis a typo:\n\n> * 20-bit integer 20-bit integer 20-bit integer\n> * 1101 00000000000000010010 01111010000100100000 00000000000000010100\n> * ^\n> * selector\n> *\n> * The selector 1101 is 13 in decimal. From the modes table below, we see\n> * that it means that the codeword encodes three 12-bit integers. In decimal,\n> * those integers are 18, 500000 and 20. 
Because we encode deltas rather than\n> * absolute values, the actual values that they represent are 18, 500018 and\n> * 500038.\n[...]\n> {20, 3}, /* mode 13: three 20-bit integers */\n\n\nThe comment should be \"the codeword encodes three *20-bit* integers\" ?\n\nPatch attached.\n\nRegards,", "msg_date": "Thu, 28 Mar 2019 15:46:03 +0100", "msg_from": "Adrien NAYRAT <adrien.nayrat@anayrat.info>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "\nUh, should this be applied?\n\n---------------------------------------------------------------------------\n\nOn Thu, Mar 28, 2019 at 03:46:03PM +0100, Adrien NAYRAT wrote:\n> Hello,\n> \n> According to the draw and simple8b_mode struct comment, it seems there is a\n> typo:\n> \n> > * 20-bit integer 20-bit integer 20-bit integer\n> > * 1101 00000000000000010010 01111010000100100000 00000000000000010100\n> > * ^\n> > * selector\n> > *\n> > * The selector 1101 is 13 in decimal. From the modes table below, we see\n> > * that it means that the codeword encodes three 12-bit integers. In decimal,\n> > * those integers are 18, 500000 and 20. Because we encode deltas rather than\n> > * absolute values, the actual values that they represent are 18, 500018 and\n> > * 500038.\n> [...]\n> > {20, 3}, /* mode 13: three 20-bit integers */\n> \n> \n> The comment should be \"the codeword encodes three *20-bit* integers\" ?\n> \n> Patch attached.\n> \n> Regards,\n\n> diff --git a/src/backend/lib/integerset.c b/src/backend/lib/integerset.c\n> index 28b4a38609..9984fd55e8 100644\n> --- a/src/backend/lib/integerset.c\n> +++ b/src/backend/lib/integerset.c\n> @@ -805,7 +805,7 @@ intset_binsrch_leaf(uint64 item, leaf_item *arr, int arr_elems, bool nextkey)\n> * selector\n> *\n> * The selector 1101 is 13 in decimal. From the modes table below, we see\n> - * that it means that the codeword encodes three 12-bit integers. In decimal,\n> + * that it means that the codeword encodes three 20-bit integers. 
In decimal,\n> * those integers are 18, 500000 and 20. Because we encode deltas rather than\n> * absolute values, the actual values that they represent are 18, 500018 and\n> * 500038.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 8 Apr 2019 19:38:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On 2019-Apr-08, Bruce Momjian wrote:\n\n> Uh, should this be applied?\n\nYes, it's a pretty obvious typo methinks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Apr 2019 20:45:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Sparse bit set data structure" }, { "msg_contents": "On 09/04/2019 03:45, Alvaro Herrera wrote:\n> On 2019-Apr-08, Bruce Momjian wrote:\n> \n>> Uh, should this be applied?\n> \n> Yes, it's a pretty obvious typo methinks.\n\nPushed, thanks, and sorry for the delay.\n\n- Heikki\n\n\n", "msg_date": "Tue, 9 Apr 2019 08:35:19 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Sparse bit set data structure" } ]
[ { "msg_contents": "Hi, hackers,\n\nAccording to the document, \"to_reg* functions return null rather than\nthrowing an error if the name is not found\", but this is not the case\nif the arguments to those functions are schema qualified and the\ncaller does not have access permission of the schema even if the table\n(or other object) does exist -- we get an error.\n\nFor example, to_regclass() throws an error if its argument is\n'schema_name.table_name'' (i.e. contains schema name) and caller's\nrole doesn't have access permission of the schema. 
Same thing can be\nsaid to Other functions,\n\nI get complain from Pgpool-II users because it uses to_regclass()\ninternally to confirm table's existence but in the case above it's\nnot useful because the error aborts user's transaction.\n\nTo be more consistent with the doc and to make those functions more\nuseful, I propose to change current implementation so that they do not\nthrow an error if the name space cannot be accessible by the caller.\n\nAttached patch implements this. Comments and suggestions are welcome.\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_date": "Thu, 14 Mar 2019 13:37:00 +0900", "msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "Hi Takuma,\n\nOn Thu, 14 Mar 2019 13:37:00 +0900\nTakuma Hoshiai <hoshiai@sraoss.co.jp> wrote:\n\n> Hi, hackers,\n> \n> According to the document, \"to_reg* functions return null rather than\n> throwing an error if the name is not found\", but this is not the case\n> if the arguments to those functions are schema qualified and the\n> caller does not have access permission of the schema even if the table\n> (or other object) does exist -- we get an error.\n> \n> For example, to_regclass() throws an error if its argument is\n> 'schema_name.table_name'' (i.e. contains schema name) and caller's\n> role doesn't have access permission of the schema. Same thing can be\n> said to Other functions,\n> \n> I get complain from Pgpool-II users because it uses to_regclass()\n> internally to confirm table's existence but in the case above it's\n> not useful because the error aborts user's transaction.\n> \n> To be more consistent with the doc and to make those functions more\n> useful, I propose to change current implementation so that they do not\n> throw an error if the name space cannot be accessible by the caller.\n> \n> Attached patch implements this. Comments and suggestions are welcome.\n\nI reviewed the patch. Here are some comments:\n\n /*\n+ * RangeVarCheckNamespaceAccessNoError\n+ * Returns true if given relation's namespace can be accessable by the\n+ * current user. If no namespace is given in the relation, just returns\n+ * true.\n+ */\n+bool\n+RangeVarCheckNamespaceAccessNoError(RangeVar *relation)\n\nAlthough it might be trivial, the new function's name 'RangeVar...' seems a bit\nweird to me because this is used for not only to_regclass but also to_regproc,\nto_regtype, and so on, that is, the argument \"relation\" is not always a relation. \n\nThis function is used always with makeRangeVarFromNameList() as\n\n if (!RangeVarCheckNamespaceAccessNoError(makeRangeVarFromNameList(names)))\n\n, so how about merging the two function as below, for example:\n\n /*\n * CheckNamespaceAccessNoError\n * Returns true if the namespace in given qualified-name can be accessable\n * by the current user. If no namespace is given in the names, just returns\n * true.\n */\n bool\n CheckNamespaceAccessNoError(List *names);\n\n\nBTW, this patch supresses also \"Cross-database references is not allowed\" error in\naddition to the namespace ACL error. Is this an intentional change? 
If this error\ncan be allowed, you can use DeconstructQualifiedName() instead of\nmakeRangeVarFromNameList().\n\n\nIn the regression test, you are using \\gset and \\connect psql meta-commands to test\nthe user privilege to a namespace, but I think we can make this more simpler \nby using SET SESSION AUTHORIZATION and RESET AUTHORIZATION.\n\nRegards,\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n", "msg_date": "Tue, 19 Mar 2019 15:13:04 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "Hello.\n\nAt Thu, 14 Mar 2019 13:37:00 +0900, Takuma Hoshiai <hoshiai@sraoss.co.jp> wrote in <20190314133700.c271429ddc00ddab3aac2619@sraoss.co.jp>\n> Hi, hackers,\n> \n> According to the document, \"to_reg* functions return null rather than\n> throwing an error if the name is not found\", but this is not the case\n> if the arguments to those functions are schema qualified and the\n> caller does not have access permission of the schema even if the table\n> (or other object) does exist -- we get an error.\n\nYou explicitly specified the namespace and I think that it is not\nthe case of not-found. 
Same thing can be\n> said to Other functions,\n> \n> I get complain from Pgpool-II users because it uses to_regclass()\n> internally to confirm table's existence but in the case above it's\n> not useful because the error aborts user's transaction.\n\nI'm not sure how such unaccessible table names are given to the\nfunction there, but it is also natural that any user trying to\naccess unprivileged objects gets an error.\n\n> To be more consistent with the doc and to make those functions more\n> useful, I propose to change current implementation so that they do not\n> throw an error if the name space cannot be accessible by the caller.\n\nSince it doesn't seem a bug, I think that changing the existing\nbehavior is not acceptable. Maybe we can live with another\nsignature of the function like to_regproc(name text, noerror\nbool).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Mar 2019 16:00:58 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": ">> According to the document, \"to_reg* functions return null rather than\n>> throwing an error if the name is not found\", but this is not the case\n>> if the arguments to those functions are schema qualified and the\n>> caller does not have access permission of the schema even if the table\n>> (or other object) does exist -- we get an error.\n> \n> You explicitly specified the namespace and I think that it is not\n> the case of not-found. It is right that the error happens since\n> you explicitly tried to access a unprivileged schema.\n> \n>> For example, to_regclass() throws an error if its argument is\n>> 'schema_name.table_name'' (i.e. contains schema name) and caller's\n>> role doesn't have access permission of the schema. 
Same thing can be\n> said to Other functions,\n> \n> I get complain from Pgpool-II users because it uses to_regclass()\n> internally to confirm table's existence but in the case above it's\n> not useful because the error aborts user's transaction.\n\nI'm not sure how such unaccessible table names are given to the\nfunction there, but it is also natural that any user trying to\naccess unprivileged objects gets an error.\n\n> To be more consistent with the doc and to make those functions more\n> useful, I propose to change current implementation so that they do not\n> throw an error if the name space cannot be accessible by the caller.\n\nSince it doesn't seem a bug, I think that changing the existing\nbehavior is not acceptable. Maybe we can live with another\nsignature of the function like to_regproc(name text, noerror\nbool).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Mar 2019 16:00:58 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": ">> According to the document, \"to_reg* functions return null rather than\n>> throwing an error if the name is not found\", but this is not the case\n>> if the arguments to those functions are schema qualified and the\n>> caller does not have access permission of the schema even if the table\n>> (or other object) does exist -- we get an error.\n> \n> You explicitly specified the namespace and I think that it is not\n> the case of not-found. 
It is right that the error happens since\n> you explicitly tried to access a unprivileged schema.\n> \n>> For example, to_regclass() throws an error if its argument is\n>> 'schema_name.table_name'' (i.e. contains schema name) and caller's\n>> role doesn't have access permission of the schema. Same thing can be\n>> said to Other functions,\n>> \n>> I get complain from Pgpool-II users because it uses to_regclass()\n>> internally to confirm table's existence but in the case above it's\n>> not useful because the error aborts user's transaction.\n> \n> I'm not sure how such unaccessible table names are given to the\n> function there, but it is also natural that any user trying to\n> access unprivileged objects gets an error.\n\nYou misunderstand the functionality of to_regclass(). Even if a user\ndoes not have an access privilege of certain table, to_regclass() does\nnot raise an error.\n\ntest=> select * from t1;\nERROR: permission denied for table t1\n\ntest=> select to_regclass('t1')::oid;\n to_regclass \n-------------\n 1647238\n(1 row)\n\nSo why can't we do the same thing for schema? For me, that way seems\nto be more consistent.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n", "msg_date": "Tue, 19 Mar 2019 16:35:32 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "At Tue, 19 Mar 2019 16:35:32 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in <20190319.163532.529526338176696856.t-ishii@sraoss.co.jp>\n> >> According to the document, \"to_reg* functions return null rather than\n> >> throwing an error if the name is not found\", but this is not the case\n> >> if the arguments to those functions are schema qualified and the\n> >> caller does not have access permission of the schema even if the table\n> >> (or other object) does exist -- we get an error.\n> > \n> > You explicitly specified the namespace and I think that it is not\n> > the case of not-found. It is right that the error happens since\n> > you explicitly tried to access a unprivileged schema.\n> > \n> >> For example, to_regclass() throws an error if its argument is\n> >> 'schema_name.table_name'' (i.e. contains schema name) and caller's\n> >> role doesn't have access permission of the schema. Same thing can be\n> >> said to Other functions,\n> >> \n> >> I get complain from Pgpool-II users because it uses to_regclass()\n> >> internally to confirm table's existence but in the case above it's\n> >> not useful because the error aborts user's transaction.\n> > \n> > I'm not sure how such unaccessible table names are given to the\n> > function there, but it is also natural that any user trying to\n> > access unprivileged objects gets an error.\n> \n> You misunderstand the functionality of to_regclass(). Even if a user\n> does not have an access privilege of certain table, to_regclass() does\n> not raise an error.\n>\n> test=> select * from t1;\n> ERROR: permission denied for table t1\n> \n> test=> select to_regclass('t1')::oid;\n> to_regclass \n> -------------\n> 1647238\n> (1 row)\n> \n> So why can't we do the same thing for schema? For me, that way seems\n> to be more consistent.\n\nIt seems to be a different thing. The oid 1647239 would be a\ntable in public schema or any schema that the user has access\nto. 
If search_path contained only unprivileged schemas, the\nfunction silently ignores such schemas.\n\n=> set search_path to s1; -- the user doesn't have access to this schema.\n=> select to_regclass('t1')::oid; -- the table is really exists.\n> to_regclass \n> -------------\n> \n> (1 row)\n\nSuperuser gets the exepcted result.\n\n=# set search_path to s1;\n=# select to_regclass('t1')::oid; -- superuser has access to s1.\n> to_regclass \n> -------------\n> 87612\n> (1 row)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Mar 2019 17:23:42 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": ">> You misunderstand the functionality of to_regclass(). Even if a user\n>> does not have an access privilege of certain table, to_regclass() does\n>> not raise an error.\n>>\n>> test=> select * from t1;\n>> ERROR: permission denied for table t1\n>> \n>> test=> select to_regclass('t1')::oid;\n>> to_regclass \n>> -------------\n>> 1647238\n>> (1 row)\n>> \n>> So why can't we do the same thing for schema? For me, that way seems\n>> to be more consistent.\n> \n> It seems to be a different thing. The oid 1647239 would be a\n> table in public schema or any schema that the user has access\n> to. If search_path contained only unprivileged schemas, the\n> function silently ignores such schemas.\n> \n> => set search_path to s1; -- the user doesn't have access to this schema.\n> => select to_regclass('t1')::oid; -- the table is really exists.\n>> to_regclass \n>> -------------\n>> \n>> (1 row)\n\nI (and Hoshiai-san) concern about following case:\n\n# revoke usage on schema s1 from foo;\nREVOKE\n:\n[connect as foo]\ntest=> select to_regclass('s1.t1')::oid;\nERROR: permission denied for schema s1\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n", "msg_date": "Tue, 19 Mar 2019 17:54:01 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "At Tue, 19 Mar 2019 17:54:01 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in <20190319.175401.646838939186238443.t-ishii@sraoss.co.jp>\n> > It seems to be a different thing. The oid 1647239 would be a\n> > table in public schema or any schema that the user has access\n> > to. If search_path contained only unprivileged schemas, the\n> > function silently ignores such schemas.\n> > \n> > => set search_path to s1; -- the user doesn't have access to this schema.\n> > => select to_regclass('t1')::oid; -- the table is really exists.\n> >> to_regclass \n> >> -------------\n> >> \n> >> (1 row)\n> \n> I (and Hoshiai-san) concern about following case:\n> \n> # revoke usage on schema s1 from foo;\n> REVOKE\n> :\n> [connect as foo]\n> test=> select to_regclass('s1.t1')::oid;\n> ERROR: permission denied for schema s1\n\nThat works in a transaction. 
It looks right that the actually\nrevoked schema cannot be accessed.\n\nS1:foo: begin;\nS2:su : revoke usage on schema s1 from foo;\nS1:foo: select to_regclass('s1.t1')::oid;\n> to_regclass \n> -------------\n> 16418\nS2:foo: commit;\nS2:foo: select to_regclass('s1.t1')::oid;\n> ERROR: permission denied for schema s1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Mar 2019 19:09:59 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": ">> I (and Hoshiai-san) concern about following case:\n>> \n>> # revoke usage on schema s1 from foo;\n>> REVOKE\n>> :\n>> [connect as foo]\n>> test=> select to_regclass('s1.t1')::oid;\n>> ERROR: permission denied for schema s1\n> \n> That works in a transaction. 
It looks right that the actually\n> revoked schema cannot be accessed.\n> \n> S1:foo: begin;\n> S2:su : revoke usage on schema s1 from foo;\n> S1:foo: select to_regclass('s1.t1')::oid;\n>> to_regclass \n>> -------------\n>> 16418\n> S2:foo: commit;\n> S2:foo: select to_regclass('s1.t1')::oid;\n>> ERROR: permission denied for schema s1\n\nI'm confused. How is an explicit transaction related to the topic?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n", "msg_date": "Wed, 20 Mar 2019 07:13:28 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "At Tue, 19 Mar 2019 19:09:59 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190319.190959.25783254.horiguchi.kyotaro@lab.ntt.co.jp>\n> That works in a transaction. 
If not, it is right to emit an error.\n\nAs another discussion, as I wrote just before, can be raised that\nthe behavior really doesn't protect nothing. We can lookup the\noid of an unprivileged objects through the system catalogs.\n\nSo I think it is reasonable that we just ignore privileges in the\ncommands. Maybe regclassin and friends also should be changed in\nthe same way.\n\nIf we protect system catalogs later, the commands naturally will\nfollow the change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 Mar 2019 09:48:59 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "On Wed, 20 Mar 2019 09:48:59 +0900 (Tokyo Standard Time)\nKyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> At Wed, 20 Mar 2019 07:13:28 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in <20190320.071328.485760446856666486.t-ishii@sraoss.co.jp>\n> > >> I (and Hoshiai-san) concern about following case:\n> > >> \n> > >> # revoke usage on schema s1 from foo;\n> > >> REVOKE\n> > >> :\n> > >> [connect as foo]\n> > >> test=> select to_regclass('s1.t1')::oid;\n> > >> ERROR: permission denied for schema s1\n> > > \n> > > That works in a transaction. It looks right that the actually\n> > > revoked schema cannot be accessed.\n> > > \n> > > S1:foo: begin;\n> > > S2:su : revoke usage on schema s1 from foo;\n> > > S1:foo: select to_regclass('s1.t1')::oid;\n> > >> to_regclass \n> > >> -------------\n> > >> 16418\n> > > S2:foo: commit;\n> > > S2:foo: select to_regclass('s1.t1')::oid;\n> > >> ERROR: permission denied for schema s1\n> > \n> > I'm confused. How is an explicit transaction related to the topic?\n> \n> Since your example revokes the privilege just before (or\n> simultaneously with) \"using\" the unprivileged object. 
If the\n> given object name is obtained before the revokation, it can be\n> protected by beginning a transaction before obtaining the\n> name. If not, it is right to emit an error.\n\nWhat we want to say below is 'foo' has no privilege. not important to execute REVOKE.\n> # revoke usage on schema s1 from foo;\n> REVOKE\n> :\n> [connect as foo]\n> test=> select to_regclass('s1.t1')::oid;\n> ERROR: permission denied for schema s1\n\n> As another discussion, as I wrote just before, can be raised that\n> the behavior really doesn't protect nothing. We can lookup the\n> oid of an unprivileged objects through the system catalogs.\n> \n> So I think it is reasonable that we just ignore privileges in the\n> commands. Maybe regclassin and friends also should be changed in\n> the same way.\n\nYes, I think so too. \nBut their functions may be used for confirming a obejct visibility, so this time \nI want to supress errors only. \nAnd if want to raise an error about \"permission denied for schema xx\", \nwould use regclass() function.\n\n\nbest regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n> If we protect system catalogs later, the commands naturally will\n> follow the change.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n> \n> \n\n\n", "msg_date": "Wed, 20 Mar 2019 13:55:44 +0900", "msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "st 20. 3. 
2019 v 5:55 odesílatel Takuma Hoshiai <hoshiai@sraoss.co.jp>\nnapsal:\n\n> On Wed, 20 Mar 2019 09:48:59 +0900 (Tokyo Standard Time)\n> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n> > At Wed, 20 Mar 2019 07:13:28 +0900 (JST), Tatsuo Ishii <\n> ishii@sraoss.co.jp> wrote in <\n> 20190320.071328.485760446856666486.t-ishii@sraoss.co.jp>\n> > > >> I (and Hoshiai-san) concern about following case:\n> > > >>\n> > > >> # revoke usage on schema s1 from foo;\n> > > >> REVOKE\n> > > >> :\n> > > >> [connect as foo]\n> > > >> test=> select to_regclass('s1.t1')::oid;\n> > > >> ERROR: permission denied for schema s1\n> > > >\n> > > > That works in a transaction. It looks right that the actually\n> > > > revoked schema cannot be accessed.\n> > > >\n> > > > S1:foo: begin;\n> > > > S2:su : revoke usage on schema s1 from foo;\n> > > > S1:foo: select to_regclass('s1.t1')::oid;\n> > > >> to_regclass\n> > > >> -------------\n> > > >> 16418\n> > > > S2:foo: commit;\n> > > > S2:foo: select to_regclass('s1.t1')::oid;\n> > > >> ERROR: permission denied for schema s1\n> > >\n> > > I'm confused. How is an explicit transaction related to the topic?\n> >\n> > Since your example revokes the privilege just before (or\n> > simultaneously with) \"using\" the unprivileged object. If the\n> > given object name is obtained before the revokation, it can be\n> > protected by beginning a transaction before obtaining the\n> > name. If not, it is right to emit an error.\n>\n> What we want to say below is 'foo' has no privilege. not important to\n> execute REVOKE.\n> > # revoke usage on schema s1 from foo;\n> > REVOKE\n> > :\n> > [connect as foo]\n> > test=> select to_regclass('s1.t1')::oid;\n> > ERROR: permission denied for schema s1\n>\n> > As another discussion, as I wrote just before, can be raised that\n> > the behavior really doesn't protect nothing. 
We can lookup the\n> > oid of an unprivileged objects through the system catalogs.\n> >\n> > So I think it is reasonable that we just ignore privileges in the\n> > commands. Maybe regclassin and friends also should be changed in\n> > the same way.\n>\n> Yes, I think so too.\n> But their functions may be used for confirming a obejct visibility, so\n> this time\n> I want to supress errors only.\n> And if want to raise an error about \"permission denied for schema xx\",\n> would use regclass() function.\n>\n\n+1\n\nPavel\n\n\n>\n> best regards,\n>\n> --\n> Takuma Hoshiai <hoshiai@sraoss.co.jp>\n>\n> > If we protect system catalogs later, the commands naturally will\n> > follow the change.\n> >\n> > regards.\n> >\n> > --\n> > Kyotaro Horiguchi\n> > NTT Open Source Software Center\n> >\n> >\n>\n>\n>\n\nst 20. 3. 2019 v 5:55 odesílatel Takuma Hoshiai <hoshiai@sraoss.co.jp> napsal:On Wed, 20 Mar 2019 09:48:59 +0900 (Tokyo Standard Time)\nKyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n\n> At Wed, 20 Mar 2019 07:13:28 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in <20190320.071328.485760446856666486.t-ishii@sraoss.co.jp>\n> > >> I (and Hoshiai-san) concern about following case:\n> > >> \n> > >> # revoke usage on schema s1 from foo;\n> > >> REVOKE\n> > >> :\n> > >> [connect as foo]\n> > >> test=> select to_regclass('s1.t1')::oid;\n> > >> ERROR:  permission denied for schema s1\n> > > \n> > > That works in a transaction. It looks right that the actually\n> > > revoked schema cannot be accessed.\n> > > \n> > > S1:foo: begin;\n> > > S2:su : revoke usage on schema s1 from foo;\n> > > S1:foo: select to_regclass('s1.t1')::oid;\n> > >>  to_regclass \n> > >> -------------\n> > >>        16418\n> > > S2:foo: commit;\n> > > S2:foo: select to_regclass('s1.t1')::oid;\n> > >> ERROR:  permission denied for schema s1\n> > \n> > I'm confused. 
How is an explicit transaction related to the topic?\n> \n> Since your example revokes the privilege just before (or\n> simultaneously with) \"using\" the unprivileged object. If the\n> given object name is obtained before the revokation, it can be\n> protected by beginning a transaction before obtaining the\n> name. If not, it is right to emit an error.\n\nWhat we want to say below is 'foo' has no privilege. not important to execute REVOKE.\n> # revoke usage on schema s1 from foo;\n> REVOKE\n> :\n> [connect as foo]\n> test=> select to_regclass('s1.t1')::oid;\n> ERROR:  permission denied for schema s1\n\n> As another discussion, as I wrote just before, can be raised that\n> the behavior really doesn't protect nothing. We can lookup the\n> oid of an unprivileged objects through the system catalogs.\n> \n> So I think it is reasonable that we just ignore privileges in the\n> commands. Maybe regclassin and friends also should be changed in\n> the same way.\n\nYes, I think so too. \nBut their functions may be used for confirming a obejct visibility, so this time \nI want to supress errors only. \nAnd if want to raise  an error about \"permission denied for schema xx\", \nwould use regclass() function.+1Pavel\n\n\nbest regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n> If we protect system catalogs later, the commands naturally will\n> follow the change.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n> \n>", "msg_date": "Wed, 20 Mar 2019 06:34:53 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "Hi Nagata-san,\n\nsorry for te late reply.\nThank you for your comments, I have attached a patch that reflected it.\n\nOn Tue, 19 Mar 2019 15:13:04 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> I reviewed the patch. 
Here are some comments:\n> \n> /*\n> + * RangeVarCheckNamespaceAccessNoError\n> + * Returns true if given relation's namespace can be accessable by the\n> + * current user. If no namespace is given in the relation, just returns\n> + * true.\n> + */\n> +bool\n> +RangeVarCheckNamespaceAccessNoError(RangeVar *relation)\n> \n> Although it might be trivial, the new function's name 'RangeVar...' seems a bit\n> weird to me because this is used for not only to_regclass but also to_regproc,\n> to_regtype, and so on, that is, the argument \"relation\" is not always a relation. \n> \n> This function is used always with makeRangeVarFromNameList() as\n> \n> if (!RangeVarCheckNamespaceAccessNoError(makeRangeVarFromNameList(names)))\n> \n> , so how about merging the two function as below, for example:\n> \n> /*\n> * CheckNamespaceAccessNoError\n> * Returns true if the namespace in given qualified-name can be accessable\n> * by the current user. If no namespace is given in the names, just returns\n> * true.\n> */\n> bool\n> CheckNamespaceAccessNoError(List *names);\n> \n> \n> BTW, this patch supresses also \"Cross-database references is not allowed\" error in\n> addition to the namespace ACL error. Is this an intentional change? If this error\n> can be allowed, you can use DeconstructQualifiedName() instead of\n> makeRangeVarFromNameList().\n\nI think it is enough to suppress namespace ACL error only. 
so I will use its function.\n\n> In the regression test, you are using \\gset and \\connect psql meta-commands to test\n> the user privilege to a namespace, but I think we can make this more simpler \n> by using SET SESSION AUTHORIZATION and RESET AUTHORIZATION.\n\nI forgot this SQL, I fixed to use it.\n\nBest regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_date": "Fri, 22 Mar 2019 09:45:09 +0900", "msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "Takuma Hoshiai <hoshiai@sraoss.co.jp> writes:\n> [ fix_to_reg_v2.patch ]\n\nI took a quick look through this patch. I'm on board with the goal\nof not having schema-access violations throw an error in these\nfunctions, but the implementation feels pretty ugly and bolted-on.\nNobody who had designed the code to do this from the beginning\nwould have chosen to parse the names twice, much less check the\nACLs twice which is what's going to happen in the success path.\n\nBut a worse problem is that this only fixes the issue for the\nobject name proper. 
to_regprocedure and to_regoperator also\nhave type name(s) to think about, and this doesn't fix the\nproblem for those:\n\nregression=# create schema s1;\nCREATE SCHEMA\nregression=# create type s1.d1 as enum('a','b');\nCREATE TYPE\nregression=# create function f1(s1.d1) returns s1.d1 language sql as\nregression-# 'select $1';\nCREATE FUNCTION\nregression=# select to_regprocedure('f1(s1.d1)');\n to_regprocedure \n-----------------\n f1(s1.d1)\n(1 row)\n\nregression=# create user joe;\nCREATE ROLE\nregression=# \\c - joe\nYou are now connected to database \"regression\" as user \"joe\".\nregression=> select to_regprocedure('f1(s1.d1)');\nERROR: permission denied for schema s1\n\n\nA closely related issue that's been complained of before is that\nwhile these functions properly return NULL when the main object\nname includes a nonexistent schema:\n\nregression=> select to_regprocedure('q1.f1(int)');\n to_regprocedure \n-----------------\n \n(1 row)\n\nit doesn't work for a nonexistent schema in a type name:\n\nregression=> select to_regprocedure('f1(q1.d1)');\nERROR: schema \"q1\" does not exist\n\n\nLooking at the back-traces for these failures,\n\n#0 errfinish (dummy=0) at elog.c:411\n#1 0x0000000000553626 in aclcheck_error (aclerr=<value optimized out>, \n objtype=OBJECT_SCHEMA, objectname=<value optimized out>) at aclchk.c:3623\n#2 0x000000000055028f in LookupExplicitNamespace (\n nspname=<value optimized out>, missing_ok=false) at namespace.c:2928\n#3 0x00000000005b3433 in LookupTypeName (pstate=0x0, typeName=0x20d87a0, \n typmod_p=0x7fff94c3ee38, missing_ok=<value optimized out>)\n at parse_type.c:162\n#4 0x00000000005b3b29 in parseTypeString (str=<value optimized out>, \n typeid_p=0x7fff94c3ee3c, typmod_p=0x7fff94c3ee38, missing_ok=false)\n at parse_type.c:822\n#5 0x000000000086fe21 in parseNameAndArgTypes (string=<value optimized out>, \n allowNone=false, names=<value optimized out>, nargs=0x7fff94c3f01c, \n argtypes=0x7fff94c3ee80) at regproc.c:1874\n#6 
0x0000000000870b2d in to_regprocedure (fcinfo=0x2134900) at regproc.c:305\n\n#0 errfinish (dummy=0) at elog.c:411\n#1 0x000000000054dc7b in get_namespace_oid (nspname=<value optimized out>, \n missing_ok=false) at namespace.c:3061\n#2 0x0000000000550230 in LookupExplicitNamespace (\n nspname=<value optimized out>, missing_ok=false) at namespace.c:2922\n#3 0x00000000005b3433 in LookupTypeName (pstate=0x0, typeName=0x216bd20, \n typmod_p=0x7fff94c3ee38, missing_ok=<value optimized out>)\n at parse_type.c:162\n#4 0x00000000005b3b29 in parseTypeString (str=<value optimized out>, \n typeid_p=0x7fff94c3ee3c, typmod_p=0x7fff94c3ee38, missing_ok=false)\n at parse_type.c:822\n#5 0x000000000086fe21 in parseNameAndArgTypes (string=<value optimized out>, \n allowNone=false, names=<value optimized out>, nargs=0x7fff94c3f01c, \n argtypes=0x7fff94c3ee80) at regproc.c:1874\n#6 0x0000000000870b2d in to_regprocedure (fcinfo=0x2170f50) at regproc.c:305\n\nit's not *that* far from where we know we need no-error behavior to\nwhere it's failing to happen. parseNameAndArgTypes isn't even global,\nso certainly changing its API is not problematic. For the functions\nbelow it, we'd have to decide whether it's okay to consider that\nmissing_ok=true also enables treating a schema access failure as\n\"missing\", or whether we should add an additional flag argument\nto decide that. 
It seems like it might be more flexible to use a\nseparate flag, but that decision could propagate to a lot of places,\nespecially if we conclude that all the namespace.c functions that\nexpose missing_ok also need to expose schema_access_violation_ok.\n\nSo I think you ought to drop this implementation and fix it properly\nby adjusting the behaviors of the functions cited above.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jul 2019 12:24:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "On Tue, 30 Jul 2019 12:24:13 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Takuma Hoshiai <hoshiai@sraoss.co.jp> writes:\n> > [ fix_to_reg_v2.patch ]\n> \n> I took a quick look through this patch. I'm on board with the goal\n> of not having schema-access violations throw an error in these\n> functions, but the implementation feels pretty ugly and bolted-on.\n> Nobody who had designed the code to do this from the beginning\n> would have chosen to parse the names twice, much less check the\n> ACLs twice which is what's going to happen in the success path.\n> \n> But a worse problem is that this only fixes the issue for the\n> object name proper. 
to_regprocedure and to_regoperator also\n> have type name(s) to think about, and this doesn't fix the\n> problem for those:\n> \n> regression=# create schema s1;\n> CREATE SCHEMA\n> regression=# create type s1.d1 as enum('a','b');\n> CREATE TYPE\n> regression=# create function f1(s1.d1) returns s1.d1 language sql as\n> regression-# 'select $1';\n> CREATE FUNCTION\n> regression=# select to_regprocedure('f1(s1.d1)');\n> to_regprocedure \n> -----------------\n> f1(s1.d1)\n> (1 row)\n> \n> regression=# create user joe;\n> CREATE ROLE\n> regression=# \\c - joe\n> You are now connected to database \"regression\" as user \"joe\".\n> regression=> select to_regprocedure('f1(s1.d1)');\n> ERROR: permission denied for schema s1\n> \n> \n> A closely related issue that's been complained of before is that\n> while these functions properly return NULL when the main object\n> name includes a nonexistent schema:\n> \n> regression=> select to_regprocedure('q1.f1(int)');\n> to_regprocedure \n> -----------------\n> \n> (1 row)\n> \n> it doesn't work for a nonexistent schema in a type name:\n> \n> regression=> select to_regprocedure('f1(q1.d1)');\n> ERROR: schema \"q1\" does not exist\n> \n> \n> Looking at the back-traces for these failures,\n> \n> #0 errfinish (dummy=0) at elog.c:411\n> #1 0x0000000000553626 in aclcheck_error (aclerr=<value optimized out>, \n> objtype=OBJECT_SCHEMA, objectname=<value optimized out>) at aclchk.c:3623\n> #2 0x000000000055028f in LookupExplicitNamespace (\n> nspname=<value optimized out>, missing_ok=false) at namespace.c:2928\n> #3 0x00000000005b3433 in LookupTypeName (pstate=0x0, typeName=0x20d87a0, \n> typmod_p=0x7fff94c3ee38, missing_ok=<value optimized out>)\n> at parse_type.c:162\n> #4 0x00000000005b3b29 in parseTypeString (str=<value optimized out>, \n> typeid_p=0x7fff94c3ee3c, typmod_p=0x7fff94c3ee38, missing_ok=false)\n> at parse_type.c:822\n> #5 0x000000000086fe21 in parseNameAndArgTypes (string=<value optimized out>, \n> allowNone=false, 
names=<value optimized out>, nargs=0x7fff94c3f01c, \n> argtypes=0x7fff94c3ee80) at regproc.c:1874\n> #6 0x0000000000870b2d in to_regprocedure (fcinfo=0x2134900) at regproc.c:305\n> \n> #0 errfinish (dummy=0) at elog.c:411\n> #1 0x000000000054dc7b in get_namespace_oid (nspname=<value optimized out>, \n> missing_ok=false) at namespace.c:3061\n> #2 0x0000000000550230 in LookupExplicitNamespace (\n> nspname=<value optimized out>, missing_ok=false) at namespace.c:2922\n> #3 0x00000000005b3433 in LookupTypeName (pstate=0x0, typeName=0x216bd20, \n> typmod_p=0x7fff94c3ee38, missing_ok=<value optimized out>)\n> at parse_type.c:162\n> #4 0x00000000005b3b29 in parseTypeString (str=<value optimized out>, \n> typeid_p=0x7fff94c3ee3c, typmod_p=0x7fff94c3ee38, missing_ok=false)\n> at parse_type.c:822\n> #5 0x000000000086fe21 in parseNameAndArgTypes (string=<value optimized out>, \n> allowNone=false, names=<value optimized out>, nargs=0x7fff94c3f01c, \n> argtypes=0x7fff94c3ee80) at regproc.c:1874\n> #6 0x0000000000870b2d in to_regprocedure (fcinfo=0x2170f50) at regproc.c:305\n> \n> it's not *that* far from where we know we need no-error behavior to\n> where it's failing to happen. parseNameAndArgTypes isn't even global,\n> so certainly changing its API is not problematic. For the functions\n> below it, we'd have to decide whether it's okay to consider that\n> missing_ok=true also enables treating a schema access failure as\n> \"missing\", or whether we should add an additional flag argument\n> to decide that. 
It seems like it might be more flexible to use a\n> separate flag, but that decision could propagate to a lot of places,\n> especially if we conclude that all the namespace.c functions that\n> expose missing_ok also need to expose schema_access_violation_ok.\n> \n> So I think you ought to drop this implementation and fix it properly\n> by adjusting the behaviors of the functions cited above.\n\nThank you for watching.\nSorry, I overlooked an access permission error of argument.\n\nbehavior of 'missing_ok = true' is changed have an impact on\nDROP TABLE IF EXISTS for example. So, I will consider to add an additonal\nflag like schema_access_violation_ok, instead of checking ACL twice.\n\n> \t\t\tregards, tom lane\n> \n\nBest Regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n", "msg_date": "Wed, 31 Jul 2019 14:52:31 +0900", "msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "On Wed, Jul 31, 2019 at 4:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Takuma Hoshiai <hoshiai@sraoss.co.jp> writes:\n> > [ fix_to_reg_v2.patch ]\n>\n> I took a quick look through this patch. I'm on board with the goal\n> of not having schema-access violations throw an error in these\n> functions, but the implementation feels pretty ugly and bolted-on.\n> Nobody who had designed the code to do this from the beginning\n> would have chosen to parse the names twice, much less check the\n> ACLs twice which is what's going to happen in the success path.\n>\n> But a worse problem is that this only fixes the issue for the\n> object name proper. 
to_regprocedure and to_regoperator also\n> have type name(s) to think about, and this doesn't fix the\n> problem for those:\n\n...\n\n> So I think you ought to drop this implementation and fix it properly\n> by adjusting the behaviors of the functions cited above.\n\nHello Hoshiai-san,\n\nBased on the above review, I have set this to 'Returned with\nfeedback'. If you plan to produce a new patch in time for the next\nCommitfest in September, please let me know very soon and I'll change\nit to 'Moved to next CF', but otherwise please feel free to create a\nnew entry when you are ready.\n\nThanks!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:21:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "On Thu, 1 Aug 2019 20:21:57 +1200\nThomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Jul 31, 2019 at 4:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Takuma Hoshiai <hoshiai@sraoss.co.jp> writes:\n> > > [ fix_to_reg_v2.patch ]\n> >\n> > I took a quick look through this patch. I'm on board with the goal\n> > of not having schema-access violations throw an error in these\n> > functions, but the implementation feels pretty ugly and bolted-on.\n> > Nobody who had designed the code to do this from the beginning\n> > would have chosen to parse the names twice, much less check the\n> > ACLs twice which is what's going to happen in the success path.\n> >\n> > But a worse problem is that this only fixes the issue for the\n> > object name proper. 
to_regprocedure and to_regoperator also\n> > have type name(s) to think about, and this doesn't fix the\n> > problem for those:\n> \n> ...\n> \n> > So I think you ought to drop this implementation and fix it properly\n> > by adjusting the behaviors of the functions cited above.\n> \n> Hello Hoshiai-san,\n> \n> Based on the above review, I have set this to 'Returned with\n> feedback'. If you plan to produce a new patch in time for the next\n> Commitfest in September, please let me know very soon and I'll change\n> it to 'Moved to next CF', but otherwise please feel free to create a\n> new entry when you are ready.\n\nYes, I plan to create next patch, Would you change it to 'Moved to next CF'?\n\nBest Regards,\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n> Thanks!\n> \n> -- \n> Thomas Munro\n> https://enterprisedb.com\n> \n\n\n\n\n", "msg_date": "Thu, 1 Aug 2019 17:42:09 +0900", "msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "On Thu, Aug 1, 2019 at 8:41 PM Takuma Hoshiai <hoshiai@sraoss.co.jp> wrote:\n> On Thu, 1 Aug 2019 20:21:57 +1200\n> Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Based on the above review, I have set this to 'Returned with\n> > feedback'. If you plan to produce a new patch in time for the next\n> > Commitfest in September, please let me know very soon and I'll change\n> > it to 'Moved to next CF', but otherwise please feel free to create a\n> > new entry when you are ready.\n>\n> Yes, I plan to create next patch, Would you change it to 'Moved to next CF'?\n\nDone! 
Thanks.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Aug 2019 20:54:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Aug 1, 2019 at 8:41 PM Takuma Hoshiai <hoshiai@sraoss.co.jp> wrote:\n>> On Thu, 1 Aug 2019 20:21:57 +1200\n>> Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> Based on the above review, I have set this to 'Returned with\n>>> feedback'. If you plan to produce a new patch in time for the next\n>>> Commitfest in September, please let me know very soon and I'll change\n>>> it to 'Moved to next CF', but otherwise please feel free to create a\n>>> new entry when you are ready.\n\n>> Yes, I plan to create next patch, Would you change it to 'Moved to next CF'?\n\n> Done! Thanks.\n\nNo new patch has appeared, so I'm not sure why this was in \"Needs review\"\nstate. I've set it to \"Waiting on Author\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Sep 2019 18:56:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to suppress errors thrown by to_reg*()" } ]
[ { "msg_contents": "Hi, hackers\n\nI want to improve the generic plan mechanism and speed up the UPDATE/DELETE planning of a table partitioned into thousands.\nHowever, I am not sure if this is realistic, so I would like advice.\n\nThe current generic plan creates access plans for all child tables without using the parameters specified in EXECUTE.\nAlso, inheritance_planner() creates access plans for all child tables after copying the RTEs of the parent to subroot.\n\nAs mentioned above, I think it is wasteful to create access plans for all child tables including child tables that may not be accessed in EXECUTE.\nTherefore, instead of creating an access plan for all child tables at one time, I think that it would be better to create an access plan for the child table at the time of EXECUTE execution and add it to the cached plans (plans of ModifyTable).\n\nAccording to my research, I know that there are two issues.\n\n#1 How to add a partially created plan to an existing cached plan\n#2 Which child table plan to create\n\n#1\nIn order to partially create a plan, it is necessary to consider which information is cached to create a plan in addition to the plan cache.\nAlso, it is necessary to consider how to manage cached plans.\n\n#2\nIf a parameter of EXECUTE is specified in boundParams of pg_queries(), a custom plan will be created, so it is necessary to use boundParams to narrow down the child tables for creating a plan.\n\nAt first I will try to work from #1.\n\nregards,\n\nSho Kato\n\n\n", "msg_date": "Thu, 14 Mar 2019 06:21:19 +0000", "msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "Improve the generic plan mechanism" } ]
[ { "msg_contents": "Hi,\n\nI'm curious why DestroyPartitionDirectory doesn't do\nhash_destroy(pdir->pdir_hash)?\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 14 Mar 2019 16:13:23 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "At Thu, 14 Mar 2019 16:13:23 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <3ad792cd-0805-858e-595c-c09e9d1ce042@lab.ntt.co.jp>\n> Hi,\n> \n> I'm curious why DestroyPartitionDirectory doesn't do\n> hash_destroy(pdir->pdir_hash)?\n\nMaybe it is trashed involved in destruction of es_query_cxt or\nplanner_cxt?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 16:32:25 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On 2019/03/14 16:32, Kyotaro HORIGUCHI wrote:\n> At Thu, 14 Mar 2019 16:13:23 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <3ad792cd-0805-858e-595c-c09e9d1ce042@lab.ntt.co.jp>\n>> Hi,\n>>\n>> I'm curious why DestroyPartitionDirectory doesn't do\n>> hash_destroy(pdir->pdir_hash)?\n> \n> Maybe it is trashed involved in destruction of es_query_cxt or\n> planner_cxt?\n\nHmm, the executor's partition directory (its hash table) is indeed\nallocated under es_query_cxt. But, the planner's partition directory is\nnot allocated under planner_cxt, it appears to be using memory under\nMessageContext.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 14 Mar 2019 16:46:47 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" 
}, { "msg_contents": "On 2019/03/14 16:46, Amit Langote wrote:\n> On 2019/03/14 16:32, Kyotaro HORIGUCHI wrote:\n>> At Thu, 14 Mar 2019 16:13:23 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <3ad792cd-0805-858e-595c-c09e9d1ce042@lab.ntt.co.jp>\n>>> Hi,\n>>>\n>>> I'm curious why DestroyPartitionDirectory doesn't do\n>>> hash_destroy(pdir->pdir_hash)?\n>>\n>> Maybe it is trashed involved in destruction of es_query_cxt or\n>> planner_cxt?\n> \n> Hmm, the executor's partition directory (its hash table) is indeed\n> allocated under es_query_cxt. But, the planner's partition directory is\n> not allocated under planner_cxt, it appears to be using memory under\n> MessageContext.\n\nI forgot to mention that it would be wrong to put it under planner_cxt, as\nit's the context for planning a given subquery, whereas a partition\ndirectory is maintained throughout the whole planning.\n\nThanks,\nAmit\n\n\n\n", "msg_date": "Thu, 14 Mar 2019 17:18:29 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "At Thu, 14 Mar 2019 17:18:29 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <e7bcef25-317b-75fc-cc63-34e1fbec514b@lab.ntt.co.jp>\n> On 2019/03/14 16:46, Amit Langote wrote:\n> > On 2019/03/14 16:32, Kyotaro HORIGUCHI wrote:\n> >> At Thu, 14 Mar 2019 16:13:23 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <3ad792cd-0805-858e-595c-c09e9d1ce042@lab.ntt.co.jp>\n> >>> Hi,\n> >>>\n> >>> I'm curious why DestroyPartitionDirectory doesn't do\n> >>> hash_destroy(pdir->pdir_hash)?\n> >>\n> >> Maybe it is trashed involved in destruction of es_query_cxt or\n> >> planner_cxt?\n> > \n> > Hmm, the executor's partition directory (its hash table) is indeed\n> > allocated under es_query_cxt. 
But, the planner's partition directory is\n> > not allocated under planner_cxt, it appears to be using memory under\n> > MessageContext.\n\nCurrentMemoryContext? It is PortalContext while planning CLUSTER\nscan. And it seems to be the same with planner_cxt with several\nnarrow exceptions..\n\nI think everything linked from PlannrInfo ought to be allocated\nin the top planner's planner_cxt if finally not expilcitly\ndelinked and I *believe* subquery's planner_cxt is always same\nwith that of the top-level subquery_planner.\n\n> I forgot to mention that it would be wrong to put it under planner_cxt, as\n> it's the context for planning a given subquery, whereas a partition\n> directory is maintained throughout the whole planning.\n\nSo if the ParitionDirectory is allocaed explicitly in\nMessageContext, it would be danger on CLUSTER command. But as I\nsee in the code, if it is in CurrentMemoryContext it would be\nsafe but I think it should be fixed to use planner_cxt.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 17:58:19 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On Thu, Mar 14, 2019 at 3:13 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> I'm curious why DestroyPartitionDirectory doesn't do\n> hash_destroy(pdir->pdir_hash)?\n\nWhat would be the point? It's more efficient to let context teardown\ntake care of it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 12:02:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 14, 2019 at 3:13 AM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> I'm curious why DestroyPartitionDirectory doesn't do\n>> hash_destroy(pdir->pdir_hash)?\n\n> What would be the point? It's more efficient to let context teardown\n> take care of it.\n\nAgreed, but the comments in this area are crap. Why doesn't\nCreatePartitionDirectory say something like\n\n * The object lives inside the given memory context and will be\n * freed when that context is destroyed. Nonetheless, the caller\n * must *also* ensure that (unless the transaction is aborted)\n * DestroyPartitionDirectory is called before that happens, else\n * we may leak some relcache reference counts.\n\nIt's completely not acceptable that every reader of this code should\nhave to reverse-engineer these design assumptions, especially given\nhow shaky they are.\n\nThere's an independent question as to whether the planner's use of\nthe feature is specifying a safe memory context. Has this code been\nexercised under GEQO?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 12:56:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "I wrote:\n> Agreed, but the comments in this area are crap.\n\nActually, now that I've absorbed a bit more about 898e5e329,\nI don't like very much about it at all. I think having it\ntry to hang onto pointers into the relcache is a completely\nwrongheaded design decision, and the right way for it to work\nis to just copy the PartitionDescs out of the relcache so that\nthey're fully owned by the PartitionDirectory. I don't see\na CopyPartitionDesc function anywhere (maybe it's named something\nelse?) 
but it doesn't look like it'd be hard to build; most\nof the work is in partition_bounds_copy() which does exist already.\n\nAlso, at least so far as the planner's usage is concerned, claiming\nthat we're saving something by not copying is completely bogus,\nbecause if we look into set_relation_partition_info, what do we\nfind but a partition_bounds_copy call. That wouldn't be necessary\nif we could rely on the PartitionDirectory to own the data structure.\n(Maybe it's not necessary today. But given what a house of cards\nthis is, I wouldn't propose ripping it out, just moving it into\nthe PartitionDirectory code.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 13:16:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On Thu, Mar 14, 2019 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Agreed, but the comments in this area are crap. Why doesn't\n> CreatePartitionDirectory say something like\n>\n> * The object lives inside the given memory context and will be\n> * freed when that context is destroyed. Nonetheless, the caller\n> * must *also* ensure that (unless the transaction is aborted)\n> * DestroyPartitionDirectory is called before that happens, else\n> * we may leak some relcache reference counts.\n>\n> It's completely not acceptable that every reader of this code should\n> have to reverse-engineer these design assumptions, especially given\n> how shaky they are.\n\nWell, one reason is that everything you just said is basically\nself-evident. 
If you spend 5 seconds looking at the header file,\nyou'll see that there is a CreatePartitionDirectory() function and a\nDestroyPartitionDirectory() function, and so you'll probably figure\nout that the latter is intended to be called rather than just ignored.\nYou will probably also guess that it doesn't need to be called if\nthere's an ERROR, just like basically everything else in PostgreSQL.\nAnd if you want to know why, you can look at the code and you\nshouldn't have any trouble determining that it releases relcache ref\ncounts, which may tip you off that if you don't call it, some relcache\nrefcounts will not be released.\n\nBut, look, I wrote the code. What's clear to me may not be clear to\neverybody. I have to admit I'm kinda surprised that this is the thing\nthat is confusing to anybody, but if it is, then sure, let's add the\ncomment!\n\n> There's an independent question as to whether the planner's use of\n> the feature is specifying a safe memory context. Has this code been\n> exercised under GEQO?\n\nProbably not. That's a good idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 13:31:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 14, 2019 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's completely not acceptable that every reader of this code should\n>> have to reverse-engineer these design assumptions, especially given\n>> how shaky they are.\n\n> Well, one reason is that everything you just said is basically\n> self-evident. 
If you spend 5 seconds looking at the header file,\n> you'll see that there is a CreatePartitionDirectory() function and a\n> DestroyPartitionDirectory() function, and so you'll probably figure\n> out that the latter is intended to be called rather than just ignored.\n> You will probably also guess that it doesn't need to be called if\n> there's an ERROR, just like basically everything else in PostgreSQL.\n> And if you want to know why, you can look at the code and you\n> shouldn't have any trouble determining that it releases relcache ref\n> counts, which may tip you off that if you don't call it, some relcache\n> refcounts will not be released.\n\nSo here's my problem with that argument: you're effectively saying that\nyou needn't write any API spec for the PartitionDirectory functions\nbecause you intend that every person calling them will read their code,\ncarefully and fully, before using them. This is not my idea of sound\nsoftware engineering. If you need me to spell out why not, I will do\nso, but I'd like to think that I needn't explain abstraction to a senior\ncommitter.\n\nBut anyway, it's somewhat moot because I think we need to change this\nAPI spec anyhow, per the other thread. PartitionDirectory should not\nbe holding on to relcache refcounts, which would make\nDestroyPartitionDirectory unnecessary.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 13:40:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On Thu, Mar 14, 2019 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Actually, now that I've absorbed a bit more about 898e5e329,\n> I don't like very much about it at all. 
I think having it\n> try to hang onto pointers into the relcache is a completely\n> wrongheaded design decision, and the right way for it to work\n> is to just copy the PartitionDescs out of the relcache so that\n> they're fully owned by the PartitionDirectory. I don't see\n> a CopyPartitionDesc function anywhere (maybe it's named something\n> else?) but it doesn't look like it'd be hard to build; most\n> of the work is in partition_bounds_copy() which does exist already.\n\nYeah, we could do that. I have to admit that I don't necessarily\nunderstand why trying to hang onto pointers into the relcache is a bad\nidea. It is a bit complicated, but the savings in both memory and CPU\ntime seem worth pursuing. There are a lot of users who wish we scaled\nto a million partitions rather than a hundred, and just copying\neverything all over the place all the time won't get us closer to that\ngoal.\n\nMore generally, I think get_relation_info() is becoming an\nincreasingly nasty piece of work. It copies more and more stuff\ninstead of just pointing to it, which is necessary mostly because it\ncloses the table instead of arranging to do that at the end of query\nplanning. If it did the opposite, the refcount held by the planner\nwould make it unnecessary for the PartitionDirectory to hold one, and\nI bet we could also just point to a bunch of the other stuff in this\nfunction rather than copying that stuff, too. As time goes by,\nrelcache entries are getting more and more complex, and the optimizer\nwants to use more and more data from them for planning purposes, but,\nprobably partly because of inertia, we're clinging to an old design\nwhere everything has to be copied. Every time someone gets that\nwrong, and it's happened a number of times, we yell at them and tell\nthem to copy more stuff instead of thinking up a design where stuff\ndoesn't need to be copied. I think that's a mistake. 
You have\npreviously disagreed with that position so you probably will now, too,\nbut I still think it.\n\n> Also, at least so far as the planner's usage is concerned, claiming\n> that we're saving something by not copying is completely bogus,\n> because if we look into set_relation_partition_info, what do we\n> find but a partition_bounds_copy call. That wouldn't be necessary\n> if we could rely on the PartitionDirectory to own the data structure.\n> (Maybe it's not necessary today. But given what a house of cards\n> this is, I wouldn't propose ripping it out, just moving it into\n> the PartitionDirectory code.)\n\nUgh, I didn't notice the partition_bounds_copy() call in that\nfunction. Oops. Given the foregoing griping, you won't be surprised\nto hear that I'd rather just remove the copying step. However, it\nsounds like we can do that no matter whether we stick with my design\nor switch to yours, because PartitionDirectory holds a relcache refcnt\nthen the pointer will be stable, and if we deep-copy the whole data\nstructure then the pointer will also be stable. Prior to the commit\nat issue, we weren't doing either of those things, so that copy was\nneeded until very recently.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 13:56:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On Thu, Mar 14, 2019 at 1:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So here's my problem with that argument: you're effectively saying that\n> you needn't write any API spec for the PartitionDirectory functions\n> because you intend that every person calling them will read their code,\n> carefully and fully, before using them. This is not my idea of sound\n> software engineering. 
If you need me to spell out why not, I will do\n> so, but I'd like to think that I needn't explain abstraction to a senior\n> committer.\n\nI think you're attacking a straw man. I expect you don't seriously\nbelieve that I lack an understanding of abstraction. However,\nabstraction doesn't mean that the comment for CreatePartitionDirectory\nmust describe what DestroyPartitionDirectory is going to do\ninternally, as you seem to be proposing. Had I thought about this\nissue more sooner, I think I would have guessed you would be opposed\nto such a comment, since the chances of someone neglecting to update\nit when changing DestroyPartitionDirectory seem to be non-negligible.\nAt the same time, and as I already said, I am also fine to improve the\ncomments for these functions. The fact that both you and Amit found\nthem inadequate - albeit in somewhat different ways - shows that they\nneed improvement. However, that doesn't mean that what I did was\nflagrantly unreasonable or that I'm full of crap. Those things may be\ntrue, but this isn't believable evidence of it.\n\nIf we're going to start talking about comments that make inadequate\nmention of important memory management details, I think a much better\nplace to start than anything that I did in this commit would be the\nget_relation_info() function we were just discussing in a different\npart of this email thread. 
As I said over there, people keep failing\nto understand that any data you want to use during query planning has\ngot to be copied out of the relcache in that function -- and this is a\npretty subtle hazard, actually, because it works fine unless you get a\ncache flush at the wrong time or build with CLOBBER_CACHE_ALWAYS.\nFailing to call DestroyPartitionDirectory() breaks in a much more\nobvious way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Thu, 14 Mar 2019 14:19:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On 2019/03/15 1:02, Robert Haas wrote:\n> On Thu, Mar 14, 2019 at 3:13 AM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> I'm curious why DestroyPartitionDirectory doesn't do\n>> hash_destroy(pdir->pdir_hash)?\n> \n> What would be the point? It's more efficient to let context teardown\n> take care of it.\n\nYeah, I only noticed that after posting my email.\n\nAs I said in another reply, while the executor's partition directory is\nset up and torn down under a dedicated memory context used for execution\n(es_query_context), planner's is stuck into MessageContext. But all of\nthe other stuff that planner allocates goes into it too, so maybe it's fine.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 18 Mar 2019 11:01:51 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" }, { "msg_contents": "On 2019/03/15 2:56, Robert Haas wrote:\n> On Thu, Mar 14, 2019 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually, now that I've absorbed a bit more about 898e5e329,\n>> I don't like very much about it at all. 
I think having it\n>> try to hang onto pointers into the relcache is a completely\n>> wrongheaded design decision, and the right way for it to work\n>> is to just copy the PartitionDescs out of the relcache so that\n>> they're fully owned by the PartitionDirectory. I don't see\n>> a CopyPartitionDesc function anywhere (maybe it's named something\n>> else?) but it doesn't look like it'd be hard to build; most\n>> of the work is in partition_bounds_copy() which does exist already.\n> \n> Yeah, we could do that. I have to admit that I don't necessarily\n> understand why trying to hang onto pointers into the relcache is a bad\n> idea. It is a bit complicated, but the savings in both memory and CPU\n> time seem worth pursuing. There are a lot of users who wish we scaled\n> to a million partitions rather than a hundred, and just copying\n> everything all over the place all the time won't get us closer to that\n> goal.\n> \n> More generally, I think get_relation_info() is becoming an\n> increasingly nasty piece of work. It copies more and more stuff\n> instead of just pointing to it, which is necessary mostly because it\n> closes the table instead of arranging to do that at the end of query\n> planning. If it did the opposite, the refcount held by the planner\n> would make it unnecessary for the PartitionDirectory to hold one, and\n> I bet we could also just point to a bunch of the other stuff in this\n> function rather than copying that stuff, too. As time goes by,\n> relcache entries are getting more and more complex, and the optimizer\n> wants to use more and more data from them for planning purposes, but,\n> probably partly because of inertia, we're clinging to an old design\n> where everything has to be copied. Every time someone gets that\n> wrong, and it's happened a number of times, we yell at them and tell\n> them to copy more stuff instead of thinking up a design where stuff\n> doesn't need to be copied. I think that's a mistake. 
You have\n> previously disagreed with that position so you probably will now, too,\n> but I still think it.\n\n+1.\n\n>> Also, at least so far as the planner's usage is concerned, claiming\n>> that we're saving something by not copying is completely bogus,\n>> because if we look into set_relation_partition_info, what do we\n>> find but a partition_bounds_copy call. That wouldn't be necessary\n>> if we could rely on the PartitionDirectory to own the data structure.\n>> (Maybe it's not necessary today. But given what a house of cards\n>> this is, I wouldn't propose ripping it out, just moving it into\n>> the PartitionDirectory code.)\n> \n> Ugh, I didn't notice the partition_bounds_copy() call in that\n> function. Oops. Given the foregoing griping, you won't be surprised\n> to hear that I'd rather just remove the copying step. However, it\n> sounds like we can do that no matter whether we stick with my design\n> or switch to yours, because PartitionDirectory holds a relcache refcnt\n> then the pointer will be stable, and if we deep-copy the whole data\n> structure then the pointer will also be stable. Prior to the commit\n> at issue, we weren't doing either of those things, so that copy was\n> needed until very recently.\n\nOne of the patches I've proposed in the \"speed up planning with\npartitions\" thread [1] gets rid of the partition_bounds_copy() call,\nbecause: 1) I too think it's unnecessary as the PartitionBoundInfo won't\nchange (logically or physically) as long as we've got some lock on the\ntable, which we do, 2) I've seen it become a significant bottleneck as the\nnumber of partitions crosses thousands.\n\nThanks,\nAmit\n\n[1] https://commitfest.postgresql.org/22/1778/\n\n\n", "msg_date": "Mon, 18 Mar 2019 11:21:57 +0900", "msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>", "msg_from_op": true, "msg_subject": "Re: why doesn't DestroyPartitionDirectory hash_destroy?" } ]
[ { "msg_contents": "Hi\n\nI'm afraid that PREPARE of ecpglib is not thread safe.\nThe following global variables are modified without any locking system.\nIs it unnecessary worry?\n\n [interfaces/ecpg/ecpglib/prepare.c]\n static int nextStmtID = 1;\n static stmtCacheEntry *stmtCacheEntries = NULL;\n static struct declared_statement *g_declared_list;\n\nRegards\nRyo Matsumura\n\n\n", "msg_date": "Thu, 14 Mar 2019 07:17:08 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Hello.\n\nAt Thu, 14 Mar 2019 07:17:08 +0000, \"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com> wrote in <03040DFF97E6E54E88D3BFEE5F5480F737AC390D@G01JPEXMBYT04>\n> Hi\n> \n> I'm afraid that PREPARE of ecpglib is not thread safe.\n> The following global variables are modified without any locking system.\n> Is it unnecessary worry?\n> \n> [interfaces/ecpg/ecpglib/prepare.c]\n> static int nextStmtID = 1;\n> static stmtCacheEntry *stmtCacheEntries = NULL;\n> static struct declared_statement *g_declared_list;\n\nA connection cannot be concurrently used by multiple threads so\nthe programmer must guard connections using mutex [1] or\nfriends. If it is done by a single mutex (I suppose it is\ncommon.), there's no race condition also on the prepared\nstatement storage. I'm not sure it is explicitly aimed but I\nsuppose that there's no problem in a common usage of the library.\n\n\n[1] https://www.postgresql.org/docs/current/ecpg-connect.html\n\n> If your application uses multiple threads of execution, they\n> cannot share a connection concurrently. 
You must either\n> explicitly control access to the connection (using mutexes) or\n> use a connection for each thread.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 17:05:24 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Horiguchi-san\n\nThank you for your comment.\n\n> A connection cannot be concurrently used by multiple threads so\n> the programmer must guard connections using mutex [1] or\n> friends. If it is done by a single mutex (I suppose it is\n> common.), there's no race condition also on the prepared\n> statement storage. I'm not sure it is explicitly aimed but I\n> suppose that there's no problem in a common usage of the library.\n\nI understand it, but current scope of StatementCache and DeclareStatementList seems not\nto be limitted within each connection, isn't it?\nTherefore, I thought the operation on them must be thread safe.\n\nFor example, scope of DescriptorList in descriptor.c is within thread (not connection)\nby using pthread_getspecific/ pthread_setspecific().\n\nRegards\nRyo Matsumura\n\n\n", "msg_date": "Thu, 14 Mar 2019 09:49:11 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "At Thu, 14 Mar 2019 09:49:11 +0000, \"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com> wrote in <03040DFF97E6E54E88D3BFEE5F5480F737AC3AD8@G01JPEXMBYT04>\n> Horiguchi-san\n> \n> Thank you for your comment.\n> \n> > A connection cannot be concurrently used by multiple threads so\n> > the programmer must guard connections using mutex [1] or\n> > friends. If it is done by a single mutex (I suppose it is\n> > common.), there's no race condition also on the prepared\n> > statement storage. 
I'm not sure it is explicitly aimed but I\n> > suppose that there's no problem in a common usage of the library.\n> \n> I understand it, but current scope of StatementCache and DeclareStatementList seems not\n> to be limitted within each connection, isn't it?\n\nYes, so I wrote that \"if it is done by a single mutex\". Feel free\nto fix that if it is still problematic:)\n\n> Therefore, I thought the operation on them must be thread safe.\n\nI'm not against that.\n\n> For example, scope of DescriptorList in descriptor.c is within thread (not connection)\n> by using pthread_getspecific/ pthread_setspecific().\n\nIt seems like a local cache of server-side data, which is similar\nto catcache on server side for each process. I don't think\nprepared statements is in that category. A prepared statement is\nbonded to a connection, not to a thread. Different threads can\nexecute the same prepared statement on the same connection.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Mar 2019 19:55:53 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Hi Horiguchi-san, Kuroda-san\n\nHoriguchi-san, thank you for your comment.\n\nI have a question.\nA bug of StatementCache is occurred in previous versions.\nShould a patch be separated?\n\n> Horiguchi-san wrote:\n> It seems like a local cache of server-side data, which is similar\n> to catcache on server side for each process. \n\nI agree.\nI will fix it with using pthread_setspecific like descriptor.c.\n\n> I don't think\n> prepared statements is in that category. A prepared statement is\n> bonded to a connection, not to a thread. 
Different threads can\n> execute the same prepared statement on the same connection.\n\nA namespace of declared statement is not connection independent.\nTherefore, we must manage the namespce in global and consider about race condition.\nFor example, ecpglib must refer the information of (A) when ecpglib executes (B)\nin order to occur \"double declare\" error.\n\n (A) exec sql at conn1 declare st1 statement;\n (B) exec sql at conn2 declare st1 statement;\n\n // If ecpglib didn't reject the above, ecpglib cannot judge\n // which connection the followings should be executed on.\n exec sql prepare st1 from \"select 1\";\n exec sql execute st1;\n\nKuroda-san, is it right?\nIf it's right, I will fix it with using pthread_lock.\n\nRegards\nRyo Matsumura\n\n\n", "msg_date": "Fri, 15 Mar 2019 05:27:01 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "At Fri, 15 Mar 2019 05:27:01 +0000, \"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com> wrote in <03040DFF97E6E54E88D3BFEE5F5480F737AC3F24@G01JPEXMBYT04>\n> Hi Horiguchi-san, Kuroda-san\n> \n> Horiguchi-san, thank you for your comment.\n> \n> I have a question.\n> A bug of StatementCache is occurred in previous versions.\n> Should a patch be separated?\n> \n> > Horiguchi-san wrote:\n> > It seems like a local cache of server-side data, which is similar\n> > to catcache on server side for each process. \n> \n> I agree.\n> I will fix it with using pthread_setspecific like descriptor.c.\n> \n> > I don't think\n> > prepared statements is in that category. A prepared statement is\n> > bonded to a connection, not to a thread. 
Different threads can\n> > execute the same prepared statement on the same connection.\n> \n> A namespace of declared statement is not connection independent.\n> Therefore, we must manage the namespce in global and consider about race condition.\n\nRight, and but thread independent.\n\n> For example, ecpglib must refer the information of (A) when ecpglib executes (B)\n> in order to occur \"double declare\" error.\n> \n> (A) exec sql at conn1 declare st1 statement;\n> (B) exec sql at conn2 declare st1 statement;\n\nOn an interactive SQL environment like psql, we can declar\npareared statements with the same name on different\nconnections. Do you mean you are going to implement different way\non ECPG? Actually the current ECPGprepare seems already managing\nprepared statements separately for each connections. This is\nnaturally guarded by per-connection concurrencly control that\napplications should do.\n\n> this = ecpg_find_prepared_statement(name, con, &prev);\n\nWhat you showed at the beginning of this thread was the sutff for\nauto prepare, the name of which is generated using the global\nvariable nextStmtID and stored into global stmtCacheEntries. They\nare not guarded at all and seem to need multithread-protection.\n\n> // If ecpglib didn't reject the above, ecpglib cannot judge\n> // which connection the followings should be executed on.\n> exec sql prepare st1 from \"select 1\";\n> exec sql execute st1;\n\nI'm not sure about ECPG, but according to the documentation, the\nfollowing statements should work correctly.\n\n SQL SET CONNECTION con1;\n EXEC SQL PREPARE st1 FROM \"select 1\";\n EXEC SQL EXECUTE st1;\n\nshould succeed and executed on con1.\n\n> Kuroda-san, is it right?\n> If it's right, I will fix it with using pthread_lock.\n\nMmm. Are you saying that prepared statements on ECPG should have\nnames in global namespace and EXECUTE should implicitly choose\nthe underlying connection automatically from the name of a\nprepared statement? 
I don't think it is the right direction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Mar 2019 15:33:50 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Oops.\n\nAt Fri, 15 Mar 2019 15:33:50 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190315.153350.226491548.horiguchi.kyotaro@lab.ntt.co.jp>\n> > // If ecpglib didn't reject the above, ecpglib cannot judge\n> > // which connection the followings should be executed on.\n> > exec sql prepare st1 from \"select 1\";\n> > exec sql execute st1;\n> \n> I'm not sure about ECPG, but according to the documentation, the\n> following statements should work correctly.\n> \n> SQL SET CONNECTION con1;\n\nOf course this is missing prefixing \"EXEC\".\n\n> EXEC SQL PREPARE st1 FROM \"select 1\";\n> EXEC SQL EXECUTE st1;\n> \n> should succeed and executed on con1.\n> \n> > Kuroda-san, is it right?\n> > If it's right, I will fix it with using pthread_lock.\n> \n> Mmm. Are you saying that prepared statements on ECPG should have\n> names in global namespace and EXECUTE should implicitly choose\n> the underlying connection automatically from the name of a\n> prepared statement? I don't think it is the right direction.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Mar 2019 15:37:30 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Horiguchi-san, Kuroda-san\n\n> Horiguchi-san wrote:\n> > A namespace of declared statement is not connection independent.\n> > Therefore, we must manage the namespce in global and consider about race condition.\n> \n> Right, and but thread independent.\n\nI was wrong. 
I understand that DECLARE STATEMENT should be same policy as the combination of PREPARE STATEMENT and SET CONNECTION.\nWe should fix the current implementation of DECLARE STATEMENT.\n\nCurrent:\n t1:Thread1: exec sql at conn1 declare st1 statement;\n t2:Thread2: exec sql at conn2 declare st1 statement; // NG\n\nToBe:\n t1:Thread1: exec sql at conn1 declare st1 statement;\n t2:Thread2: exec sql at conn2 declare st1 statement; // OK\n t3:Thread2: exec sql prepared st1 from \"select 1\"; // OK: prepared on conn2\n t4:Thread1: exec sql execute st1; // NG: not prepared\n t5:Thread2: exec sql execute st1; // OK: executed on conn2\n\n t1:Thread1: exec sql at conn1 declare st1 statement;\n t2:Thread1: exec sql at conn2 declare st1 statement; // NG\n\nRegards\nRyo Matsumura\n\n\n", "msg_date": "Fri, 15 Mar 2019 07:43:47 +0000", "msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Is PREPARE of ecpglib thread safe?" }, { "msg_contents": "Dear Matsumura-san, Horiguchi-san,\n\nWe should check the specfication of Pro*c before coding\nbecause this follows its feature.\nI'll test some cases and send you a simple report.\n\nBest Regards,\nHayato Kuroda\nFujitsu LIMITED\n\n\n\n\n", "msg_date": "Fri, 15 Mar 2019 08:19:14 +0000", "msg_from": "\"Kuroda, Hayato\" <kuroda.hayato@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Is PREPARE of ecpglib thread safe?" 
}, { "msg_contents": "Dear all,\n\nI apologize for my late reply.\nI realized that the current implementation could not imitate the oracle's precompiler.\nThe attached example can be accepted by both precompilers, but building the c file made by ecpg fails.\n\nFor handling this source, we have to refactor some sources related with DECLARE STATEMENT.\nMy draft amendment is converting the sentence from executable to declarative, that is:\n\n* change to operate only if a pgc file is precompiled\n* remove related code from ecpglib directory\n\nIn this case, the namespace of a SQL identifier is file independent, and\nsources becomes more simple.\n\nI will start making patches.\nDo you have any comments or suggestions?\n\n\nBest Regards.\nHayato Kuroda\nFujitsu LIMITED", "msg_date": "Thu, 13 Jun 2019 05:53:19 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Is PREPARE of ecpglib thread safe?" } ]
[ { "msg_contents": "It seems this is a leftover from commit 578b229718e8. Patch attached.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 14 Mar 2019 15:49:40 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "outdated reference to tuple header OIDs" }, { "msg_contents": "Hi,\n\nOn 2019-03-14 15:49:40 +0800, John Naylor wrote:\n> It seems this is a leftover from commit 578b229718e8. Patch attached.\n\nThanks for noticing. Pushed.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Mon, 18 Mar 2019 13:16:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: outdated reference to tuple header OIDs" } ]
[ { "msg_contents": "Hello Moodle Team,\n\nI am a student who is currently take computer science classes at Glendale Community College, AZ who wants to take this new knowledge to the next. Since I’m so new to programming and software development I need some hands on experience beyond academic curriculum. So please let me know if you have google summer of code internships available. As well how may I apply?\n\nSincerely,\nKelvin Amoako\n\nSent from Mail for Windows 10\n\n\nHello Moodle Team, I am  a student who is currently take computer science classes at Glendale Community College, AZ who wants to take this new knowledge to the next. Since I’m so new to programming and software development I need some hands on experience beyond academic curriculum. So please let me know if you have google summer of code internships available. As well how may I apply? Sincerely,Kelvin Amoako Sent from Mail for Windows 10", "msg_date": "Thu, 14 Mar 2019 04:10:17 -0700", "msg_from": "Auguste Comte <minasiedu@gmail.com>", "msg_from_op": true, "msg_subject": "Google Summer of Code" }, { "msg_contents": "Greetings,\n\n* Auguste Comte (minasiedu@gmail.com) wrote:\n> Hello Moodle Team,\n\nThis isn't the Moodle team, I'm guessing you sent this to the wrong\nplace or at least forgot to update your email.\n\n> I am a student who is currently take computer science classes at Glendale Community College, AZ who wants to take this new knowledge to the next. Since I’m so new to programming and software development I need some hands on experience beyond academic curriculum. So please let me know if you have google summer of code internships available. As well how may I apply?\n\nInformation about GSoC for PostgreSQL is available here:\n\nhttps://wiki.postgresql.org/wiki/GSoC\n\nThanks!\n\nStephen", "msg_date": "Mon, 18 Mar 2019 02:25:31 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Google Summer of Code" } ]
[ { "msg_contents": "Hi,\n\nWhile running sqlsmith against 12devel, got the the following \nassertion-  (issue is reproducible  on v10/v11 as well)\n\nTRAP: FailedAssertion(\"!(bms_is_subset(appendrel->lateral_relids, \nrequired_outer))\", File: \"relnode.c\", Line: 1521)\n\nstack trace -\n\n#0  0x00007f2a2f349277 in raise () from /lib64/libc.so.6\n#1  0x00007f2a2f34a968 in abort () from /lib64/libc.so.6\n#2  0x0000000000893727 in ExceptionalCondition \n(conditionName=conditionName@entry=0xa44ae8 \n\"!(bms_is_subset(appendrel->lateral_relids, required_outer))\",\n     errorType=errorType@entry=0x8e1de9 \"FailedAssertion\", \nfileName=fileName@entry=0xa441e4 \"relnode.c\", \nlineNumber=lineNumber@entry=1521) at assert.c:54\n#3  0x00000000006f2e0c in get_appendrel_parampathinfo \n(appendrel=appendrel@entry=0x7f2a300e0b10, \nrequired_outer=required_outer@entry=0x0) at relnode.c:1521\n#4  0x00000000006e7d1d in create_append_path (root=root@entry=0x0, \nrel=rel@entry=0x7f2a300e0b10, subpaths=subpaths@entry=0x0, \npartial_subpaths=partial_subpaths@entry=0x0,\n     required_outer=required_outer@entry=0x0, \nparallel_workers=parallel_workers@entry=0, \nparallel_aware=parallel_aware@entry=false, \npartitioned_rels=partitioned_rels@entry=0x0,\n     rows=rows@entry=-1) at pathnode.c:1239\n#5  0x00000000006a7fc7 in set_dummy_rel_pathlist \n(rel=rel@entry=0x7f2a300e0b10) at allpaths.c:1976\n#6  0x00000000006a95e9 in set_subquery_pathlist (rte=<optimized out>, \nrti=2, rel=0x7f2a300e0b10, root=0x29975e0) at allpaths.c:2162\n#7  set_rel_size (root=root@entry=0x29975e0, \nrel=rel@entry=0x7f2a300e0b10, rti=rti@entry=2, rte=<optimized out>) at \nallpaths.c:422\n#8  0x00000000006ab1ed in set_base_rel_sizes (root=<optimized out>) at \nallpaths.c:324\n#9  make_one_rel (root=root@entry=0x29975e0, \njoinlist=joinlist@entry=0x7f2a300e1e88) at allpaths.c:186\n#10 0x00000000006cad5d in query_planner (root=root@entry=0x29975e0, \ntlist=tlist@entry=0x7f2a300e0608, 
qp_callback=qp_callback@entry=0x6cb920 \n<standard_qp_callback>,\n     qp_extra=qp_extra@entry=0x7ffc93d63d20) at planmain.c:265\n#11 0x00000000006cf3ac in grouping_planner (root=root@entry=0x29975e0, \ninheritance_update=inheritance_update@entry=false, \ntuple_fraction=<optimized out>, tuple_fraction@entry=0)\n     at planner.c:1933\n#12 0x00000000006d1b85 in subquery_planner (glob=glob@entry=0x29beb88, \nparse=parse@entry=0x29be638, parent_root=parent_root@entry=0x0, \nhasRecursion=hasRecursion@entry=false,\n     tuple_fraction=tuple_fraction@entry=0) at planner.c:1001\n#13 0x00000000006d2e36 in standard_planner (parse=0x29be638, \ncursorOptions=256, boundParams=0x0) at planner.c:417\n#14 0x000000000078793d in pg_plan_query \n(querytree=querytree@entry=0x29be638, \ncursorOptions=cursorOptions@entry=256, \nboundParams=boundParams@entry=0x0) at postgres.c:878\n#15 0x0000000000787a1e in pg_plan_queries (querytrees=<optimized out>, \ncursorOptions=cursorOptions@entry=256, \nboundParams=boundParams@entry=0x0) at postgres.c:968\n#16 0x0000000000787eba in exec_simple_query (\n     query_string=0x29979f8 \"select\\n  subq_1.c5 as c0\\nfrom\\n \npg_catalog.pg_description as ref_0,\\n  lateral (select\\n subq_0.c11 as \nc0,\\n        ref_0.objsubid as c1,\\n ref_0.classoid as c2,\\n        \nref_2.action_referen\"...) 
at postgres.c:1143\n#17 0x00000000007890d2 in PostgresMain (argc=<optimized out>, \nargv=argv@entry=0x29c1300, dbname=0x29c1140 \"postgres\", \nusername=<optimized out>) at postgres.c:4256\n#18 0x000000000047cee2 in BackendRun (port=<optimized out>, \nport=<optimized out>) at postmaster.c:4399\n#19 BackendStartup (port=0x29b9120) at postmaster.c:4090\n#20 ServerLoop () at postmaster.c:1703\n#21 0x00000000007105ff in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0x2991ce0) at postmaster.c:1376\n#22 0x000000000047e083 in main (argc=3, argv=0x2991ce0) at main.c:228\n(gdb) q\n\nQuery -\n====\n\nselect\n   subq_1.c5 as c0\nfrom\n   pg_catalog.pg_description as ref_0,\n   lateral (select\n         subq_0.c11 as c0,\n         ref_0.objsubid as c1,\n         ref_0.classoid as c2,\n         ref_2.action_reference_new_row as c3,\n         ref_4.opfmethod as c4,\n         pg_catalog.pg_postmaster_start_time() as c5\n       from\n         pg_catalog.pg_prepared_statements as ref_1\n             inner join information_schema.triggers as ref_2\n               left join pg_catalog.pg_subscription_rel as ref_3\n               on ((select idx_scan from pg_catalog.pg_stat_all_indexes \nlimit 1 offset 6)\n                      > (select idx_blks_read from \npg_catalog.pg_statio_sys_indexes limit 1 offset 1)\n                     )\n             on (cast(null as macaddr) <= cast(null as macaddr))\n           left join pg_catalog.pg_opfamily as ref_4\n           on (ref_0.description is NULL),\n         lateral (select\n               ref_2.trigger_schema as c0,\n               ref_2.action_reference_old_row as c1,\n               ref_4.opfmethod as c2,\n               ref_1.statement as c3,\n               ref_0.objoid as c4,\n               ref_2.action_reference_old_table as c5,\n               ref_2.event_object_catalog as c6,\n               ref_2.event_object_schema as c7,\n               ref_2.trigger_catalog as c8,\n               ref_0.description as c9,\n               
ref_4.opfname as c10,\n               sample_0.stxname as c11\n             from\n               pg_catalog.pg_statistic_ext as sample_0 tablesample \nsystem (2.7)\n             where cast(null as point) @ cast(null as polygon)\n             limit 105) as subq_0\n       where (true)\n         and ((ref_4.opfname is NULL)\n           and (cast(null as int8) < ref_0.objsubid))) as subq_1\nwhere subq_1.c5 is not NULL\nlimit 79;\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 Mar 2019 20:23:26 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[sqlsmith] Failed assertion at relnode.c" }, { "msg_contents": "On Thu, Mar 14, 2019 at 11:53 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> While running sqlsmith against 12devel, got the the following\n> assertion- (issue is reproducible on v10/v11 as well)\n>\n> TRAP: FailedAssertion(\"!(bms_is_subset(appendrel->lateral_relids,\n> required_outer))\", File: \"relnode.c\", Line: 1521)\n>\n> stack trace -\n>\n> #0 0x00007f2a2f349277 in raise () from /lib64/libc.so.6\n> #1 0x00007f2a2f34a968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000893727 in ExceptionalCondition\n> (conditionName=conditionName@entry=0xa44ae8\n> \"!(bms_is_subset(appendrel->lateral_relids, required_outer))\",\n> errorType=errorType@entry=0x8e1de9 \"FailedAssertion\",\n> fileName=fileName@entry=0xa441e4 \"relnode.c\",\n> lineNumber=lineNumber@entry=1521) at assert.c:54\n> #3 0x00000000006f2e0c in get_appendrel_parampathinfo\n> (appendrel=appendrel@entry=0x7f2a300e0b10,\n> required_outer=required_outer@entry=0x0) at relnode.c:1521\n> #4 0x00000000006e7d1d in create_append_path (root=root@entry=0x0,\n> rel=rel@entry=0x7f2a300e0b10, subpaths=subpaths@entry=0x0,\n> partial_subpaths=partial_subpaths@entry=0x0,\n> required_outer=required_outer@entry=0x0,\n> parallel_workers=parallel_workers@entry=0,\n> 
parallel_aware=parallel_aware@entry=false,\n> partitioned_rels=partitioned_rels@entry=0x0,\n> rows=rows@entry=-1) at pathnode.c:1239\n> #5 0x00000000006a7fc7 in set_dummy_rel_pathlist\n> (rel=rel@entry=0x7f2a300e0b10) at allpaths.c:1976\n\nLooks same as bug #15694 that Tom seems to be taking care of.\n\nThanks,\nAmit\n\n", "msg_date": "Fri, 15 Mar 2019 00:06:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [sqlsmith] Failed assertion at relnode.c" }, { "msg_contents": "tushar <tushar.ahuja@enterprisedb.com> writes:\n> While running sqlsmith against 12devel, got the the following \n> assertion-  (issue is reproducible  on v10/v11 as well)\n> TRAP: FailedAssertion(\"!(bms_is_subset(appendrel->lateral_relids, \n> required_outer))\", File: \"relnode.c\", Line: 1521)\n\nDoesn't crash for me, but that's probably because I just fixed bug #15694,\nwhich looks like the same thing. Thanks for the report though!\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 14 Mar 2019 12:07:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [sqlsmith] Failed assertion at relnode.c" } ]
[ { "msg_contents": "Hi,\n\nThere seems to be a bug in pgbench when used with '-R' option, resulting\nin stuck pgbench process. Reproducing it is pretty easy:\n\necho 'select 1' > select.sql\n\nwhile /bin/true; do\n pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;\n date;\ndone;\n\nI do get a stuck pgbench within a few rounds (~10-20), but YMMV. It gets\nstuck like this (this is on REL_11_STABLE):\n\n0x00007f3b1814fcb3 in __GI___select (nfds=nfds@entry=0,\nreadfds=readfds@entry=0x7fff9d668ec0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x0)\n at ../sysdeps/unix/sysv/linux/select.c:41\n41 return SYSCALL_CANCEL (select, nfds, readfds, writefds, exceptfds,\n(gdb) bt\n#0 0x00007f3b1814fcb3 in __GI___select (nfds=nfds@entry=0,\nreadfds=readfds@entry=0x7fff9d668ec0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x0)\n at ../sysdeps/unix/sysv/linux/select.c:41\n#1 0x000000000040834d in threadRun (arg=arg@entry=0x1c9f370) at\npgbench.c:5810\n#2 0x0000000000403e0a in main (argc=<optimized out>, argv=<optimized\nout>) at pgbench.c:5583\n\nAll the connections are already closed at this point (there is nothing\nin pg_stat_activity and log_disconnections=on confirms that), and then\nthere's this:\n\n(gdb) p remains\n$1 = 0\n(gdb) p nstate\n$1 = 1\n(gdb) p state[0]\n$2 = {con = 0x0, id = 0, state = CSTATE_FINISHED, cstack = 0xf21b10,\nuse_file = 0, command = 1, variables = 0xf2b0b0, nvariables = 4,\nvars_sorted = false, txn_scheduled = 699536519281, sleep_until = 0,\ntxn_begin = {tv_sec = 699536, tv_nsec = 518478603}, stmt_begin = {tv_sec\n= 0, tv_nsec = 0}, prepared = {false <repeats 128 times>}, cnt = 132,\necnt = 0}\n\nSo I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\none of the other commits touching this part of the code.\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", 
"msg_date": "Thu, 14 Mar 2019 22:10:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "seems like a bug in pgbench -R" }, { "msg_contents": "\n> echo 'select 1' > select.sql\n>\n> while /bin/true; do\n> pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;\n> date;\n> done;\n\nIndeed. I'll look at it over the weekend.\n\n> So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\n> one of the other commits touching this part of the code.\n\nYep, possibly.\n\n-- \nFabien.\n\n", "msg_date": "Fri, 15 Mar 2019 15:44:42 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seems like a bug in pgbench -R" }, { "msg_contents": "\n>> echo 'select 1' > select.sql\n>> \n>> while /bin/true; do\n>> pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;\n>> date;\n>> done;\n>\n> Indeed. I'll look at it over the weekend.\n>\n>> So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\n>> one of the other commits touching this part of the code.\n\nI could not reproduce this issue on head, but I confirm on 11.2.\n\n-- \nFabien.\n\n", "msg_date": "Fri, 15 Mar 2019 17:16:50 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seems like a bug in pgbench -R" }, { "msg_contents": "On 3/15/19 5:16 PM, Fabien COELHO wrote:\n> \n>>> echo 'select 1' > select.sql\n>>>\n>>> while /bin/true; do\n>>>  pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;\n>>>  date;\n>>> done;\n>>\n>> Indeed. 
I'll look at it over the weekend.\n>>\n>>> So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\n>>> one of the other commits touching this part of the code.\n> \n> I could not reproduce this issue on head, but I confirm on 11.2.\n> \n\nAFAICS on head it's fixed by 3bac77c48f166b9024a5ead984df73347466ae12\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Fri, 15 Mar 2019 23:05:28 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: seems like a bug in pgbench -R" }, { "msg_contents": "Hello Tomas,\n\n>>>> So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\n>>>> one of the other commits touching this part of the code.\n>>\n>> I could not reproduce this issue on head, but I confirm on 11.2.\n>>\n>\n> AFAICS on head it's fixed by 3bac77c48f166b9024a5ead984df73347466ae12\n\nThanks for the information.\n\nI pinpointed the exact issue in one go: no surprise given that the patch \nwas motivated by cleaning up this kind of external state machine changes \nwhich I thought doubtful and error prone.\n\nAttached is a fix to apply on pg11.\n\n-- \nFabien.", "msg_date": "Sat, 16 Mar 2019 11:13:58 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: seems like a bug in pgbench -R" }, { "msg_contents": "Hi Fabien,\n\nOn Fri, Mar 15, 2019 at 4:17 PM, Fabien COELHO wrote:\n> >> echo 'select 1' > select.sql\n> >> \n> >> while /bin/true; do\n> >> pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;\n> >> date;\n> >> done;\n> >\n> > Indeed. 
I'll look at it over the weekend.\n> >\n> >> So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or\n> >> one of the other commits touching this part of the code.\n> \n> I could not reproduce this issue on head, but I confirm on 11.2.\n\nI could reproduce the stuck on 11.4.\n\nOn Sat, Mar 16, 2019 at 10:14 AM, Fabien COELHO wrote:\n> Attached is a fix to apply on pg11.\n\nI confirm the stuck doesn't happen after applying your patch.\n\nIt passes make check-world.\n\nThis change seems not to affect performance, so I didn't do any performance\ntest.\n\n> +\t\t/* under throttling we may have finished the last client above */\n> +\t\tif (remains == 0)\n> +\t\t\tbreak;\n\nIf there are only CSTATE_WAIT_RESULT, CSTATE_SLEEP or CSTATE_THROTTLE clients,\na thread needs to wait the results or sleep. In that logic, there are the case\nthat a thread tried to wait the results when there are no clients wait the\nresults, and this causes the issue. This is happened when there are only\nCSTATE_THROTLE clients and pgbench timeout is occured. 
Those clients will be\nfinished and \"remains\" will be 0.\n\nI confirmed above codes prevent such a case.\n\n\nI almost think this is ready for committer, but I have one question.\n\nIs it better adding any check like if(maxsock != -1) before the select?\n\n\nelse /* no explicit delay, select without timeout */\n{\n nsocks = select(maxsock + 1, &input_mask, NULL, NULL, NULL);\n}\n\n--\nYoshikazu Imai\n\n\n", "msg_date": "Wed, 24 Jul 2019 10:09:51 +0000", "msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: seems like a bug in pgbench -R" }, { "msg_contents": "\nHello Yoshikazu,\n\n>> I could not reproduce this issue on head, but I confirm on 11.2.\n>\n> I could reproduce the stuck on 11.4.\n>\n>> Attached is a fix to apply on pg11.\n>\n> I confirm the stuck doesn't happen after applying your patch.\n\nOk, thanks for the feedback.\n\n>> +\t\t/* under throttling we may have finished the last client above */\n>> +\t\tif (remains == 0)\n>> +\t\t\tbreak;\n>\n> If there are only CSTATE_WAIT_RESULT, CSTATE_SLEEP or CSTATE_THROTTLE clients,\n> a thread needs to wait the results or sleep. In that logic, there are the case\n> that a thread tried to wait the results when there are no clients wait the\n> results, and this causes the issue. This is happened when there are only\n> CSTATE_THROTLE clients and pgbench timeout is occured. Those clients will be\n> finished and \"remains\" will be 0.\n>\n> I confirmed above codes prevent such a case.\n\nYep.\n\n> I almost think this is ready for committer,\n\nAlmost great, then.\n\n> but I have one question. 
Is it better adding any check like if(maxsock \n> != -1) before the select?\n>\n> else /* no explicit delay, select without timeout */\n> {\n> nsocks = select(maxsock + 1, &input_mask, NULL, NULL, NULL);\n> }\n\nI think that it is not necessary because this case cannot happen: If some \nclients are still running (remains > 0), they are either sleeping, in \nwhich case there would be a timeout, or they are waiting for something \nfrom the server, otherwise the script could be advanced further so there \nwould be something else to do for the thread.\n\nWe could check this by adding \"Assert(maxsock != -1);\" before this select, \nbut I would not do that for a released version.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 24 Jul 2019 19:02:27 +0000 (GMT)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "RE: seems like a bug in pgbench -R" }, { "msg_contents": "On Wed, July 24, 2019 at 7:02 PM, Fabien COELHO wrote:\n> > but I have one question. Is it better adding any check like if(maxsock\n> > != -1) before the select?\n> >\n> > else /* no explicit delay, select without timeout */\n> > {\n> > nsocks = select(maxsock + 1, &input_mask, NULL, NULL, NULL); }\n> \n> I think that it is not necessary because this case cannot happen: If some\n> clients are still running (remains > 0), they are either sleeping, in\n> which case there would be a timeout, or they are waiting for something\n> from the server, otherwise the script could be advanced further so there\n> would be something else to do for the thread.\n\nAh, I understand.\n\n> We could check this by adding \"Assert(maxsock != -1);\" before this select,\n> but I would not do that for a released version.\n\nYeah I also imagined that we can use Assert, but ah, it's released version.\nI got it. 
Thanks for telling that.\n\nSo I'll mark this ready for committer.\n\n--\nYoshikazu Imai\n\n\n\n", "msg_date": "Thu, 25 Jul 2019 00:01:57 +0000", "msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: seems like a bug in pgbench -R" }, { "msg_contents": "\nHello Yoshikazu,\n\n> |...] So I'll mark this ready for committer.\n\nOk, thanks for the review.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 25 Jul 2019 06:27:40 +0000 (GMT)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "RE: seems like a bug in pgbench -R" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> |...] So I'll mark this ready for committer.\n\n> Ok, thanks for the review.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2019 15:17:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seems like a bug in pgbench -R" } ]